traffic, both cars and lorries, between the north (Norrland) and south of Sweden or beyond. From Haparanda on the Finnish border, it stretches south along the Gulf of Bothnia to Gävle, then on a more inland route southwards. It ends in Helsingborg in Sweden, at the port for the ferry to Helsingør in Denmark. The route intersects with European route E6 just outside Helsingborg, which continues to Trelleborg on the southern coast of Sweden. History and naming Under the new system of European routes it was planned to have been a part of E55, but it remains in the pre-1992 designation (E4) within Sweden, because the expenses connected with re-signing this long road portion would be too great. Besides the signs along the road, there are thousands of signs, especially in cities, showing how to
reach the E4 road. The road is now fully authorised as E4 by the relevant authority, not as E55. Route North of Gävle the road is of mixed standard. Depending on the fashion at the time of construction it is either a single standard carriageway road, usually wide, or a 2+1 road, a wide road with two lanes in one direction and one in the other with a steel wire barrier in between, or sometimes a motorway with two lanes in each direction. North of Sundsvall, the road passes through several of the larger cities as city streets. South of Gävle, the road becomes an
Powers Act (IEEPA) and the Sudanese Sanctions Regulations, 31 C.F.R. part 538 (SSR).1 Acquisitions and cooperation Around 2000, companies and governments began to push for standards for mobile Internet. In May 2000, the European Commission created the Wireless Strategic Initiative, a consortium of four telecommunications suppliers in Europe – Ericsson, Nokia, Alcatel (France) and Siemens (Germany) – to develop and test new prototypes for advanced wireless communications systems. Later that year, the consortium partners invited other companies to join them in a Wireless World Research Forum in 2001. In December 1999, Microsoft and Ericsson announced a strategic partnership to combine the former's web browser and server software with the latter's mobile-internet technologies. In 2000, the Dot-com bubble burst with marked economic implications for Sweden. Ericsson, the world's largest producer of mobile telecommunications equipment, shed thousands of jobs, as did the country's Internet consulting firms and dot-com start-ups. In the same year, Intel, the world's largest semiconductor chip manufacturer, signed a $1.5 billion deal to supply flash memory to Ericsson over the next three years. The short-lived joint venture called Ericsson Microsoft Mobile Venture, owned 70/30 percent by Ericsson and Microsoft, respectively, ended in October 2001 when Ericsson announced it would absorb the former joint venture and adopt a licensing agreement with Microsoft instead. The same month, Ericsson announced the launch of Sony Ericsson, a joint venture mobile telephone business, together with Sony. Sony Ericsson remained in operation until February 2012, when Sony bought out Ericsson's share; Ericsson said it wanted to focus on the global wireless market as a whole. Lower stock prices and job losses affected many telecommunications companies in 2001. The major equipment manufacturers – Motorola (U.S.), Lucent Technologies (U.S.), Cisco Systems (U.S.), Marconi (UK), Siemens (Germany), Nokia (Finland), as well as Ericsson – all announced job cuts in their home countries and in subsidiaries around the world. Ericsson's workforce worldwide fell during 2001 from 107,000 to 85,000. In September 2001, Ericsson purchased the remaining shares in EHPT from Hewlett Packard. Founded in 1993, Ericsson Hewlett Packard Telecom (EHPT) was a joint venture made up of 60% Ericsson interests and 40% Hewlett-Packard interests. In 2002, ICT investor losses topped $2 trillion and share prices fell by 95% until August that year. More than half a million people lost their jobs in the global telecom industry over the two years. The collapse of U.S. carrier WorldCom, with more than $107 billion in assets, was the biggest in U.S. history. The sector's problems caused bankruptcies and job losses, and led to changes in the leadership of a number of major companies. Ericsson made 20,000 more staff redundant and raised about $3 billion from its shareholders. In June 2002, Infineon Technologies (then the sixth-largest semiconductor supplier and a subsidiary of Siemens) bought Ericsson's microelectronics unit for $400 million. Ericsson was an official backer in the 2005 launch of the .mobi top level domain created specifically for the mobile internet. Co-operation with Hewlett-Packard did not end with EHPT; in 2003 Ericsson outsourced its IT to HP, which included Managed Services, Help Desk Support, Data Center Operations, and HP Utility Data Center. The contract was extended in 2008. 
In October 2005, Ericsson acquired the bulk of the troubled UK telecommunications manufacturer Marconi Company, including its brand name, which dates back to the creation of the original Marconi Company by the "father of radio", Guglielmo Marconi. In September 2006, Ericsson sold the greater part of its defense business, Ericsson Microwave Systems, which mainly produced sensor and radar systems, to Saab AB, which renamed the company Saab Microwave Systems. In 2007, Ericsson acquired carrier edge-router maker Redback Networks, and then Entrisphere, a US-based company providing fiber-access technology. In September 2007, Ericsson acquired an 84% interest in German customer-care and billing software firm LHS, a stake later raised to 100%. In 2008, Ericsson sold its enterprise PBX division to Aastra Technologies, and acquired Tandberg Television, the television technology division of Norwegian company Tandberg. In 2009, Ericsson bought the CDMA2000 and LTE business of Nortel's carrier networks division for US$1.18 billion; Bizitek, a Turkish business support systems integrator; the Estonian manufacturing operations of electronic manufacturing company Elcoteq; and completed its acquisition of LHS. Acquisitions in 2010 included assets from the Strategy and Technology Group of inCode, a North American business and consulting-services company; Nortel's majority shareholding (50% plus one share) in LG-Nortel, a joint venture between LG Electronics and Nortel Networks providing sales, R&D and industrial capacity in South Korea, now known as Ericsson-LG; further Nortel carrier-division assets relating to Nortel's GSM business in the United States and Canada; Optimi Corporation, a U.S.–Spanish telecommunications vendor specializing in network optimization and management; and Pride, a consulting and systems-integration company operating in Italy. In 2011, Ericsson acquired manufacturing and research facilities and staff from the Guangdong Nortel Telecommunication Equipment Company (GDNT), as well as Nortel's Multiservice Switch business. In January 2012, Ericsson acquired Telcordia Technologies, a U.S. operations and business support systems (OSS/BSS) company. In March, Ericsson announced it was buying the broadcast-services division of Technicolor, a media broadcast technology company. In April 2012, Ericsson completed the acquisition of BelAir Networks, a Wi-Fi network technology company. On 3 May 2013, Ericsson announced it would divest its power cable operations to Danish company NKT Holding. On 1 July 2013, Ericsson announced it would acquire the media management company Red Bee Media, subject to regulatory approval. The acquisition was completed on 9 May 2014. In September 2013, Ericsson completed its acquisition of Microsoft's Mediaroom business and television services, originally announced in April the same year. The acquisition made Ericsson the largest provider of IPTV and multi-screen services in the world by market share; the business was renamed Ericsson Mediaroom. In September 2014, Ericsson acquired a majority stake in Apcera for cloud policy compliance. In October 2015, Ericsson completed the acquisition of Envivio, a software encoding company. In April 2016, Ericsson acquired the Polish and Ukrainian operations of software development company Ericpol, a long-time supplier to Ericsson. Approximately 2,300 Ericpol employees joined Ericsson, bringing software development competence in radio, cloud, and IP.
On 20 June 2017, Bloomberg disclosed that Ericsson had hired Morgan Stanley to explore the sale of its media businesses. The Red Bee Media business was kept in-house as an independent subsidiary company, as no suitable buyer was found, but a 51% stake in the remainder of the Media Solution division was sold to private equity firm One Equity Partners, the new company being named MediaKind. The transaction was completed on 31 January 2019. In February 2018, Ericsson acquired the location-based mobile data management platform Placecast. Ericsson has since integrated Placecast's platform and capabilities with its programmatic mobile ad subsidiary, Emodo. In May 2018, SoftBank partnered with Ericsson to trial new radio technology. In September 2020, Ericsson acquired US-based carrier equipment manufacturer Cradlepoint for $1.1 billion. In November 2021, Ericsson announced it had reached an agreement to acquire Vonage for $6.2 billion. Corporate governance Members of the board of directors of LM Ericsson were: Leif Johansson, Jacob Wallenberg, Kristin S. Rinne, Helena Stjernholm, Sukhinder Singh Cassidy, Börje Ekholm, Ulf J. Johansson, Mikael Lännqvist, Zlatko Hadzic, Kjell-Åke Soting, Nora Denzel, Kristin Skogen Lund, Pehr Claesson, Karin Åberg and Roger Svensson. Research and development Ericsson has structured its R&D in three levels depending on when products or technologies will be introduced to customers and users. Its research and development organization is part of 'Group Function Technology' and addresses several facets of network architecture: wireless access networks; radio access technologies; broadband technologies; packet technologies; multimedia technologies; services software; EMF safety and sustainability; security; and global services. The head of research since 2012 is Sara Mazur. Group Function Technology holds research co-operations with several major universities and research institutes, including Lund University in Sweden, Eötvös Loránd University in Hungary and the Beijing Institute of Technology in China. Ericsson also holds research co-operations within several European research programs such as GigaWam and OASE. Ericsson holds 33,000 granted patents, and is the number-one holder of GSM/GPRS/EDGE, WCDMA/HSPA, and LTE essential patents. In 2021, the WIPO's annual World Intellectual Property Indicators report ranked Ericsson's number of patent applications published under the PCT System as 6th in the world, with 1,989 patent applications being published during 2020, up from its previous ranking of 7th in 2019 with 1,698 applications. Ericsson hosts a developer program called Ericsson Developer Connection, designed to encourage development of applications and services. Ericsson also has an open innovation initiative for beta applications and beta APIs and tools, called Ericsson Labs. The company hosts several internal innovation competitions among its employees. Products and services Ericsson's business includes technology research, development, network systems and software development, and running operations for telecom service providers. Ericsson offers end-to-end services for all major mobile communication standards, and has three main business units. Business Area Networks Business Area Networks, previously called Business Unit Networks, develops network infrastructure for communication needs over mobile and fixed connections.
Its products include radio base stations, radio network controllers, mobile switching centers and service application nodes. Operators use Ericsson products to migrate from 2G to 3G and, most recently, to 4G networks. The company's network division has been described as a driver in the development of 2G, 3G, 4G/LTE and 5G technology, and the evolution towards all-IP, and it develops and deploys advanced LTE systems, but it is still developing the older GSM, WCDMA, and CDMA technologies. The company's networks portfolio also includes microwave transport, Internet Protocol (IP) networks, fixed-access services for copper and fiber, and mobile broadband modules, several levels of fixed broadband access, radio access networks from small pico cells to high-capacity macro cells and controllers for radio base stations. Network services Ericsson's network rollout services employ in-house capabilities, subcontractors and central resources to make changes to live networks. Services such as technology deployment, network transformation, support services and network optimization are also provided. Business Area Digital Services This unit provides core networks, Operations Support Systems such as network management and analytics, and Business Support Systems such as billing and mediation. Within the Digital Services unit, there is an m-Commerce offering, which focuses on service providers and facilitates their working with financial institutions and intermediaries. Ericsson has announced m-commerce deals with Western Union and African wireless carrier MTN. Business Area Managed Services The unit is active in 180 countries; it supplies managed services, systems integration, consulting, network rollout, design and optimization, broadcast services, learning services and support. The company also works with television and media, public safety, and utilities. Ericsson claims to manage networks that serve more than 1 billion subscribers worldwide, and to support customer networks that serve more than 2.5 billion subscribers. Broadcast services Ericsson's Broadcast Services unit was evolved into a unit called Red Bee Media, which has been spun out into a joint venture. It deals with the playout of live and
pre-recorded, commercial and public service television programmes, including presentation (continuity announcements), trailers, and ancillary access services such as closed-caption subtitles, audio description and in-vision sign language interpreters. Its media management services consist of Managed Media Preparation and Managed Media Internet Delivery. Divested businesses Sony Ericsson Mobile Communications AB (Sony Ericsson) was a joint venture with Sony that merged the previous mobile telephone operations of both companies. It manufactured mobile telephones, accessories and personal computer (PC) cards. Sony Ericsson was responsible for product design and development, marketing, sales, distribution and customer services. On 16 February 2012, Sony announced it had completed the full acquisition of Sony Ericsson, after which it changed its name to Sony Mobile Communications, and nearly a year later it moved headquarters from Sweden to Japan. Mobile (cell) telephones Ericsson's mobile telephone production was moved into Sony Ericsson, the joint venture with Sony, in 2001. The following is a list of mobile phones marketed under the brand name Ericsson.
Ericsson GS88 – Cancelled mobile telephone for which Ericsson invented the "Smartphone" name
Ericsson GA628 – Known for its Z80 CPU
Ericsson SH888 – First mobile telephone to have wireless modem capabilities
Ericsson A1018 – Dualband cellphone, notably easy to hack
Ericsson A2618 & Ericsson A2628 – Dualband cellphones. Use a graphical LCD display based on the PCF8548 I²C controller
Ericsson PF768
Ericsson GF768
Ericsson GH388
Ericsson T10 – Colourful cellphone
Ericsson T18 – Business model of the T10, with active flip
Ericsson T28 – Very slim telephone. Uses lithium polymer batteries and a graphical LCD display based on the PCF8558 I²C controller
Ericsson T20s
Ericsson T29s – Similar to the T28s, but with WAP support
Ericsson T29m – Pre-alpha prototype for the T39m
Ericsson T36m – Prototype for the T39m. Announced in yellow and blue; never hit the market due to the release of the T39m
Ericsson T39 – Similar to the T28, but with a GPRS modem, Bluetooth and triband capabilities
Ericsson T65
Ericsson T66
Ericsson T68m – The first Ericsson handset to have a color display, later branded as the Sony Ericsson T68i
Ericsson R250s Pro – Fully dust and water resistant telephone
Ericsson R310s
Ericsson R320s
Ericsson R320s Titan – Limited edition with titanium front
Ericsson R320s GPRS – Prototype for testing GPRS networks
Ericsson R360m – Pre-alpha prototype for the R520m
Ericsson R380 – First cellphone to use the Symbian OS
Ericsson R520m – Similar to the T39, but in a candy bar form factor and with added features such as a built-in speakerphone and an optical proximity sensor
Ericsson R520m UMTS – Prototype to test UMTS networks
Ericsson R520m SyncML – Prototype to test the SyncML implementation
Ericsson R580m – Announced in several press releases. Supposed to be a successor of the R380s, without an external antenna and with a color display
Ericsson R600
Telephones: Ericsson Dialog, Ericofon
Ericsson Mobile Platforms Ericsson Mobile Platforms existed for eight years; on 12 February 2009, Ericsson announced it would be merged with the mobile platform company of STMicroelectronics, ST-NXP Wireless, to create a 50/50 joint venture owned by Ericsson and STMicroelectronics. This joint venture was divested in 2013 and the remaining activities can be found in Ericsson Modems and STMicroelectronics. Ericsson Mobile Platforms ceased being a legal entity in early 2009. Ericsson Enterprise Starting in 1983, Ericsson Enterprise provided communications systems and services for businesses, public entities and educational institutions. It produced products for voice over Internet protocol (VoIP)-based private branch exchanges (PBX), wireless local area networks (WLAN), and mobile intranets. Ericsson Enterprise operated mainly from Sweden but also operated through regional units and other partners/distributors. In 2008 it was sold to Aastra. Corruption On 7 December 2019, Ericsson agreed to pay more than $1.2 billion (€1.09 billion) to settle U.S. Department of Justice FCPA criminal and civil investigations into foreign corruption. US authorities accused the company of conducting a campaign of corruption between 2000 and 2016 across China, Indonesia, Vietnam, Kuwait and Djibouti. Ericsson admitted to paying bribes, falsifying books and records and failing to implement reasonable internal accounting controls in an attempt to strengthen its position in the telecommunications industry. In 2022, an internal investigation into corruption inside the company was leaked. It detailed corruption in at least 10 countries. Ericsson has admitted “serious breaches of compliance
of developing ethology. Ethology is now a well-recognized scientific discipline, and has a number of journals covering developments in the subject, such as Animal Behaviour, Animal Welfare, Applied Animal Behaviour Science, Animal Cognition, Behaviour, Behavioral Ecology and Ethology: International Journal of Behavioural Biology. In 1972, the International Society for Human Ethology was founded to promote exchange of knowledge and opinions concerning human behaviour gained by applying ethological principles and methods and published their journal, The Human Ethology Bulletin. In 2008, in a paper published in the journal Behaviour, ethologist Peter Verbeek introduced the term "Peace Ethology" as a sub-discipline of Human Ethology that is concerned with issues of human conflict, conflict resolution, reconciliation, war, peacemaking, and peacekeeping behaviour. Social ethology and recent developments In 1972, the English ethologist John H. Crook distinguished comparative ethology from social ethology, and argued that much of the ethology that had existed so far was really comparative ethology—examining animals as individuals—whereas, in the future, ethologists would need to concentrate on the behaviour of social groups of animals and the social structure within them. E. O. Wilson's book Sociobiology: The New Synthesis appeared in 1975, and since that time, the study of behaviour has been much more concerned with social aspects. It has also been driven by the stronger, but more sophisticated, Darwinism associated with Wilson, Robert Trivers, and W. D. Hamilton. The related development of behavioural ecology has also helped transform ethology. Furthermore, a substantial rapprochement with comparative psychology has occurred, so the modern scientific study of behaviour offers a more or less seamless spectrum of approaches: from animal cognition to more traditional comparative psychology, ethology, sociobiology, and behavioural ecology. In 2020, Dr. Tobias Starzak and Professor Albert Newen from the Institute of Philosophy II at the Ruhr University Bochum postulated that animals may have beliefs. Relationship with comparative psychology Comparative psychology also studies animal behaviour, but, as opposed to ethology, is construed as a sub-topic of psychology rather than as one of biology. Historically, where comparative psychology has included research on animal behaviour in the context of what is known about human psychology, ethology involves research on animal behaviour in the context of what is known about animal anatomy, physiology, neurobiology, and phylogenetic history. Furthermore, early comparative psychologists concentrated on the study of learning and tended to research behaviour in artificial situations, whereas early ethologists concentrated on behaviour in natural situations, tending to describe it as instinctive. The two approaches are complementary rather than competitive, but they do result in different perspectives, and occasionally conflicts of opinion about matters of substance. In addition, for most of the twentieth century, comparative psychology developed most strongly in North America, while ethology was stronger in Europe. From a practical standpoint, early comparative psychologists concentrated on gaining extensive knowledge of the behaviour of very few species. Ethologists were more interested in understanding behaviour across a wide range of species to facilitate principled comparisons across taxonomic groups. 
Ethologists have made much more use of such cross-species comparisons than comparative psychologists have. Instinct The Merriam-Webster dictionary defines instinct as "A largely inheritable and unalterable tendency of an organism to make a complex and specific response to environmental stimuli without involving reason". Fixed action patterns An important development, associated with the name of Konrad Lorenz though probably due more to his teacher, Oskar Heinroth, was the identification of fixed action patterns. Lorenz popularized these as instinctive responses that would occur reliably in the presence of identifiable stimuli called sign stimuli or "releasing stimuli". Fixed action patterns are now considered to be instinctive behavioural sequences that are relatively invariant within the species and that almost inevitably run to completion. One example of a releaser is the beak movements of many bird species performed by newly hatched chicks, which stimulates the mother to regurgitate food for her offspring. Other examples are the classic studies by Tinbergen on the egg-retrieval behaviour and the effects of a "supernormal stimulus" on the behaviour of graylag geese. One investigation of this kind was the study of the waggle dance ("dance language") in bee communication by Karl von Frisch. Learning Habituation Habituation is a simple form of learning and occurs in many animal taxa. It is the process whereby an animal ceases responding to a stimulus. Often, the response is an innate behaviour. Essentially, the animal learns not to respond to irrelevant stimuli. For example, prairie dogs (Cynomys ludovicianus) give alarm calls when predators approach, causing all individuals in the group to quickly scramble down burrows. When prairie dog towns are located near trails used by humans, giving alarm calls every time a person walks by is expensive in terms of time and energy. Habituation to humans is therefore an important adaptation in this context. Associative learning Associative learning in animal behaviour is any learning process in which a new response becomes associated with a particular stimulus. The first studies of associative learning were made by Russian physiologist Ivan Pavlov, who observed that dogs trained to associate food with the ringing of a bell would salivate on hearing the bell. Imprinting Imprinting
growing field. Since the dawn of the 21st century researchers have re-examined and reached new conclusions in many aspects of animal communication, emotions, culture, learning and sexuality that the scientific community long thought it understood. New fields, such as neuroethology, have developed. Understanding ethology or animal behaviour can be important in animal training. Considering the natural behaviours of different species or breeds enables trainers to select the individuals best suited to perform the required task. It also enables trainers to encourage the performance of naturally occurring behaviours and the discontinuance of undesirable behaviours. Etymology The term ethology derives from the Greek language: ἦθος, ethos, meaning "character", and -logia, meaning "the study of". The term was first popularized by American myrmecologist (a person who studies ants) William Morton Wheeler in 1902. History The beginnings of ethology Because ethology is considered a topic of biology, ethologists have been concerned particularly with the evolution of behaviour and its understanding in terms of natural selection. In one sense, the first modern ethologist was Charles Darwin, whose 1872 book The Expression of the Emotions in Man and Animals influenced many ethologists. He pursued his interest in behaviour by encouraging his protégé George Romanes, who investigated animal learning and intelligence using an anthropomorphic method, anecdotal cognitivism, that did not gain scientific support. Other early ethologists, such as Eugène Marais, Charles O. Whitman, Oskar Heinroth, Wallace Craig and Julian Huxley, instead concentrated on behaviours that can be called instinctive, or natural, in that they occur in all members of a species under specified circumstances. Their starting point for studying the behaviour of a new species was to construct an ethogram (a description of the main types of behaviour with their frequencies of occurrence). This provided an objective, cumulative database of behaviour, which subsequent researchers could check and supplement. Growth of the field Due to the work of Konrad Lorenz and Niko Tinbergen, ethology developed strongly in continental Europe during the years prior to World War II. After the war, Tinbergen moved to the University of Oxford, and ethology became stronger in the UK, with the additional influence of William Thorpe, Robert Hinde, and Patrick Bateson at the Sub-department of Animal Behaviour of the University of Cambridge. In this period, too, ethology began to develop strongly in North America. Lorenz, Tinbergen, and von Frisch were jointly awarded the Nobel Prize in Physiology or Medicine in 1973 for their work of developing ethology.
receiver. Due to conservation of energy, the amount of power passing through any spherical surface drawn around the source is the same. Because such a surface has an area proportional to the square of its distance from the source, the power density of EM radiation always decreases with the inverse square of the distance from the source; this is called the inverse-square law. This is in contrast to dipole parts of the EM field close to the source (the near-field), which vary in power according to an inverse cube power law, and thus do not transport a conserved amount of energy over distances, but instead fade with distance, with its energy (as noted) rapidly returning to the transmitter or absorbed by a nearby receiver (such as a transformer secondary coil). The far-field (EMR) depends on a different mechanism for its production than the near-field, and upon different terms in Maxwell's equations. Whereas the magnetic part of the near-field is due to currents in the source, the magnetic field in EMR is due only to the local change in the electric field. In a similar way, while the electric field in the near-field is due directly to the charges and charge-separation in the source, the electric field in EMR is due to a change in the local magnetic field. Both processes for producing electric and magnetic EMR fields have a different dependence on distance than do near-field dipole electric and magnetic fields. That is why the EMR type of EM field becomes dominant in power "far" from sources. The term "far from sources" refers to how far from the source (moving at the speed of light) any portion of the outward-moving EM field is located, by the time that source currents are changed by the varying source potential, and the source has therefore begun to generate an outwardly moving EM field of a different phase. A more compact view of EMR is that the far-field that composes EMR is generally that part of the EM field that has traveled sufficient distance from the source, that it has become completely disconnected from any feedback to the charges and currents that were originally responsible for it. Now independent of the source charges, the EM field, as it moves farther away, is dependent only upon the accelerations of the charges that produced it. It no longer has a strong connection to the direct fields of the charges, or to the velocity of the charges (currents). In the Liénard–Wiechert potential formulation of the electric and magnetic fields due to motion of a single particle (according to Maxwell's equations), the terms associated with acceleration of the particle are those that are responsible for the part of the field that is regarded as electromagnetic radiation. By contrast, the term associated with the changing static electric field of the particle and the magnetic term that results from the particle's uniform velocity, are both associated with the electromagnetic near-field, and do not comprise EM radiation. Properties Electrodynamics is the physics of electromagnetic radiation, and electromagnetism is the physical phenomenon associated with the theory of electrodynamics. Electric and magnetic fields obey the properties of superposition. Thus, a field due to any particular particle or time-varying electric or magnetic field contributes to the fields present in the same space due to other causes. Further, as they are vector fields, all magnetic and electric field vectors add together according to vector addition. 
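As a rough numerical illustration of the distance dependence described above, the sketch below evaluates the far-field power density P/(4πr²) and a near-field term that falls off as the inverse cube of distance. The 100 W source power and the unit reference distance are arbitrary illustrative values, not figures from the text.

```python
import math

def far_field_power_density(p_source_watts, r_metres):
    """Radiated (far-field) power density: the source power spread
    over a sphere of radius r, so it falls off as 1/r^2."""
    return p_source_watts / (4 * math.pi * r_metres ** 2)

def near_field_relative_strength(r_metres, r_ref=1.0):
    """Illustrative near-field dipole term: normalised to 1 at r_ref
    and falling off as 1/r^3 (it carries no net energy outward)."""
    return (r_ref / r_metres) ** 3

for r in (1.0, 2.0, 10.0):
    s = far_field_power_density(100.0, r)   # 100 W source, arbitrary
    n = near_field_relative_strength(r)
    print(f"r = {r:5.1f} m   far-field S = {s:10.4f} W/m^2   near-field ~ {n:8.5f}")
# Doubling r cuts the far-field density by a factor of 4 but the near-field term
# by a factor of 8, which is why only the radiated far-field dominates at large distances.
```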
For example, in optics two or more coherent light waves may interact and by constructive or destructive interference yield a resultant irradiance deviating from the sum of the component irradiances of the individual light waves. The electromagnetic fields of light are not affected by traveling through static electric or magnetic fields in a linear medium such as a vacuum. However, in nonlinear media, such as some crystals, interactions can occur between light and static electric and magnetic fields—these interactions include the Faraday effect and the Kerr effect. In refraction, a wave crossing from one medium to another of different density alters its speed and direction upon entering the new medium. The ratio of the refractive indices of the media determines the degree of refraction, and is summarized by Snell's law. Light of composite wavelengths (natural sunlight) disperses into a visible spectrum when passing through a prism, because of the wavelength-dependent refractive index of the prism material (dispersion); that is, each component wave within the composite light is bent a different amount. EM radiation exhibits both wave properties and particle properties at the same time (see wave-particle duality). Both wave and particle characteristics have been confirmed in many experiments. Wave characteristics are more apparent when EM radiation is measured over relatively large timescales and over large distances, while particle characteristics are more evident when measuring small timescales and distances. For example, when electromagnetic radiation is absorbed by matter, particle-like properties will be more obvious when the average number of photons in the cube of the relevant wavelength is much smaller than 1. It is not so difficult to experimentally observe non-uniform deposition of energy when light is absorbed; however, this alone is not evidence of "particulate" behavior. Rather, it reflects the quantum nature of matter. Demonstrating that the light itself is quantized, not merely its interaction with matter, is a more subtle affair. Some experiments display both the wave and particle natures of electromagnetic waves, such as the self-interference of a single photon. When a single photon is sent through an interferometer, it passes through both paths, interfering with itself, as waves do, yet is detected by a photomultiplier or other sensitive detector only once. A quantum theory of the interaction between electromagnetic radiation and matter such as electrons is described by the theory of quantum electrodynamics. Electromagnetic waves can be polarized, reflected, refracted, diffracted or interfere with each other. Wave model In homogeneous, isotropic media, electromagnetic radiation is a transverse wave, meaning that its oscillations are perpendicular to the direction of energy transfer and travel. This transversality follows from the source-free Maxwell equations ∇ · E = 0 and ∇ · B = 0, which require that any electromagnetic wave must be a transverse wave, with the electric field and the magnetic field both perpendicular to the direction of wave propagation. The electric and magnetic parts of the field in an electromagnetic wave stand in a fixed ratio of strengths to satisfy the two Maxwell equations that specify how one is produced from the other. In dissipation-less (lossless) media, these E and B fields are also in phase, with both reaching maxima and minima at the same points in space.
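The in-phase, fixed-ratio relationship just described can be made concrete with a minimal plane-wave sketch. The peak electric field and the 100 MHz frequency below are assumed illustrative values; the magnetic amplitude follows from B = E/c and the wavelength from λ = c/f.

```python
import math

C = 299_792_458.0          # speed of light in vacuum, m/s

def plane_wave(E0, freq_hz, x_m, t_s):
    """E and B of a lossless plane wave propagating along x.
    Both share the same phase (k*x - w*t); B is scaled by 1/c."""
    w = 2 * math.pi * freq_hz          # angular frequency
    k = w / C                          # wavenumber, so wavelength = 2*pi/k
    phase = k * x_m - w * t_s
    E = E0 * math.cos(phase)           # electric field, e.g. along y
    B = (E0 / C) * math.cos(phase)     # magnetic field, e.g. along z
    return E, B

E0 = 1.0          # V/m, illustrative amplitude
f = 100e6         # 100 MHz, illustrative frequency
print("wavelength =", C / f, "m")      # lambda = c / f, about 3 m here
for t in (0.0, 2.5e-9, 5.0e-9):
    E, B = plane_wave(E0, f, x_m=0.0, t_s=t)
    print(f"t = {t:.1e} s   E = {E:+.3f} V/m   B = {B:+.3e} T")
# E and B reach their maxima and zeros at the same instants: in phase,
# with E/B = c everywhere along the wave.
```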
A common misconception is that the E and B fields in electromagnetic radiation are out of phase because a change in one produces the other, and this would produce a phase difference between them as sinusoidal functions (as indeed happens in electromagnetic induction, and in the near-field close to antennas). However, in the far-field EM radiation which is described by the two source-free Maxwell curl operator equations, a more correct description is that a time-change in one type of field is proportional to a space-change in the other. These derivatives require that the E and B fields in EMR are in-phase. An important aspect of light's nature is its frequency. The frequency of a wave is its rate of oscillation and is measured in hertz, the SI unit of frequency, where one hertz is equal to one oscillation per second. Light usually has multiple frequencies that sum to form the resultant wave. Different frequencies undergo different angles of refraction, a phenomenon known as dispersion. A monochromatic wave (a wave of a single frequency) consists of successive troughs and crests, and the distance between two adjacent crests or troughs is called the wavelength. Waves of the electromagnetic spectrum vary in size, from very long radio waves longer than a continent to very short gamma rays smaller than atomic nuclei. Frequency is inversely proportional to wavelength, according to the equation v = fλ, where v is the speed of the wave (c in a vacuum or less in other media), f is the frequency and λ is the wavelength. As waves cross boundaries between different media, their speeds change but their frequencies remain constant. Electromagnetic waves in free space must be solutions of Maxwell's electromagnetic wave equation. Two main classes of solutions are known, namely plane waves and spherical waves. The plane waves may be viewed as the limiting case of spherical waves at a very large (ideally infinite) distance from the source. Both types of waves can have a waveform which is an arbitrary time function (so long as it is sufficiently differentiable to conform to the wave equation). As with any time function, this can be decomposed by means of Fourier analysis into its frequency spectrum, or individual sinusoidal components, each of which contains a single frequency, amplitude and phase. Such a component wave is said to be monochromatic. A monochromatic electromagnetic wave can be characterized by its frequency or wavelength, its peak amplitude, its phase relative to some reference phase, its direction of propagation, and its polarization. Interference is the superposition of two or more waves resulting in a new wave pattern. If the fields have components in the same direction, they constructively interfere, while opposite directions cause destructive interference. An example of interference caused by EMR is electromagnetic interference (EMI) or, as it is more commonly known, radio-frequency interference (RFI). Additionally, multiple polarization signals can be combined (i.e. interfered) to form new states of polarization, which is known as parallel polarization state generation. The energy in electromagnetic waves is sometimes called radiant energy. Particle model and quantum theory An anomaly arose in the late 19th century involving a contradiction between the wave theory of light and measurements of the electromagnetic spectra that were being emitted by thermal radiators known as black bodies. Physicists struggled with this problem unsuccessfully for many years.
It later became known as the ultraviolet catastrophe. In 1900, Max Planck developed a new theory of black-body radiation that explained the observed spectrum. Planck's theory was based on the idea that black bodies emit light (and other electromagnetic radiation) only as discrete bundles or packets of energy. These packets were called quanta. In 1905, Albert Einstein proposed that light quanta be regarded as real particles. Later the particle of light was given the name photon, to correspond with other particles being described around this time, such as the electron and proton. A photon has an energy, E, proportional to its frequency, f, given by E = hf = hc/λ, where h is Planck's constant, λ is the wavelength and c is the speed of light. This is sometimes known as the Planck–Einstein equation. In quantum theory (see first quantization) the energy of the photons is thus directly proportional to the frequency of the EMR wave. Likewise, the momentum p of a photon is also proportional to its frequency and inversely proportional to its wavelength: p = E/c = hf/c = h/λ. The source of Einstein's proposal that light was composed of particles (or could act as particles in some circumstances) was an experimental anomaly not explained by the wave theory: the photoelectric effect, in which light striking a metal surface ejected electrons from the surface, causing an electric current to flow across an applied voltage. Experimental measurements demonstrated that the energy of individual ejected electrons was proportional to the frequency, rather than the intensity, of the light. Furthermore, below a certain minimum frequency, which depended on the particular metal, no current would flow regardless of the intensity. These observations appeared to contradict the wave theory, and for years physicists tried in vain to find an explanation. In 1905, Einstein explained this puzzle by resurrecting the particle theory of light to explain the observed effect. Because of the preponderance of evidence in favor of the wave theory, however, Einstein's ideas were met initially with great skepticism among established physicists. Eventually Einstein's explanation was accepted as new particle-like behavior of light was observed, such as the Compton effect. As a photon is absorbed by an atom, it excites the atom, elevating an electron to a higher energy level (one that is on average farther from the nucleus). When an electron in an excited molecule or atom descends to a lower energy level, it emits a photon of light at a frequency corresponding to the energy difference. Since the energy levels of electrons in atoms are discrete, each element and each molecule emits and absorbs its own characteristic frequencies. Immediate photon emission is called fluorescence, a type of photoluminescence. An example is visible light emitted from fluorescent paints, in response to ultraviolet (blacklight). Many other fluorescent emissions are known in spectral bands other than visible light. Delayed emission is called phosphorescence. Wave–particle duality The modern theory that explains the nature of light includes the notion of wave–particle duality. More generally, the theory states that everything has both a particle nature and a wave nature, and various experiments can be done to bring out one or the other. The particle nature is more easily discerned using an object with a large mass. A bold proposition by Louis de Broglie in 1924 led the scientific community to realize that matter (e.g. electrons) also exhibits wave–particle duality.
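A short sketch of the Planck–Einstein relation and the photoelectric threshold discussed above: photon energy and momentum are computed from the wavelength, and each photon either does or does not exceed the metal's work function. The 2.3 eV work function is an assumed illustrative value, not one given in the text.

```python
H = 6.626e-34        # Planck's constant, J*s
C = 299_792_458.0    # speed of light, m/s
EV = 1.602e-19       # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """E = h*f = h*c / wavelength, returned in electronvolts."""
    return H * C / wavelength_m / EV

def photon_momentum(wavelength_m):
    """p = h / wavelength (equivalently E/c), in kg*m/s."""
    return H / wavelength_m

work_function_ev = 2.3   # assumed value for an alkali-like metal surface

for name, lam in [("red light", 700e-9), ("violet light", 400e-9), ("ultraviolet", 250e-9)]:
    e = photon_energy_ev(lam)
    ejects = e > work_function_ev    # photoelectric effect: per-photon energy decides
    print(f"{name:12s} E = {e:4.2f} eV  p = {photon_momentum(lam):.2e} kg*m/s  "
          f"ejects electrons: {ejects}")
# Whether electrons are ejected depends on the photon's frequency (energy),
# not on how intense the beam is, which is what the wave theory could not explain.
```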
Wave and particle effects of electromagnetic radiation Together, wave and particle effects fully explain the emission and absorption spectra of EM radiation. The matter-composition of the medium through which the light travels determines the nature of the absorption and emission spectrum. These bands correspond to the allowed energy levels in the atoms. Dark bands in the absorption spectrum are due to the atoms in an intervening medium between source and observer. The atoms absorb certain frequencies of the light between emitter and detector/eye, then emit them in all directions. A dark band appears to the detector, due to the radiation scattered out of the beam. For instance, dark bands in the light emitted by a distant star are due to the atoms in the star's atmosphere. A similar phenomenon occurs for emission, which is seen when an emitting gas glows due to excitation of the atoms from any mechanism, including heat. As electrons descend to lower energy levels, a spectrum is emitted that represents the jumps between the energy levels of the electrons, but lines are seen because again emission happens only at particular energies after excitation. An example is the emission spectrum of nebulae. Rapidly moving electrons are most sharply accelerated when they encounter a region of force, so they are responsible for producing much of the highest frequency electromagnetic radiation observed in nature. These phenomena can aid various chemical determinations for the composition of gases lit from behind (absorption spectra) and for glowing gases (emission spectra). Spectroscopy (for example) determines what chemical elements comprise a particular star. Spectroscopy is also used in the determination of the distance of a star, using the red shift. Propagation speed When any wire (or other conducting object such as an antenna) conducts alternating current, electromagnetic radiation is propagated at the same frequency as the current. In many such situations it is possible to identify an electrical dipole moment that arises from separation of charges due to the exciting electrical potential, and this dipole moment oscillates in time, as the charges move back and forth. This oscillation at a given frequency gives rise to changing electric and magnetic fields, which then set the electromagnetic radiation in motion. At the quantum level, electromagnetic radiation is produced when the wavepacket of a charged particle oscillates or otherwise accelerates. Charged particles in a stationary state do not move, but a superposition of such states may result in a transition state that has an electric dipole moment that oscillates in time. This oscillating dipole moment is responsible for the phenomenon of radiative transition between quantum states of a charged particle. Such states occur (for example) in atoms when photons are radiated as the atom shifts from one stationary state to another. As a wave, light is characterized by a velocity (the speed of light), wavelength, and frequency. As particles, light is a stream of photons. Each has an energy related to the frequency of the wave given by Planck's relation E = hf, where E is the energy of the photon, h is Planck's constant, 6.626 × 10−34 J·s, and f is the frequency of the wave. One rule is obeyed regardless of circumstances: EM radiation in a vacuum travels at the speed of light, relative to the observer, regardless of the observer's velocity. 
In a medium (other than vacuum), velocity factor or refractive index are considered, depending on frequency and application. Both of these are ratios of the speed in a medium to speed in a vacuum. Special theory of relativity By the late nineteenth century, various experimental anomalies could not be explained by the simple wave theory. One of these anomalies involved a controversy over the speed of light. The speed of light and other EMR predicted by Maxwell's equations did not appear unless the equations were modified in a way first suggested by FitzGerald and Lorentz (see history of special relativity), or else otherwise that speed would depend on the speed of observer relative to the "medium" (called luminiferous aether) which supposedly "carried" the electromagnetic wave (in a manner analogous to the way air carries sound waves). Experiments failed to find any observer effect. In 1905, Einstein proposed that space and time appeared to be velocity-changeable entities for light propagation and all other processes and laws. These changes accounted for the constancy of the speed of light and all electromagnetic radiation, from the viewpoints of all observers—even those in relative motion. History of discovery Electromagnetic radiation of wavelengths other than those of visible light were discovered in the early 19th century. The discovery of infrared radiation is ascribed to astronomer William Herschel, who published his results in 1800 before the Royal Society of London. Herschel used a glass prism to refract light from the Sun and detected invisible rays that caused heating beyond the red part of the spectrum, through an increase in the temperature recorded with a thermometer. These "calorific rays" were later termed infrared. In 1801, German physicist Johann Wilhelm Ritter discovered ultraviolet in an experiment similar to Herschel's, using sunlight and a glass prism. Ritter noted that invisible rays near the violet edge of a solar spectrum dispersed by a triangular prism darkened silver chloride preparations more quickly than did the nearby violet light. Ritter's experiments were an early precursor to what would become photography. Ritter noted that the ultraviolet rays (which at first were called "chemical rays") were capable of causing chemical reactions. In 1862–64 James Clerk Maxwell developed equations for the electromagnetic field which suggested that waves in the field would travel with a speed that was very close to the known speed of light. Maxwell therefore suggested that visible light (as well as invisible infrared and ultraviolet rays by inference) all consisted of propagating disturbances (or radiation) in the electromagnetic field. Radio waves were first produced deliberately by Heinrich Hertz in 1887, using electrical circuits calculated to produce oscillations at a much lower frequency than that of visible light, following recipes for producing oscillating charges and currents suggested by Maxwell's equations. Hertz also developed ways to detect these waves, and produced and characterized what were later termed radio waves and microwaves. Wilhelm Röntgen discovered and named X-rays. After experimenting with high voltages applied to an evacuated tube on 8 November 1895, he noticed a fluorescence on a nearby plate of coated glass. In one month, he discovered X-rays' main properties. The last portion of the EM spectrum to be discovered was associated with radioactivity. 
Henri Becquerel found that uranium salts caused fogging of an unexposed photographic plate through a covering paper in a manner similar to X-rays, and Marie Curie discovered that only certain elements gave off these rays of energy, soon discovering the intense radiation of radium. The radiation from pitchblende was differentiated into alpha rays (alpha particles) and beta rays (beta particles) by Ernest Rutherford through simple experimentation in 1899, but these proved to be charged particulate types of radiation. However, in 1900 the French scientist Paul Villard discovered a third neutrally charged and especially penetrating type of radiation from radium, and after he described it, Rutherford recognized it as a distinct, third type of radiation, which in 1903 he named gamma rays. In 1910 British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914 Rutherford and Edward Andrade measured their wavelengths, finding that they were similar to X-rays but with shorter wavelengths and higher frequency, although a 'cross-over' between X and gamma rays makes it possible to have X-rays with a higher energy (and hence shorter wavelength) than gamma rays and vice versa. The origin of the ray differentiates them: gamma rays tend to be natural phenomena originating
distance from those charges. Thus, EMR is sometimes referred to as the far field. In this language, the near field refers to EM fields near the charges and current that directly produced them, specifically electromagnetic induction and electrostatic induction phenomena. In quantum mechanics, an alternate way of viewing EMR is that it consists of photons, uncharged elementary particles with zero rest mass which are the quanta of the electromagnetic field, responsible for all electromagnetic interactions. Quantum electrodynamics is the theory of how EMR interacts with matter on an atomic level. Quantum effects provide additional sources of EMR, such as the transition of electrons to lower energy levels in an atom and black-body radiation. The energy of an individual photon is quantized and is greater for photons of higher frequency. This relationship is given by Planck's equation E = hf, where E is the energy per photon, f is the frequency of the photon, and h is Planck's constant. A single gamma ray photon, for example, might carry ~100,000 times the energy of a single photon of visible light. The effects of EMR upon chemical compounds and biological organisms depend both upon the radiation's power and its frequency. EMR of visible or lower frequencies (i.e., visible light, infrared, microwaves, and radio waves) is called non-ionizing radiation, because its photons do not individually have enough energy to ionize atoms or molecules, or break chemical bonds. The effects of these radiations on chemical systems and living tissue are caused primarily by heating effects from the combined energy transfer of many photons. In contrast, high frequency ultraviolet, X-rays and gamma rays are called ionizing radiation, since individual photons of such high frequency have enough energy to ionize molecules or break chemical bonds. These radiations have the ability to cause chemical reactions and damage living cells beyond that resulting from simple heating, and can be a health hazard. Physics Theory Maxwell's equations James Clerk Maxwell derived a wave form of the electric and magnetic equations, thus uncovering the wave-like nature of electric and magnetic fields and their symmetry. Because the speed of EM waves predicted by the wave equation coincided with the measured speed of light, Maxwell concluded that light itself is an EM wave. Maxwell's equations were confirmed by Heinrich Hertz through experiments with radio waves. Maxwell realized that since a lot of physics is symmetrical and mathematically artistic in a way, that there must also be a symmetry between electricity and magnetism. He realized that light is a combination of electricity and magnetism and thus that the two must be tied together. According to Maxwell's equations, a spatially varying electric field is always associated with a magnetic field that changes over time. Likewise, a spatially varying magnetic field is associated with specific changes over time in the electric field. In an electromagnetic wave, the changes in the electric field are always accompanied by a wave in the magnetic field in one direction, and vice versa. This relationship between the two occurs without either type of field causing the other; rather, they occur together in the same way that time and space changes occur together and are interlinked in special relativity. 
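To make the Planck relation quoted above concrete, here is a minimal Python sketch (the wavelengths are typical round values chosen for illustration, not figures taken from this article) that computes photon energies for a visible-light photon and a gamma-ray photon; the ratio comes out near the "~100,000 times" noted above, while the visible photon stays below typical ionization energies of a few electronvolts.

# Illustrative check of the Planck relation E = h*f for representative photons.
# The wavelengths are assumed round values, chosen only for this example.
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light in vacuum, m/s
eV = 1.602e-19  # joules per electronvolt

def photon_energy_eV(wavelength_m):
    """Photon energy E = h*f = h*c/wavelength, returned in electronvolts."""
    return h * c / wavelength_m / eV

visible = photon_energy_eV(550e-9)  # green light, about 550 nm
gamma = photon_energy_eV(5e-12)     # a hard gamma ray, about 5 pm

print(f"visible photon: {visible:.2f} eV")     # roughly 2 eV, non-ionizing
print(f"gamma photon: {gamma / 1e3:.0f} keV")  # hundreds of keV, ionizing
print(f"energy ratio gamma/visible: {gamma / visible:.0f}")  # on the order of 100,000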
In fact, magnetic fields can be viewed as electric fields in another frame of reference, and electric fields can be viewed as magnetic fields in another frame of reference, but they have equal significance as physics is the same in all frames of reference, so the close relationship between space and time changes here is more than an analogy. Together, these fields form a propagating electromagnetic wave, which moves out into space and need never again interact with the source. The distant EM field formed in this way by the acceleration of a charge carries energy with it that "radiates" away through space, hence the term. Near and far fields Maxwell's equations established that some charges and currents ("sources") produce a local type of electromagnetic field near them that does not have the behaviour of EMR. Currents directly produce a magnetic field, but it is of a magnetic dipole type that dies out with distance from the current. In a similar manner, moving charges pushed apart in a conductor by a changing electrical potential (such as in an antenna) produce an electric dipole type electrical field, but this also declines with distance. These fields make up the near-field near the EMR source. Neither of these behaviours are responsible for EM radiation. Instead, they cause electromagnetic field behaviour that only efficiently transfers power to a receiver very close to the source, such as the magnetic induction inside a transformer, or the feedback behaviour that happens close to the coil of a metal detector. Typically, near-fields have a powerful effect on their own sources, causing an increased "load" (decreased electrical reactance) in the source or transmitter, whenever energy is withdrawn from the EM field by a receiver. Otherwise, these fields do not "propagate" freely out into space, carrying their energy away without distance-limit, but rather oscillate, returning their energy to the transmitter if it is not received by a receiver. By contrast, the EM far-field is composed of radiation that is free of the transmitter in the sense that (unlike the case in an electrical transformer) the transmitter requires the same power to send these changes in the fields out, whether the signal is immediately picked up or not. This distant part of the electromagnetic field is "electromagnetic radiation" (also called the far-field). The far-fields propagate (radiate) without allowing the transmitter to affect them. This causes them to be independent in the sense that their existence and their energy, after they have left the transmitter, is completely independent of both transmitter and receiver. Due to conservation of energy, the amount of power passing through any spherical surface drawn around the source is the same. Because such a surface has an area proportional to the square of its distance from the source, the power density of EM radiation always decreases with the inverse square of the distance from the source; this is called the inverse-square law. This is in contrast to dipole parts of the EM field close to the source (the near-field), which vary in power according to an inverse cube power law, and thus do not transport a conserved amount of energy over distances, but instead fade with distance, with its energy (as noted) rapidly returning to the transmitter or absorbed by a nearby receiver (such as a transformer secondary coil). The far-field (EMR) depends on a different mechanism for its production than the near-field, and upon different terms in Maxwell's equations. 
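The inverse-square behaviour of the far field, versus the faster fall-off of the near-field dipole terms, can be illustrated numerically. The Python sketch below uses an arbitrary normalization at 1 m (an assumption made purely for illustration, not a value from this article): the power crossing a sphere stays constant for a 1/r^2 power density, while the contribution of a 1/r^3 term shrinks with distance.

import math

r0 = 1.0        # reference distance, m (assumed for illustration)
S_far0 = 1.0    # far-field power density at r0, arbitrary units
S_near0 = 1.0   # near-field dipole term at r0, arbitrary units

for r in (1.0, 2.0, 10.0, 100.0):
    S_far = S_far0 * (r0 / r) ** 2    # radiated (far-field) density falls as 1/r^2
    S_near = S_near0 * (r0 / r) ** 3  # near-field dipole term falls as 1/r^3
    area = 4 * math.pi * r ** 2       # area of a sphere of radius r
    # Integrated over the sphere: constant for the 1/r^2 term, vanishing for 1/r^3.
    print(f"r = {r:6.1f} m   far-field power = {S_far * area:8.3f}   "
          f"near-field term = {S_near * area:8.3f}")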
Whereas the magnetic part of the near-field is due to currents in the source, the magnetic field in EMR is due only to the local change in the electric field. In a similar way, while the electric field in the near-field is due directly to the charges and charge-separation in the source, the electric field in EMR is due to a change in the local magnetic field. Both processes for producing electric and magnetic EMR fields have a different dependence on distance than do near-field dipole electric and magnetic fields. That is why the EMR type of EM field becomes dominant in power "far" from sources. The term "far from sources" refers to how far from the source (moving at the speed of light) any portion of the outward-moving EM field is located, by the time that source currents are changed by the varying source potential, and the source has therefore begun to generate an outwardly moving EM field of a different phase. A more compact view of EMR is that the far-field that composes EMR is generally that part of the EM field that has traveled sufficient distance from the source, that it has become completely disconnected from any feedback to the charges and currents that were originally responsible for it. Now independent of the source charges, the EM field, as it moves farther away, is dependent only upon the accelerations of the charges that produced it. It no longer has a strong connection to the direct fields of the charges, or to the velocity of the charges (currents). In the Liénard–Wiechert potential formulation of the electric and magnetic fields due to motion of a single particle (according to Maxwell's equations), the terms associated with acceleration of the particle are those that are responsible for the part of the field that is regarded as electromagnetic radiation. By contrast, the term associated with the changing static electric field of the particle and the magnetic term that results from the particle's uniform velocity, are both associated with the electromagnetic near-field, and do not comprise EM radiation. Properties Electrodynamics is the physics of electromagnetic radiation, and electromagnetism is the physical phenomenon associated with the theory of electrodynamics. Electric and magnetic fields obey the properties of superposition. Thus, a field due to any particular particle or time-varying electric or magnetic field contributes to the fields present in the same space due to other causes. Further, as they are vector fields, all magnetic and electric field vectors add together according to vector addition. For example, in optics two or more coherent light waves may interact and by constructive or destructive interference yield a resultant irradiance deviating from the sum of the component irradiances of the individual light waves. The electromagnetic fields of light are not affected by traveling through static electric or magnetic fields in a linear medium such as a vacuum. However, in nonlinear media, such as some crystals, interactions can occur between light and static electric and magnetic fields—these interactions include the Faraday effect and the Kerr effect. In refraction, a wave crossing from one medium to another of different density alters its speed and direction upon entering the new medium. The ratio of the refractive indices of the media determines the degree of refraction, and is summarized by Snell's law. 
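As a small worked example of Snell's law (the refractive indices below are typical textbook values for air and water, assumed here for illustration), the following Python sketch computes the refraction angle and flags total internal reflection once the critical angle is exceeded.

import math

def refraction_angle_deg(n1, n2, theta1_deg):
    """Angle of refraction from Snell's law n1*sin(theta1) = n2*sin(theta2).
    Returns None beyond the critical angle (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Air (n ~ 1.00) into water (n ~ 1.33) at 30 degrees: bent toward the normal.
print(refraction_angle_deg(1.00, 1.33, 30.0))  # about 22 degrees
# Water into air at 60 degrees exceeds the critical angle (about 49 degrees).
print(refraction_angle_deg(1.33, 1.00, 60.0))  # None: total internal reflection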
Light of composite wavelengths (natural sunlight) disperses into a visible spectrum passing through a prism, because of the wavelength-dependent refractive index of the prism material (dispersion); that is, each component wave within the composite light is bent a different amount. EM radiation exhibits both wave properties and particle properties at the same time (see wave-particle duality). Both wave and particle characteristics have been confirmed in many experiments. Wave characteristics are more apparent when EM radiation is measured over relatively large timescales and over large distances, while particle characteristics are more evident when measuring small timescales and distances. For example, when electromagnetic radiation is absorbed by matter, particle-like properties will be more obvious when the average number of photons in the cube of the relevant wavelength is much smaller than 1. It is not so difficult to experimentally observe non-uniform deposition of energy when light is absorbed; however, this alone is not evidence of "particulate" behavior. Rather, it reflects the quantum nature of matter. Demonstrating that the light itself is quantized, not merely its interaction with matter, is a more subtle affair. Some experiments display both the wave and particle natures of electromagnetic waves, such as the self-interference of a single photon. When a single photon is sent through an interferometer, it passes through both paths, interfering with itself, as waves do, yet is detected by a photomultiplier or other sensitive detector only once. A quantum theory of the interaction between electromagnetic radiation and matter such as electrons is described by the theory of quantum electrodynamics. Electromagnetic waves can be polarized, reflected, refracted, diffracted or interfere with each other.
Wave model
In homogeneous, isotropic media, electromagnetic radiation is a transverse wave, meaning that its oscillations are perpendicular to the direction of energy transfer and travel. This follows from the source-free divergence equations of Maxwell's theory, ∇ · E = 0 and ∇ · B = 0. These equations imply that any electromagnetic wave must be a transverse wave, where the electric field and the magnetic field are both perpendicular to the direction of wave propagation. The electric and magnetic parts of the field in an electromagnetic wave stand in a fixed ratio of strengths to satisfy the two Maxwell equations that specify how one is produced from the other. In dissipation-less (lossless) media, these E and B fields are also in phase, with both reaching maxima and minima at the same points in space. A common misconception is that the E and B fields in electromagnetic radiation are out of phase because a change in one produces the other, and this would produce a phase difference between them as sinusoidal functions (as indeed happens in electromagnetic induction, and in the near-field close to antennas). However, in the far-field EM radiation which is described by the two source-free Maxwell curl operator equations, a more correct description is that a time-change in one type of field is proportional to a space-change in the other. These derivatives require that the E and B fields in EMR are in-phase (see mathematics section below). An important aspect of light's nature is its frequency. The frequency of a wave is its rate of oscillation and is measured in hertz, the SI unit of frequency, where one hertz is equal to one oscillation per second. Light usually has multiple frequencies that sum to form the resultant wave.
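A brief numerical sketch of the transverse, in-phase plane wave described above (a standard textbook waveform with an assumed example wavelength, not a result taken from this article): both field components peak at the same points in space, and their amplitude ratio equals the speed of light.

import numpy as np

c = 2.998e8                  # speed of light, m/s
wavelength = 500e-9          # assumed example wavelength, m
k = 2 * np.pi / wavelength   # wavenumber
E0 = 1.0                     # arbitrary electric field amplitude, V/m

z = np.linspace(0, 2 * wavelength, 1000)  # two wavelengths of the wave at t = 0
E = E0 * np.cos(k * z)                    # electric field, x-component
B = (E0 / c) * np.cos(k * z)              # magnetic field, y-component, in phase with E

print(np.argmax(E) == np.argmax(B))      # True: maxima coincide (fields in phase)
print(np.isclose(E.max() / B.max(), c))  # True: fixed amplitude ratio |E|/|B| = c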
Different frequencies undergo different angles of refraction, a phenomenon known as dispersion. A monochromatic wave (a wave of a single frequency) consists of successive troughs and crests, and the distance between two adjacent crests or troughs is called the wavelength. Waves of the electromagnetic spectrum vary in size, from very long radio waves longer than a continent to very short gamma rays smaller than atom nuclei. Frequency is inversely proportional to wavelength, according to the equation: where v is the speed of the wave (c in a vacuum or less in other media), f is the frequency and λ is the wavelength. As waves cross boundaries between different media, their speeds change but their frequencies remain constant. Electromagnetic waves in free space must be solutions of Maxwell's electromagnetic wave equation. Two main classes of solutions are known, namely plane waves and spherical waves. The plane waves may be viewed as the limiting case of spherical waves at a very large (ideally infinite) distance from the source. Both types of waves can have a waveform which is an arbitrary time function (so long as it is sufficiently differentiable to conform to the wave equation). As with any time function, this can be decomposed by means of Fourier analysis into its frequency spectrum, or individual sinusoidal components, each of which contains a single frequency, amplitude and phase. Such a component wave is said to be monochromatic. A monochromatic electromagnetic wave can be characterized by its frequency or wavelength, its peak amplitude, its phase relative to some reference phase, its direction of propagation, and its polarization. Interference is the superposition of two or more waves resulting in a new wave pattern. If the fields have components in the same direction, they constructively interfere, while opposite directions cause destructive interference. An example of interference caused by EMR is electromagnetic interference (EMI) or as it is more commonly known as, radio-frequency interference (RFI). Additionally, multiple polarization signals can be combined (i.e. interfered) to form new states of polarization, which is known as parallel polarization state generation. The energy in electromagnetic waves is sometimes called radiant energy. Particle model and quantum theory An anomaly arose in the late 19th century involving a contradiction between the wave theory of light and measurements of the electromagnetic spectra that were being emitted by thermal radiators known as black bodies. Physicists struggled with this problem unsuccessfully for many years. It later became known as the ultraviolet catastrophe. In 1900, Max Planck developed a new theory of black-body radiation that explained the observed spectrum. Planck's theory was based on the idea that black bodies emit light (and other electromagnetic radiation) only as discrete bundles or packets of energy. These packets were called quanta. In 1905, Albert Einstein proposed that light quanta be regarded as real particles. Later the particle of light was given the name photon, to correspond with other particles being described around this time, such as the electron and proton. A photon has an energy, E, proportional to its frequency, f, by where h is Planck's constant, is the wavelength and c is the speed of light. This is sometimes known as the Planck–Einstein equation. In quantum theory (see first quantization) the energy of the photons is thus directly proportional to the frequency of the EMR wave. 
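Since frequency and wavelength are inversely proportional, the conversion f = c/λ is straightforward to tabulate; the Python sketch below does so for a few representative vacuum wavelengths (round illustrative values, not figures from this article).

c = 2.998e8  # speed of light in vacuum, m/s

examples = {            # representative vacuum wavelengths, assumed for illustration
    "FM radio wave": 3.0,      # metres
    "green light": 550e-9,
    "hard X-ray": 0.1e-9,
}

for name, lam in examples.items():
    f = c / lam  # frequency and wavelength are inversely proportional
    print(f"{name:13s} wavelength = {lam:.3g} m  ->  frequency = {f:.3g} Hz")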
Likewise, the momentum p of a photon is also proportional to its frequency and inversely proportional to its wavelength: The source of Einstein's proposal that light was composed of particles (or could act as particles in some circumstances) was an experimental anomaly not explained by the wave theory: the photoelectric effect, in which light striking a metal surface ejected electrons from the surface, causing an electric current to flow across an applied voltage. Experimental measurements demonstrated that the energy of individual ejected electrons was proportional to the frequency, rather than the intensity, of the light. Furthermore, below a certain minimum frequency, which depended on the particular metal, no current would flow regardless of the intensity. These observations appeared to contradict the wave theory, and for years physicists tried in vain to find an explanation. In 1905, Einstein explained this puzzle by resurrecting the particle theory of light to explain the observed effect. Because of the preponderance of evidence in favor of the wave theory, however, Einstein's ideas were met initially with great skepticism among established physicists. Eventually Einstein's explanation was accepted as new particle-like behavior of light was observed, such as the Compton effect. As a photon is absorbed by an atom, it excites the atom, elevating an electron to a higher energy level (one that is on average farther from the nucleus). When an electron in an excited molecule or atom descends to a lower energy level, it emits a photon of light at a frequency corresponding to the energy difference. Since the energy levels of electrons in atoms are discrete, each element and each molecule emits and absorbs its own characteristic frequencies. Immediate photon emission is called fluorescence, a type of photoluminescence. An example is visible light emitted from fluorescent paints, in response to ultraviolet (blacklight). Many other fluorescent emissions are known in spectral bands other than visible light. Delayed emission is called phosphorescence. Wave–particle duality The modern theory that explains the nature of light includes the notion of wave–particle duality. More generally, the theory states that everything has both a particle nature and a wave nature, and various experiments can be done to bring out one or the other. The particle nature is more easily discerned using an object with a large mass. A bold proposition by Louis de Broglie in 1924 led the scientific community to realize that matter (e.g. electrons) also exhibits wave–particle duality. Wave and particle effects of electromagnetic radiation Together, wave and particle effects fully explain the emission and absorption spectra of EM radiation. The matter-composition of the medium through which the light travels determines the nature of the absorption and emission spectrum. These bands correspond to the allowed energy levels in the atoms. Dark bands in the absorption spectrum are due to the atoms in an intervening medium between source and observer. The atoms absorb certain frequencies of the light between emitter and detector/eye, then emit them in all directions. A dark band appears to the detector, due to the radiation scattered out of the beam. For instance, dark bands in the light emitted by a distant star are due to the atoms in the star's atmosphere. A similar phenomenon occurs for emission, which is seen when an emitting gas glows due to excitation of the atoms from any mechanism, including heat. 
As electrons descend to lower energy levels, a spectrum is emitted that represents the jumps between the energy levels of the electrons, but lines are seen because again emission happens only at particular energies after excitation. An example is the emission spectrum of nebulae. Rapidly moving electrons are most sharply accelerated when they encounter a region of force, so they are responsible for producing much of the highest frequency electromagnetic radiation observed in nature. These phenomena can aid various chemical determinations for the composition of gases lit from behind (absorption spectra) and for glowing gases (emission spectra). Spectroscopy (for example) determines what chemical elements comprise a particular star. Spectroscopy is also used in the determination of the distance of a star, using the red shift. Propagation speed When any wire (or other conducting object such as an antenna) conducts alternating current, electromagnetic radiation is propagated at the same frequency as the current. In many such situations it is possible to identify an electrical dipole moment that arises from separation of charges due to the exciting electrical potential, and this dipole moment oscillates in time, as the charges move back and forth. This oscillation at a given frequency gives rise to changing electric and magnetic fields, which then set the electromagnetic radiation in motion. At the quantum level, electromagnetic radiation is produced when the wavepacket of a charged particle oscillates or otherwise accelerates. Charged particles in a stationary state do not move, but a superposition of such states may result in a transition state that has an electric dipole moment that oscillates in time. This oscillating dipole moment is responsible for the phenomenon of radiative transition between quantum states of a charged particle. Such states occur (for example) in atoms when photons are radiated as the atom shifts from one stationary state to another. As a wave, light is characterized by a velocity (the speed of light), wavelength, and frequency. As particles, light is a stream of photons. Each has an energy related to the frequency of the wave given by Planck's relation E = hf, where E is the energy of the photon, h is Planck's constant, 6.626 × 10−34 J·s, and f is the frequency of the wave. One rule is obeyed regardless of circumstances: EM radiation in a vacuum travels at the speed of light, relative to the observer, regardless of the observer's velocity. In a medium (other than vacuum), velocity factor or refractive index are considered, depending on frequency and application. Both of these are ratios of the speed in a medium to speed in a vacuum. Special theory of relativity By the late nineteenth century, various experimental anomalies could not be explained by the simple wave theory. One of these anomalies involved a controversy over the speed of light. The speed of light and other EMR predicted by Maxwell's equations did not appear unless the equations were modified in a way first suggested by FitzGerald and Lorentz (see history of special relativity), or else otherwise that speed would depend on the speed of observer relative to the "medium" (called luminiferous aether) which supposedly "carried" the electromagnetic wave (in a manner analogous to the way air carries sound waves). Experiments failed to find any observer effect. In 1905, Einstein proposed that space and time appeared to be velocity-changeable entities for light propagation and all other processes and laws. 
These changes accounted for the constancy of the speed of light and all electromagnetic radiation, from the viewpoints of all observers—even those in relative motion. 
Lewis, Hemingway was a journalist before becoming a novelist. After leaving high school he went to work for The Kansas City Star as a cub reporter. Although he stayed there for only six months, he relied on the Star's style guide as a foundation for his writing: "Use short sentences. Use short first paragraphs. Use vigorous English. Be positive, not negative."
World War I
In December 1917, after being rejected by the U.S. Army for poor eyesight, Hemingway responded to a Red Cross recruitment effort and signed on to be an ambulance driver in Italy. In May 1918, he sailed from New York, and arrived in Paris as the city was under bombardment from German artillery. That June he arrived at the Italian Front. On his first day in Milan, he was sent to the scene of a munitions factory explosion to join rescuers retrieving the shredded remains of female workers. He described the incident in his 1932 non-fiction book Death in the Afternoon: "I remember that after we searched quite thoroughly for the complete dead we collected fragments." A few days later, he was stationed at Fossalta di Piave. On July 8, he was seriously wounded by mortar fire, having just returned from the canteen bringing chocolate and cigarettes for the men at the front line. Despite his wounds, Hemingway assisted Italian soldiers to safety, for which he was decorated with the Italian Silver Medal of Military Valor. He was still only 18 at the time. Hemingway later said of the incident: "When you go to war as a boy you have a great illusion of immortality. Other people get killed; not you ... Then when you are badly wounded the first time you lose that illusion and you know it can happen to you." He sustained severe shrapnel wounds to both legs, underwent an immediate operation at a distribution center, and spent five days at a field hospital before he was transferred for recuperation to the Red Cross hospital in Milan. He spent six months at the hospital, where he met and formed a strong friendship with "Chink" Dorman-Smith that lasted for decades and shared a room with future American foreign service officer, ambassador, and author Henry Serrano Villard. While recuperating he fell in love with Agnes von Kurowsky, a Red Cross nurse seven years his senior. When Hemingway returned to the United States in January 1919, he believed Agnes would join him within months and the two would marry. Instead, he received a letter in March with her announcement that she was engaged to an Italian officer. Biographer Jeffrey Meyers writes Agnes's rejection devastated and scarred the young man; in future relationships, Hemingway followed a pattern of abandoning a wife before she abandoned him.
Toronto and Chicago
Hemingway returned home early in 1919 to a time of readjustment. Before the age of 20, he had gained from the war a maturity that was at odds with living at home without a job and with the need for recuperation. As Reynolds explains, "Hemingway could not really tell his parents what he thought when he saw his bloody knee." He was not able to tell them how scared he had been "in another country with surgeons who could not tell him in English if his leg was coming off or not." In September, he took a fishing and camping trip with high-school friends to the back-country of Michigan's Upper Peninsula. The trip became the inspiration for his short story "Big Two-Hearted River", in which the semi-autobiographical character Nick Adams takes to the country to find solitude after returning from war. 
A family friend offered him a job in Toronto, and with nothing else to do, he accepted. Late that year he began as a freelancer and staff writer for the Toronto Star Weekly. He returned to Michigan the following June and then moved to Chicago in September 1920 to live with friends, while still filing stories for the Toronto Star. In Chicago, he worked as an associate editor of the monthly journal Cooperative Commonwealth, where he met novelist Sherwood Anderson. When St. Louis native Hadley Richardson came to Chicago to visit the sister of Hemingway's roommate, Hemingway became infatuated. He later claimed, "I knew she was the girl I was going to marry." Hadley, red-haired, with a "nurturing instinct", was eight years older than Hemingway. Despite the age difference, Hadley, who had grown up with an overprotective mother, seemed less mature than usual for a young woman her age. Bernice Kert, author of The Hemingway Women, claims Hadley was "evocative" of Agnes, but that Hadley had a childishness that Agnes lacked. The two corresponded for a few months and then decided to marry and travel to Europe. They wanted to visit Rome, but Sherwood Anderson convinced them to visit Paris instead, writing letters of introduction for the young couple. They were married on September 3, 1921; two months later Hemingway was hired as a foreign correspondent for the Toronto Star, and the couple left for Paris. Of Hemingway's marriage to Hadley, Meyers claims: "With Hadley, Hemingway achieved everything he had hoped for with Agnes: the love of a beautiful woman, a comfortable income, a life in Europe." Paris Carlos Baker, Hemingway's first biographer, believes that while Anderson suggested Paris because "the monetary exchange rate" made it an inexpensive place to live, more importantly it was where "the most interesting people in the world" lived. In Paris, Hemingway met American writer and art collector Gertrude Stein, Irish novelist James Joyce, American poet Ezra Pound (who "could help a young writer up the rungs of a career") and other writers. The Hemingway of the early Paris years was a "tall, handsome, muscular, broad-shouldered, brown-eyed, rosy-cheeked, square-jawed, soft-voiced young man." He and Hadley lived in a small walk-up at 74 rue du Cardinal Lemoine in the Latin Quarter, and he worked in a rented room in a nearby building. Stein, who was the bastion of modernism in Paris, became Hemingway's mentor and godmother to his son Jack; she introduced him to the expatriate artists and writers of the Montparnasse Quarter, whom she referred to as the "Lost Generation"—a term Hemingway popularized with the publication of The Sun Also Rises. A regular at Stein's salon, Hemingway met influential painters such as Pablo Picasso, Joan Miró, and Juan Gris. He eventually withdrew from Stein's influence, and their relationship deteriorated into a literary quarrel that spanned decades. Ezra Pound met Hemingway by chance at Sylvia Beach's bookshop Shakespeare and Company in 1922. The two toured Italy in 1923 and lived on the same street in 1924. They forged a strong friendship, and in Hemingway, Pound recognized and fostered a young talent. Pound introduced Hemingway to James Joyce, with whom Hemingway frequently embarked on "alcoholic sprees". During his first 20 months in Paris, Hemingway filed 88 stories for the Toronto Star newspaper. 
He covered the Greco-Turkish War, where he witnessed the burning of Smyrna, and wrote travel pieces such as "Tuna Fishing in Spain" and "Trout Fishing All Across Europe: Spain Has the Best, Then Germany". He described also the retreat of the Greek army with civilians from East Thrace. Hemingway was devastated on learning that Hadley had lost a suitcase filled with his manuscripts at the Gare de Lyon as she was traveling to Geneva to meet him in December 1922. In the following September the couple returned to Toronto, where their son John Hadley Nicanor was born on October 10, 1923. During their absence, Hemingway's first book, Three Stories and Ten Poems, was published. Two of the stories it contained were all that remained after the loss of the suitcase, and the third had been written early the previous year in Italy. Within months a second volume, in our time (without capitals), was published. The small volume included six vignettes and a dozen stories Hemingway had written the previous summer during his first visit to Spain, where he discovered the thrill of the corrida. He missed Paris, considered Toronto boring, and wanted to return to the life of a writer, rather than live the life of a journalist. Hemingway, Hadley and their son (nicknamed Bumby) returned to Paris in January 1924 and moved into a new apartment on the rue Notre-Dame des Champs. Hemingway helped Ford Madox Ford edit The Transatlantic Review, which published works by Pound, John Dos Passos, Baroness Elsa von Freytag-Loringhoven, and Stein, as well as some of Hemingway's own early stories such as "Indian Camp". When In Our Time was published in 1925, the dust jacket bore comments from Ford. "Indian Camp" received considerable praise; Ford saw it as an important early story by a young writer, and critics in the United States praised Hemingway for reinvigorating the short story genre with his crisp style and use of declarative sentences. Six months earlier, Hemingway had met F. Scott Fitzgerald, and the pair formed a friendship of "admiration and hostility". Fitzgerald had published The Great Gatsby the same year: Hemingway read it, liked it, and decided his next work had to be a novel. With his wife Hadley, Hemingway first visited the Festival of San Fermín in Pamplona, Spain, in 1923, where he became fascinated by bullfighting. It is at this time that he began to be referred to as "Papa", even by much older friends. Hadley would much later recall that Hemingway had his own nicknames for everyone and that he often did things for his friends; she suggested that he liked to be looked up to. She didn't remember precisely how the nickname came into being; however, it certainly stuck. The Hemingways returned to Pamplona in 1924 and a third time in June 1925; that year they brought with them a group of American and British expatriates: Hemingway's Michigan boyhood friend Bill Smith, Donald Ogden Stewart, Lady Duff Twysden (recently divorced), her lover Pat Guthrie, and Harold Loeb. A few days after the fiesta ended, on his birthday (July 21), he began to write the draft of what would become The Sun Also Rises, finishing eight weeks later. A few months later, in December 1925, the Hemingways left to spend the winter in Schruns, Austria, where Hemingway began revising the manuscript extensively. Pauline Pfeiffer joined them in January and against Hadley's advice, urged Hemingway to sign a contract with Scribner's. 
He left Austria for a quick trip to New York to meet with the publishers, and on his return, during a stop in Paris, began an affair with Pfeiffer, before returning to Schruns to finish the revisions in March. The manuscript arrived in New York in April; he corrected the final proof in Paris in August 1926, and Scribner's published the novel in October. The Sun Also Rises epitomized the post-war expatriate generation, received good reviews and is "recognized as Hemingway's greatest work". Hemingway himself later wrote to his editor Max Perkins that the "point of the book" was not so much about a generation being lost, but that "the earth abideth forever"; he believed the characters in The Sun Also Rises may have been "battered" but were not lost. Hemingway's marriage to Hadley deteriorated as he was working on The Sun Also Rises. In early 1926, Hadley became aware of his affair with Pfeiffer, who came to Pamplona with them that July. On their return to Paris, Hadley asked for a separation; in November she formally requested a divorce. They split their possessions while Hadley accepted Hemingway's offer of the proceeds from The Sun Also Rises. The couple were divorced in January 1927, and Hemingway married Pfeiffer in May. Pfeiffer, who was from a wealthy Catholic Arkansas family, had moved to Paris to work for Vogue magazine. Before their marriage, Hemingway converted to Catholicism. They honeymooned in Le Grau-du-Roi, where he contracted anthrax, and he planned his next collection of short stories, Men Without Women, which was published in October 1927, and included his boxing story "Fifty Grand". Cosmopolitan magazine editor-in-chief Ray Long praised "Fifty Grand", calling it, "one of the best short stories that ever came to my hands ... the best prize-fight story I ever read ... a remarkable piece of realism." By the end of the year Pauline, who was pregnant, wanted to move back to America. John Dos Passos recommended Key West, and they left Paris in March 1928. Hemingway suffered a severe injury in their Paris bathroom when he pulled a skylight down on his head thinking he was pulling on a toilet chain. This left him with a prominent forehead scar, which he carried for the rest of his life. When Hemingway was asked about the scar, he was reluctant to answer. After his departure from Paris, Hemingway "never again lived in a big city". Key West and the Caribbean Hemingway and Pauline traveled to Kansas City, where their son Patrick was born on June 28, 1928. Pauline had a difficult delivery; Hemingway fictionalized a version of the event as a part of A Farewell to Arms. After Patrick's birth, Pauline and Hemingway traveled to Wyoming, Massachusetts, and New York. In the winter, he was in New York with Bumby, about to board a train to Florida, when he received a cable telling him that his father had killed himself. Hemingway was devastated, having earlier written to his father telling him not to worry about financial difficulties; the letter arrived minutes after the suicide. He realized how Hadley must have felt after her own father's suicide in 1903, and he commented, "I'll probably go the same way." Upon his return to Key West in December, Hemingway worked on the draft of A Farewell to Arms before leaving for France in January. He had finished it in August but delayed the revision. The serialization in Scribner's Magazine was scheduled to begin in May, but as late as April, Hemingway was still working on the ending, which he may have rewritten as many as seventeen times. 
The completed novel was published on September 27. Biographer James Mellow believes A Farewell to Arms established Hemingway's stature as a major American writer and displayed a level of complexity not apparent in The Sun Also Rises.(The story was turned into a play by war veteran Laurence Stallings which was the basis for the film starring Gary Cooper.) In Spain in mid-1929, Hemingway researched his next work, Death in the Afternoon. He wanted to write a comprehensive treatise on bullfighting, explaining the toreros and corridas complete with glossaries and appendices, because he believed bullfighting was "of great tragic interest, being literally of life and death." During the early 1930s, Hemingway spent his winters in Key West and summers in Wyoming, where he found "the most beautiful country he had seen in the American West" and hunted deer, elk, and grizzly bear. He was joined there by Dos Passos, and in November 1930, after bringing Dos Passos to the train station in Billings, Montana, Hemingway broke his arm in a car accident. The surgeon tended the compound spiral fracture and bound the bone with kangaroo tendon. Hemingway was hospitalized for seven weeks, with Pauline tending to him; the nerves in his writing hand took as long as a year to heal, during which time he suffered intense pain. His third child, Gloria Hemingway, was born a year later on November 12, 1931, in Kansas City as "Gregory Hancock Hemingway". Pauline's uncle bought the couple a house in Key West with a carriage house, the second floor of which was converted into a writing studio. While in Key West, Hemingway frequented the local bar Sloppy Joe's. He invited friends—including Waldo Peirce, Dos Passos, and Max Perkins—to join him on fishing trips and on an all-male expedition to the Dry Tortugas. Meanwhile, he continued to travel to Europe and to Cuba, and—although in 1933 he wrote of Key West, "We have a fine house here, and kids are all well"—Mellow believes he "was plainly restless". In 1933, Hemingway and Pauline went on safari to Kenya. The 10-week trip provided material for Green Hills of Africa, as well as for the short stories "The Snows of Kilimanjaro" and "The Short Happy Life of Francis Macomber". The couple visited Mombasa, Nairobi, and Machakos in Kenya; then moved on to Tanganyika Territory, where they hunted in the Serengeti, around Lake Manyara, and west and southeast of present-day Tarangire National Park. Their guide was the noted "white hunter" Philip Percival who had guided Theodore Roosevelt on his 1909 safari. During these travels, Hemingway contracted amoebic dysentery that caused a prolapsed intestine, and he was evacuated by plane to Nairobi, an experience reflected in "The Snows of Kilimanjaro". On Hemingway's return to Key West in early 1934, he began work on Green Hills of Africa, which he published in 1935 to mixed reviews. Hemingway bought a boat in 1934, named it the Pilar, and began sailing the Caribbean. In 1935 he first arrived at Bimini, where he spent a considerable amount of time. During this period he also worked on To Have and Have Not, published in 1937 while he was in Spain, the only novel he wrote during the 1930s. 
In 1934 Hemingway invited Charles Cadwalader, president of the Academy of Natural Sciences of Philadelphia, and the Academy's ichthyologist, Henry Weed Fowler, to Cuba to study billfishes. They stayed with Hemingway for six weeks, and the three men developed a friendship that continued after the trip; Hemingway afterwards sent specimens to, and corresponded with, both Fowler and Cadwalader. Fowler named the spinycheek scorpionfish (Neomerinthe hemingwayi) in honor of the author.
Spanish Civil War
In 1937, Hemingway left for Spain to cover the Spanish Civil War for the North American Newspaper Alliance (NANA), despite Pauline's reluctance to have him working in a war zone. He and Dos Passos both signed on to work with Dutch filmmaker Joris Ivens as screenwriters for The Spanish Earth. Dos Passos left the project after the execution of José Robles, his friend and Spanish translator, which caused a rift between the two writers. Hemingway was joined in Spain by journalist and writer Martha Gellhorn, whom he had met in Key West a year earlier. Like Hadley, Martha was a St. Louis native, and like Pauline, she had worked for Vogue in Paris. Of Martha, Kert explains, "she never catered to him the way other women did". In July 1937 he attended the Second International Writers' Congress, held in Valencia, Barcelona and Madrid and attended by many writers including André Malraux, Stephen Spender and Pablo Neruda; its purpose was to discuss the attitude of intellectuals to the war. Late in 1937, while in Madrid with Martha, Hemingway wrote his only play, The Fifth Column, as the city was being bombarded by Francoist forces. He returned to Key West for a few months, then back to Spain twice in 1938, where he was present at the Battle of the Ebro, the last republican stand, and was among the last of the British and American journalists to leave the battle as they crossed the river.
Cuba
In early 1939, Hemingway crossed to Cuba in his boat to live in the Hotel Ambos Mundos in Havana. This was the separation phase of a slow and painful split from Pauline, which began when Hemingway met Martha Gellhorn. Martha soon joined him in Cuba, and they rented "Finca Vigía" ("Lookout Farm"), a property outside Havana. Pauline and the children left Hemingway that summer, after the family was reunited during a visit to Wyoming; when his divorce from Pauline was finalized, he and Martha were married on November 20, 1940, in Cheyenne, Wyoming. Hemingway moved his primary summer residence to Ketchum, Idaho, just outside the newly built resort of Sun Valley, and moved his winter residence to Cuba. He had been disgusted when a Parisian friend allowed his cats to eat from the table, but he became enamored of cats in Cuba and kept dozens of them on the property. Descendants of his cats live at his Key West home. Gellhorn inspired him to write his most famous novel, For Whom the Bell Tolls, which he began in March 1939 and finished in July 1940. It was published in October 1940. His pattern was to move around while working on a manuscript, and he wrote For Whom the Bell Tolls in Cuba, Wyoming, and Sun Valley. It became a Book-of-the-Month Club choice, sold half a million copies within months, was nominated for a Pulitzer Prize and, in the words of Meyers, "triumphantly re-established Hemingway's literary reputation". In January 1941, Martha was sent to China on assignment for Collier's magazine. Hemingway went with her, sending in dispatches for the newspaper PM, but in general he disliked China. 
A 2009 book suggests during that period he may have been recruited to work for Soviet intelligence agents under the name "Agent Argo". They returned to Cuba before the declaration of war by the United States that December, when he convinced the Cuban government to help him refit the Pilar, which he intended to use to ambush German submarines off the coast of Cuba. World War II Hemingway was in Europe from May 1944 to March 1945. When he arrived in London, he met Time magazine correspondent Mary Welsh, with whom he became infatuated. Martha had been forced to cross the Atlantic in a ship filled with explosives because Hemingway refused to help her get a press pass on a plane, and she arrived in London to find him hospitalized with a concussion from a car accident. She was unsympathetic to his plight; she accused him of being a bully and told him that she was "through, absolutely finished". The last time that Hemingway saw Martha was in March 1945 as he was preparing to return to Cuba, and their divorce was finalized later that year. Meanwhile, he had asked Mary Welsh to marry him on their third meeting. Hemingway accompanied the troops to the Normandy Landings wearing a large head bandage, according to Meyers, but he was considered "precious cargo" and not allowed ashore. The landing craft came within sight of Omaha Beach before coming under enemy fire and turning back. Hemingway later wrote in Collier's that he could see "the first, second, third, fourth and fifth waves of [landing troops] lay where they had fallen, looking like so many heavily laden bundles on the flat pebbly stretch between the sea and first cover". Mellow explains that, on that first day, none of the correspondents were allowed to land and Hemingway was returned to the Dorothea Dix. Late in July, he attached himself to "the 22nd Infantry Regiment commanded by Col. Charles "Buck" Lanham, as it drove toward Paris", and Hemingway became de facto leader to a small band of village militia in Rambouillet outside of Paris. Paul Fussell remarks: "Hemingway got into considerable trouble playing infantry captain to a group of Resistance people that he gathered because a correspondent is not supposed to lead troops, even if he does it well." This was in fact in contravention of the Geneva Convention, and Hemingway was brought up on formal charges; he said that he "beat the rap" by claiming that he only offered advice. On August 25, he was present at the liberation of Paris as a journalist; contrary to the Hemingway legend, he was not the first into the city, nor did he liberate the Ritz. In Paris, he visited Sylvia Beach and Pablo Picasso with Mary Welsh, who joined him there; in a spirit of happiness, he forgave Gertrude Stein. Later that year, he observed heavy fighting in the Battle of Hürtgen Forest. On December 17, 1944, he had himself driven to Luxembourg in spite of illness to cover The Battle of the Bulge. As soon as he arrived, however, Lanham handed him to the doctors, who hospitalized him with pneumonia; he recovered a week later, but most of the fighting was over. In 1947, Hemingway was awarded a Bronze Star for his bravery during World War II. He was recognized for having been "under fire in combat areas in order to obtain an accurate picture of conditions", with the commendation that "through his talent of expression, Mr. Hemingway enabled readers to obtain a vivid picture of the difficulties and triumphs of the front-line soldier and his organization in combat". 
Cuba and the Nobel Prize Hemingway said he "was out of business as a writer" from 1942 to 1945 during his residence in Cuba. In 1946 he married Mary, who had an ectopic pregnancy five months later. The Hemingway family suffered a series of accidents and health problems in the years following the war: in a 1945 car accident, he "smashed his knee" and sustained another "deep wound on his forehead"; Mary broke first her right ankle and then her left in successive skiing accidents. A 1947 car accident left Patrick with a head wound and severely ill. Hemingway sank into depression as his literary friends began to die: in 1939 William Butler Yeats and Ford Madox Ford; in 1940 F. Scott Fitzgerald; in 1941 Sherwood Anderson and James Joyce; in 1946 Gertrude Stein ; and the following year in 1947, Max Perkins, Hemingway's long-time Scribner's editor, and friend. During this period, he suffered from severe headaches, high blood pressure, weight problems, and eventually diabetes—much of which was the result of previous accidents and many years of heavy drinking. Nonetheless, in January 1946, he began work on The Garden of Eden, finishing 800 pages by June. During the post-war years, he also began work on a trilogy tentatively titled "The Land", "The Sea" and "The Air", which he wanted to combine in one novel titled The Sea Book. However, both projects stalled, and Mellow says that Hemingway's inability to continue was "a symptom of his troubles" during these years. In 1948, Hemingway and Mary traveled to Europe, staying in Venice for several months. While there, Hemingway fell in love with the then 19-year-old Adriana Ivancich. The platonic love affair inspired the novel Across the River and into the Trees, written in Cuba during a time of strife with Mary, and published in 1950 to negative reviews. The following year, furious at the critical reception of Across the River and Into the Trees, he wrote the draft of The Old Man and the Sea in eight weeks, saying that it was "the best I can write ever for all of my life". The Old Man and the Sea became a book-of-the-month selection, made Hemingway an international celebrity, and won the Pulitzer Prize in May 1952, a month before he left for his second trip to Africa. In 1954, while in Africa, Hemingway was almost fatally injured in two successive plane crashes. He chartered a sightseeing flight over the Belgian Congo as a Christmas present to Mary. On their way to photograph Murchison Falls from the air, the plane struck an abandoned utility pole and "crash landed in heavy brush". Hemingway's injuries included a head wound, while Mary broke two ribs. The next day, attempting to reach medical care in Entebbe, they boarded a second plane that exploded at take-off, with Hemingway suffering burns and another concussion, this one serious enough to cause
slid into depression, from which he was unable to recover. The Finca Vigía became crowded with guests and tourists, as Hemingway, beginning to become unhappy with life there, considered a permanent move to Idaho. In 1959 he bought a home overlooking the Big Wood River, outside Ketchum, and left Cuba—although he apparently remained on easy terms with the Castro government, telling The New York Times he was "delighted" with Castro's overthrow of Batista. He was in Cuba in November 1959, between returning from Pamplona and traveling west to Idaho, and the following year for his 61st birthday; however, that year he and Mary decided to leave after hearing the news that Castro wanted to nationalize property owned by Americans and other foreign nationals. On July 25, 1960, the Hemingways left Cuba for the last time, leaving art and manuscripts in a bank vault in Havana. After the 1961 Bay of Pigs Invasion, the Finca Vigía was expropriated by the Cuban government, complete with Hemingway's collection of "four to six thousand books". President Kennedy arranged for Mary Hemingway to travel to Cuba where she met Fidel Castro and obtained her husband's papers and painting in return for donating Finca Vigía to Cuba. Idaho and suicide Hemingway continued to rework the material that was published as A Moveable Feast through the 1950s. In mid-1959, he visited Spain to research a series of bullfighting articles commissioned by Life magazine. Life wanted only 10,000 words, but the manuscript grew out of control. He was unable to organize his writing for the first time in his life, so he asked A. E. Hotchner to travel to Cuba to help him. Hotchner helped him trim the Life piece down to 40,000 words, and Scribner's agreed to a full-length book version (The Dangerous Summer) of almost 130,000 words. Hotchner found Hemingway to be "unusually hesitant, disorganized, and confused", and suffering badly from failing eyesight. Hemingway and Mary left Cuba for the last time on July 25, 1960. He set up a small office in his New York City apartment and attempted to work, but he left soon after. He then traveled alone to Spain to be photographed for the front cover of Life magazine. A few days later, the news reported that he was seriously ill and on the verge of dying, which panicked Mary until she received a cable from him telling her, "Reports false. Enroute Madrid. Love Papa." He was, in fact, seriously ill, and believed himself to be on the verge of a breakdown. Feeling lonely, he took to his bed for days, retreating into silence, despite having the first installments of The Dangerous Summer published in Life in September 1960 to good reviews. In October, he left Spain for New York, where he refused to leave Mary's apartment, presuming that he was being watched. She quickly took him to Idaho, where physician George Saviers met them at the train. At this time, Hemingway was constantly worried about money and his safety. He worried about his taxes and that he would never return to Cuba to retrieve the manuscripts that he had left in a bank vault. He became paranoid, thinking that the FBI was actively monitoring his movements in Ketchum. The FBI had, in fact, opened a file on him during World War II, when he used the Pilar to patrol the waters off Cuba, and J. Edgar Hoover had an agent in Havana watch him during the 1950s. Unable to care for her husband, Mary had Saviers fly Hemingway to the Mayo Clinic in Minnesota at the end of November for hypertension treatments, as he told his patient. 
The FBI knew that Hemingway was at the Mayo Clinic, as an agent later documented in a letter written in January 1961. Hemingway was checked in under Saviers's name to maintain anonymity. Meyers writes that "an aura of secrecy surrounds Hemingway's treatment at the Mayo" but confirms that he was treated with electroconvulsive therapy (ECT) as many as 15 times in December 1960 and was "released in ruins" in January 1961. Reynolds gained access to Hemingway's records at the Mayo, which document ten ECT sessions. The doctors in Rochester told Hemingway the depressive state for which he was being treated may have been caused by his long-term use of Reserpine and Ritalin. Hemingway was back in Ketchum in April 1961, three months after being released from the Mayo Clinic, when Mary "found Hemingway holding a shotgun" in the kitchen one morning. She called Saviers, who sedated him and admitted him to the Sun Valley Hospital; and once the weather cleared Saviers flew again to Rochester with his patient. Hemingway underwent three electroshock treatments during that visit. He was released at the end of June and was home in Ketchum on June 30. Two days later he "quite deliberately" shot himself with his favorite shotgun in the early morning hours of July 2, 1961. He had unlocked the basement storeroom where his guns were kept, gone upstairs to the front entrance foyer, and shot himself with the "double-barreled shotgun that he had used so often it might have been a friend", which was purchased from Abercrombie & Fitch. Mary was sedated and taken to the hospital, returning home the next day where she cleaned the house and saw to the funeral and travel arrangements. Bernice Kert writes that it "did not seem to her a conscious lie" when she told the press that his death had been accidental. In a press interview five years later, Mary confirmed that he had shot himself. Family and friends flew to Ketchum for the funeral, officiated by the local Catholic priest, who believed that the death had been accidental. An altar boy fainted at the head of the casket during the funeral, and Hemingway's brother Leicester wrote: "It seemed to me Ernest would have approved of it all." He is buried in the Ketchum cemetery. Hemingway's behavior during his final years had been similar to that of his father before he killed himself; his father may have had hereditary hemochromatosis, whereby the excessive accumulation of iron in tissues culminates in mental and physical deterioration. Medical records made available in 1991 confirmed that Hemingway had been diagnosed with hemochromatosis in early 1961. His sister Ursula and his brother Leicester also killed themselves. Other theories have arisen to explain Hemingway's decline in mental health, including that multiple concussions during his life may have caused him to develop chronic traumatic encephalopathy (CTE), leading to his eventual suicide. Hemingway's health was further complicated by heavy drinking throughout most of his life. A memorial to Hemingway just north of Sun Valley is inscribed on the base with a eulogy Hemingway had written for a friend several decades earlier: Best of all he loved the fall the leaves yellow on cottonwoods leaves floating on trout streams and above the hills the high blue windless skies ...Now he will be a part of them forever. Writing style The New York Times wrote in 1926 of Hemingway's first novel, "No amount of analysis can convey the quality of The Sun Also Rises. 
It is a truly gripping story, told in a lean, hard, athletic narrative prose that puts more literary English to shame." The Sun Also Rises is written in the spare, tight prose that made Hemingway famous, and, according to James Nagel, "changed the nature of American writing". In 1954, when Hemingway was awarded the Nobel Prize for Literature, it was for "his mastery of the art of narrative, most recently demonstrated in The Old Man and the Sea, and for the influence that he has exerted on contemporary style." Henry Louis Gates believes Hemingway's style was fundamentally shaped "in reaction to [his] experience of world war". After World War I, he and other modernists "lost faith in the central institutions of Western civilization" by reacting against the elaborate style of 19th-century writers and by creating a style "in which meaning is established through dialogue, through action, and silences—a fiction in which nothing crucial—or at least very little—is stated explicitly." Hemingway's fiction often used grammatical and stylistic structures from languages other than English. Critics Allen Josephs, Mimi Gladstein, and Jeffrey Herlihy-Mera have studied how Spanish influenced Hemingway's prose, which sometimes appears directly in the other language (in italics, as occurs in The Old Man and the Sea) or in English as literal translations. He also often used bilingual puns and crosslingual wordplay as stylistic devices. Because he began as a writer of short stories, Baker believes Hemingway learned to "get the most from the least, how to prune language, how to multiply intensities and how to tell nothing but the truth in a way that allowed for telling more than the truth." Hemingway called his style the iceberg theory: the facts float above water; the supporting structure and symbolism operate out of sight. The concept of the iceberg theory is sometimes referred to as the "theory of omission". Hemingway believed the writer could describe one thing (such as Nick Adams fishing in "The Big Two-Hearted River") though an entirely different thing occurs below the surface (Nick Adams concentrating on fishing to the extent that he does not have to think about anything else). Paul Smith writes that Hemingway's first stories, collected as In Our Time, showed he was still experimenting with his writing style. He avoided complicated syntax. About 70 percent of the sentences are simple sentences—a childlike syntax without subordination. Jackson Benson believes Hemingway used autobiographical details as framing devices about life in general—not only about his life. For example, Benson postulates that Hemingway used his experiences and drew them out with "what if" scenarios: "what if I were wounded in such a way that I could not sleep at night? What if I were wounded and made crazy, what would happen if I were sent back to the front?" Writing in "The Art of the Short Story", Hemingway explains: "A few things I have found to be true. If you leave out important things or events that you know about, the story is strengthened. If you leave or skip something because you do not know it, the story will be worthless. The test of any story is how very good the stuff that you, not your editors, omit." The simplicity of the prose is deceptive. Zoe Trodd believes Hemingway crafted skeletal sentences in response to Henry James's observation that World War I had "used up words". Hemingway offers a "multi-focal" photographic reality. His iceberg theory of omission is the foundation on which he builds. 
The syntax, which lacks subordinating conjunctions, creates static sentences. The photographic "snapshot" style creates a collage of images. Many types of internal punctuation (colons, semicolons, dashes, parentheses) are omitted in favor of short declarative sentences. The sentences build on each other, as events build to create a sense of the whole. Multiple strands exist in one story; an "embedded text" bridges to a different angle. He also uses other cinematic techniques of "cutting" quickly from one scene to the next; or of "splicing" a scene into another. Intentional omissions allow the reader to fill the gap, as though responding to instructions from the author and create three-dimensional prose. Hemingway habitually used the word "and" in place of commas. This use of polysyndeton may serve to convey immediacy. Hemingway's polysyndetonic sentence—or in later works his use of subordinate clauses—uses conjunctions to juxtapose startling visions and images. Benson compares them to haikus. Many of Hemingway's followers misinterpreted his lead and frowned upon all expression of emotion; Saul Bellow satirized this style as "Do you have emotions? Strangle them." However, Hemingway's intent was not to eliminate emotion, but to portray it more scientifically. Hemingway thought it would be easy, and pointless, to describe emotions; he sculpted collages of images in order to grasp "the real thing, the sequence of motion and fact which made the emotion and which would be as valid in a year or in ten years or, with luck and if you stated it purely enough, always". This use of an image as an objective correlative is characteristic of Ezra Pound, T. S. Eliot, James Joyce, and Marcel Proust. Hemingway's letters refer to Proust's Remembrance of Things Past several times over the years, and indicate he read the book at least twice. Themes Hemingway's writing includes themes of love, war, travel, wilderness, and loss. Hemingway often wrote about Americans abroad. "In six of the seven novels published during his lifetime," writes Jeffrey Herlihy in Hemingway's Expatriate Nationalism, "the protagonist is abroad, bilingual, and bicultural." Herlihy calls this "Hemingway's Transnational Archetype" and argues that the foreign settings, "far from being mere exotic backdrops or cosmopolitan milieus, are motivating factors in-character action." Critic Leslie Fiedler sees the theme he defines as "The Sacred Land"—the American West—extended in Hemingway's work to include mountains in Spain, Switzerland and Africa, and to the streams of Michigan. The American West is given a symbolic nod with the naming of the "Hotel Montana" in The Sun Also Rises and For Whom the Bell Tolls. According to Stoltzfus and Fiedler, in Hemingway's work, nature is a place for rebirth and rest; and it is where the hunter or fisherman might experience a moment of transcendence at the moment they kill their prey. Nature is where men exist without women: men fish; men hunt; men find redemption in nature. Although Hemingway does write about sports, such as fishing, Carlos Baker notes the emphasis is more on the athlete than the sport. At its core, much of Hemingway's work can be viewed in the light of American naturalism, evident in detailed descriptions such as those in "Big Two-Hearted River". Fiedler believes Hemingway inverts the American literary theme of the evil "Dark Woman" versus the good "Light Woman". 
The dark woman—Brett Ashley of The Sun Also Rises—is a goddess; the light woman—Margot Macomber of "The Short Happy Life of Francis Macomber"—is a murderess. Robert Scholes says early Hemingway stories, such as "A Very Short Story", present "a male character favorably and a female unfavorably". According to Rena Sanderson, early Hemingway critics lauded his male-centric world of masculine pursuits, and the fiction divided women into "castrators or love-slaves". Feminist critics attacked Hemingway as "public enemy number one", although more recent re-evaluations of his work "have given new visibility to Hemingway's female characters (and their strengths) and have revealed his own sensitivity to gender issues, thus casting doubts on the old assumption that his writings were one-sidedly masculine." Nina Baym believes that Brett Ashley and Margot Macomber "are the two outstanding examples of Hemingway's 'bitch women. The theme of women and death is evident in stories as early as "Indian Camp". The theme of death permeates Hemingway's work. Young believes the emphasis in "Indian Camp" was not so much on the woman who gives birth or the father who kills himself, but on Nick Adams who witnesses these events as a child, and becomes a "badly scarred and nervous young man". Hemingway sets the events in "Indian Camp" that shape the Adams persona. Young believes "Indian Camp" holds the "master key" to "what its author was up to for some thirty-five years of his writing career". Stoltzfus considers Hemingway's work to be more complex with a representation of the truth inherent in existentialism: if "nothingness" is embraced, then redemption is achieved at the moment of death. Those who face death with dignity and courage live an authentic life. Francis Macomber dies happy because the last hours of his life are authentic; the bullfighter in the corrida represents the pinnacle of a life lived with authenticity. In his paper The Uses of Authenticity: Hemingway and the Literary Field, Timo Müller writes that Hemingway's fiction is successful because the characters live an "authentic life", and the "soldiers, fishers, boxers and backwoodsmen are among the archetypes of authenticity in modern literature". The theme of emasculation is prevalent in Hemingway's work, notably in God Rest You Merry, Gentlemen and The Sun Also Rises. Emasculation, according to Fiedler, is a result of a generation of wounded soldiers; and of a generation in which women such as Brett gained emancipation. This also applies to the minor character, Frances Clyne, Cohn's girlfriend in the beginning of The Sun Also Rises. Her character supports the theme not only because the idea was presented early on in the novel but also the impact she had on Cohn in the start of the book while only appearing a small number of times. In God Rest You Merry, Gentlemen, the emasculation is literal, and related to religious guilt. Baker believes Hemingway's work emphasizes the "natural" versus the "unnatural". In "Alpine Idyll" the "unnaturalness" of skiing in the high country late spring snow is juxtaposed against the "unnaturalness" of the peasant who allowed his wife's dead body to linger too long in the shed during
others, Erica is taken in by the police. Upon realizing that his daughter has fully allied herself with a murder suspect, her father chooses to resign his position as Chief Constable rather than arrest her for assisting a felon. Though mutually undeclared, by this point she and Tisdall are in love. Tisdall sneaks into their house to see her, intending to surrender and assert he kidnapped her, to save her honour and her father's reputation. But she mentions that the coat had a box of matches from the Grand Hotel in a pocket. As Tisdall has never been there, he surmises perhaps the murderer has a connection to the hotel. The following evening, Erica and Will go to the hotel together, hoping to find him. In a memorably long, continuous sequence, the camera pans right from their entrance to the hotel and then moves forward from the very back of the hotel ballroom, finally focusing in extreme closeup on the drummer in a dance band performing in blackface. His eyes are twitching. He is Guy, the murderer. Recognizing Old Will in the audience, and seeing policemen nearby (who have actually followed Will hoping he'll lead them to Tisdall), Guy performs poorly due to fear. He is berated by the conductor and, during a break, takes medicine to try to control the twitching, but it makes him very sleepy. Eventually, in mid-performance, Guy passes out, drawing the attention of Erica and the police. Immediately after being revived and confronted, he confesses his crime and begins laughing hysterically. Reunited once again with Tisdall, Erica then tells her father that she thinks it is time they invited him to their home for dinner.
Main cast
Nova Pilbeam as Erica Burgoyne
Derrick De Marney as Robert Tisdall
Percy Marmont as Colonel Burgoyne
Edward Rigby as Old Will
Mary Clare as Erica's aunt Margaret
John Longden as Inspector Kent
George Curzon as Guy
Basil Radford as Erica's uncle Basil
Pamela Carme as Christine Clay
George Merritt as Detective Sergeant Miller
J. H. Roberts as the Solicitor, Henry Briggs
Jerry Verno as Lorry Driver
H. F. Maltby as Police Sergeant
John Miller as Police Constable
Syd Crossley as Policeman
Torin Thatcher as the owner of Nobby's Lodging House
Anna Konstam as Bathing Girl
Bill Shine as Manager of Tom's Hat Cafe
Beatrice Varley as Accused Man's Wife
Peter Thompson as Erica Burgoyne's bespectacled brother
Reception Variety called the film a "Pleasing, artless vehicle" for Nova Pilbeam, who was "charming" in her role, and concluded, "If the pic is not Hitchcock's best effort, it is by no means unworthy of him." Frank Nugent of The New York Times called it a "crisply paced, excellently performed film." The Monthly Film Bulletin wrote, "Innumerable small touches show Hitchcock's keen and penetrating observation and his knowledge of human nature. Comedy, romance, and thrills are skilfully blended." Harrison's Reports wrote, "Good melodramatic entertainment. Because of the novelty of the story, the interesting plot developments, and the expert direction by Alfred Hitchcock, one's attention is held from the beginning
Between as a war novel". The book was released as an audiobook by Blackstone Audio in December 2005, narrated by Anna Fields, better known as Kate Fleming. Reviews and reception Kirkus Reviews called the novel a "beautifully composed, unflinching and harrowing story". Nicholas Dinka, in a Quill & Quire review, wrote that the novel has "much decency and intelligence" and that both of its stories are "entirely plausible", but criticised it for the "remarkable dourness of its prose". While Dennis Lythgoe of Deseret News noted that "Bergen's book lives and breathes the Vietnam experience", Ron Charles, in his review for The Washington Post, observed that "Bergen's ability to dramatize trauma-induced disaffection is undeniable; whether readers will want to sink down that hole with his characters is less clear". Irene Wanner of The Seattle Times appreciated the novel for its writing. The novel won the Scotiabank Giller Prize in 2005, for which it was shortlisted alongside Luck (by Joan Barfoot), Sweetness in the Belly (by Camilla Gibb), Alligator (by Lisa Moore), and A Wall of Light (by Edeet Ravel). The judges, Warren Cariou, Elizabeth Hay, and Richard B. Wright, noted: "The Time in Between explores our need to understand the relationship between love and duty.... This is a subtle and elegantly written novel by an author in complete command of his talent". It also won the McNally Robinson Book of the Year Award in 2005. Bergen had earlier won the award in 1996 for
Ambassador to the United States, Fernando de los Ríos, began one of the film's screenings in New York in 1937. The second part, "They Shall Not Pass", was based on a short film, No Pasaran!, made by the Artkino Film Company of the Soviet Union, where van Dongen was working at the time the film was made. John Dos Passos narrated parts of the film, and the commentary was written by Dos Passos, Ernest Hemingway, Archibald MacLeish, and Prudencio de Pareda. Erickson writes that "The horrendous images of battlefield carnage, not to mention the close-ups of suffering and dying Spanish children, still pack a wallop when seen today." Later, Hemingway, Dos Passos, Lillian Hellman and others founded the company Contemporary Historians, which produced another film, The Spanish Earth (1937), directed by Joris Ivens and edited by van Dongen. Spain in Flames was banned in New Brunswick, New Jersey, and Waterbury, Connecticut. A screening of the film, accompanied by a speech
Flood, known for engineering and producing U2 and Depeche Mode albums, was employed as co-producer on The Downward Spiral. It became his last collaboration with Nine Inch Nails due to creative differences. A "very dangerously self-destructive," humorous short song written for the album, "Just Do It", was not included in the final version and was criticized by Flood, who felt Reznor had "gone too far." Reznor completed the last song written for the album, "Big Man with a Gun", in late 1993. After the album's recording, Reznor moved out and the house was demolished shortly thereafter. The Downward Spiral entered its mixing and mastering processes, done at Record Plant Studios and A&M Studios with Alan Moulder, who subsequently took on more extensive production duties for future album releases. Music and lyrics Numerous layers of metaphors are present throughout The Downward Spiral, leaving it open to wide interpretation. The album relays nihilism and is defined by a prominent theme of self-abuse and self-control. It is a semi-autobiographical concept album, in which the overarching plot follows the protagonist's descent into madness in his own inner solipsistic world through a metaphorical "downward spiral", dealing with religion, dehumanization, violence, disease, society, drugs, sex, and finally, suicide. Reznor described the concept as consisting of "someone who sheds everything around them to a potential nothingness, but through career, religion, relationship, belief and so on." Media journalists like The New York Times writer Jon Pareles noted the album's theme of angst had already been used by grunge bands like Nirvana, and that Nine Inch Nails' depiction was more generalized. Using elements of genres such as techno, dance, electronic, heavy metal, and hard rock, The Downward Spiral is considered industrial rock, industrial, alternative rock and industrial metal. Reznor regularly uses noise and distortion in his song arrangements that do not follow verse–chorus form, and incorporates dissonance with chromatic melody or harmony (or both). The treatment of metal guitars in Broken is carried over to The Downward Spiral, which includes innovative techniques such as expanded song structures and unconventional time signatures. The album features a wide range of textures and moods to illustrate the mental progress of the central protagonist. Reznor's singing follows a similar pattern from beginning to end, frequently moving from whispers to screams. These techniques are all used in the song "Hurt", which features a highly dissonant tritone played on guitar during the verses, a B5#11, emphasized when Reznor sings the eleventh note on the word "I" every time the B/E# dyad is played. "Mr. Self Destruct", a song about a powerful person, follows a build-up sampled from the 1971 film THX 1138 with an "industrial roar" and is accompanied by an audio loop of a pinion rotating. "The Becoming" expresses the state of being dead and the protagonist's transformation into a non-human organism. "Closer" concludes with a chromatic piano motif. The melody is introduced during the second verse of "Piggy" on organ, then reappears in power chords at drop D tuning throughout the chorus of "Heresy", and recurs for the final time on "The Downward Spiral". The album was chiefly inspired by David Bowie's Low, an experimental rock album which Reznor related to on songwriting, mood, and structures, as well as progressive rock group Pink Floyd's The Wall, a concept album featuring themes of abuse, isolation, and mental instability. Packaging Committere, an installation featuring artwork and sketches for The Downward Spiral, "Closer" and "March of the Pigs" by Russell Mills, was displayed at the Glasgow School of Art. Mills explained the ideas and materials that made up the painting (titled "Wound") that was used for the album's cover art. Promotion Singles "March of the Pigs" and "Closer" were released as singles; two other songs, "Hurt" and "Piggy", were issued to radio without a commercial single release. "March of the Pigs" has an unusual meter, alternating three bars of 7/8 time with one of 8/8. The song's music video was directed by Peter Christopherson and was shot twice; the first version was scrapped due to Reznor's involvement, and the released second version is a live performance. "Closer" features a heavily modified bass drum sample from the Iggy Pop song "Nightclubbing" from his album The Idiot. Lyrically, it is a meditation on self-hatred and obsession, but to Reznor's dismay, the song was widely misinterpreted as a lust anthem due to its chorus, which included the line "I wanna fuck you like an animal". The music video for "Closer" was directed by Mark Romanek and received frequent rotation on MTV, though the network heavily censored the original version, which they perceived to be too graphic. The video shows events in a laboratory dealing with religion, sexuality, animal cruelty, politics, and terror; controversial imagery included a nude bald woman with a crucifix mask, a monkey tied to a cross, a pig's head spinning on a machine, a diagram of a vulva, Reznor wearing an S&M mask while swinging in shackles, and him wearing a ball gag. A radio edit that partially censored the song's explicit lyrics also received extensive airtime.
The video has since been made part of the permanent collection of the Museum of Modern Art in New York City. "Piggy" uses "nothing can stop me now", a line that recurs in "Ruiner" and "Big Man with a Gun". The frantic drumming on the song's outro is Reznor's only attempt at performing drums on the record, and one of the few "live" drum performances on the album. He had stated that the recording was from him testing the microphone setup in studio, but he liked the sound too much not to include it. It was released as a promotional single in December 1994 and reached the Top 20 on the Billboard Modern Rock Tracks chart. Released in 1995, "Hurt" clearly includes references to self-harm and heroin addiction. Tour The Nine Inch Nails live band embarked on the Self Destruct tour in support of The Downward Spiral. Chris Vrenna and James Woolley performed drums and keyboards respectively, Robin Finck replaced Richard Patrick on guitar and bassist Danny Lohner was added to the line-up. The stage set-up consisted of dirty curtains which would be pulled down and up for visuals shown during songs such as "Hurt". The back of the stage was littered with darker and standing lights, along with very few actual ones. The tour debuted the band's grungy and messy image in which they would come out in ragged clothes slathered in corn starch. The concerts were violent and chaotic, with band members often injuring themselves. They would frequently destroy their instruments at the end of concerts, attack each other, and stage-dive into the crowd. The tour included a set at Woodstock '94 broadcast on pay-per-view and seen in as many as 24 million homes. The band being covered in mud was a result of pre-concert backstage play, contrary to the belief that it was an attention-grabbing ploy, thus making it difficult for Reznor to navigate the stage: Reznor pushed Lohner into the mud pit as the concert began and saw mud from his hair entering his eyes while performing. Nine Inch Nails was widely proclaimed to have "stolen the show" from its popular contemporaries, mostly classic rock bands, and its fan base expanded. The band received considerable mainstream success thereafter, performing with significantly higher production values and the addition of various theatrical visual elements. Its performance of "Happiness in Slavery" from the Woodstock concert earned the group a Grammy Award for Best Metal Performance in 1995. Entertainment Weekly commented about the band's Woodstock '94 performance: "Reznor unstrings rock to its horrifying, melodramatic core—an experience as draining as it is exhilarating". Despite this acclaim, Reznor attributed his dislike of the concert to its technical difficulties. The main leg of the tour featured Marilyn Manson as the supporting act, who featured bassist Jeordie White (then playing under the pseudonym "Twiggy Ramirez"); White later played bass with Nine Inch Nails from 2005 to 2007. After another tour leg supporting the remix album Further Down the Spiral, Nine Inch Nails contributed to the Alternative Nation Festival in Australia and subsequently embarked on the Dissonance Tour, which included 26 separate performances with co-headliner David Bowie. Nine Inch Nails was the opening act for the tour, and its set transitioned into Bowie's set with joint performances of both bands' songs. However, the crowds reportedly did not respond positively to the pairing due to their creative differences. 
The tour concluded with "Nights of Nothing", a three-night showcase of performances from Nothing Records bands Marilyn Manson, Prick, Meat Beat Manifesto, and Pop Will Eat Itself, which ended with an 80-minute set from Nine Inch Nails. Kerrang! described the Nine Inch Nails set during the Nights of Nothing showcase as "tight, brash and dramatic", but was disappointed at the lack of new material. On the second of the three nights, Richard Patrick was briefly reunited with the band and contributed guitar to a performance of "Head Like a Hole". After the Self Destruct tour, Chris Vrenna, a member of the live band since 1988 and frequent contributor to Nine Inch Nails studio recordings, left the act permanently to pursue a career in producing and to form Tweaker. Release and reception The Downward Spiral's release date was pushed back several times, slowing Reznor's intended pace for the album's recording. The first delay arose because setting up Le Pig took longer than he expected, and the release was postponed again as he taught himself different ways to write songs that did not resemble those on Broken and Pretty Hate Machine. He considered delivering the album to Interscope in early 1993, only to experience writer's block as he was unable to produce any satisfactory material. Interscope grew impatient and concerned with this progress, but Reznor refused to be rushed by their demands for expediency, even as he credited the label with giving him creative freedom. He told rock music producer Rick Rubin that his motivation for creating the album was to get it finished, to which Rubin responded that Reznor might not do so until he made music that was allowed to be heard. Reznor realized that he was in the most fortunate situation he could have imagined: the album was recorded with a normal budget, "cool" equipment, and a studio to work in. Released on March 8, 1994, to instant success, The Downward Spiral debuted at number two on the US Billboard 200, selling nearly 119,000 copies in its first week. On October 28, 1998, the Recording Industry Association of America (RIAA) certified the album quadruple platinum, and by December 2011, it had sold 3.7 million copies in the United States. The album peaked at number nine on the UK Albums Chart, and on July 22, 2013, it was certified gold by the British Phonographic Industry (BPI), denoting shipments in excess of 100,000 copies in the United Kingdom. It reached number 13 on the Canadian RPM albums chart and received a triple platinum certification from the Canadian Recording Industry Association (CRIA) for shipping 200,000 copies in Canada. A group of early listeners of the album viewed it as "commercial suicide", but Reznor did not make it for profit; his goal was to slightly broaden Nine Inch Nails' scope. Reznor felt that the finished product he delivered to Interscope was complete and faithful to his vision and thought its commercial potential was limited, but after its release he was surprised by the success and received questions about a follow-up single with a music video to be shown on MTV. The album has since sold over four million copies worldwide. Many music critics and audiences praised The Downward Spiral for its abrasive, eclectic nature and dark themes and commented on its concept of the destruction of a man. The New York Times writer Jon Pareles' review of the album found the music to be highly abrasive.
Pareles asserted that unlike other electro-industrial groups like Ministry and Nitzer Ebb, "Reznor writes full-fledged tunes" with stronger use of melodies than riffs. He also noted criticism of Nine Inch Nails from industrial purists for popularizing the genre and for the album's transgression. Village Voice critic Robert Christgau gave it an "honorable mention" in his capsule review column and summed the record up as "musically, Hieronymus Bosch as postindustrial atheist; lyrically, Transformers as kiddie porn." Jonathan Gold, writing for Rolling Stone, likened the album to cyberpunk fiction. Entertainment Weekly reviewer Tom Sinclair commented: "Reznor's pet topics (sex, power, S&M, hatred, transcendence) are all here, wrapped in hooks that hit your psyche with the force of a blowtorch." Accolades The Downward Spiral has been listed on several publications' best album lists. In 2003, the album was ranked number 200 on Rolling Stone magazine's list of The 500 Greatest Albums of All Time, then was re-ranked 201 in a 2012 revised list. The Rolling Stone staff wrote: "Holing up in the one-time home of Manson-family victim Sharon Tate, Trent Reznor made an overpowering meditation on NIN's central theme: control." It moved up to 122 on the magazine's revised list in 2020. The album was placed 10th on Spin's 125 Best Albums of the Past 25 Years list; the Spin staff quoted Ann Powers' review that appreciated its bleak, aggressive style. It was ranked number 488 in the book The Top 500 Heavy Metal Albums of All Time by heavy metal music critic Martin Popoff. In 2001, Q named The Downward Spiral as one of the 50 Heaviest Albums of All Time; in 2010, the album was ranked number 102 on their 250 Best Albums of Q's Lifetime (1986–2011) list. The Downward Spiral was featured in Robert Dimery's book 1001 Albums You Must Hear Before You Die. In May 2014, Loudwire placed The Downward Spiral at number two on its "10 Best Hard Rock Albums of 1994" list. In July 2014, Guitar World placed The Downward Spiral at number 43 in their "Superunknown: 50 Iconic Albums That Defined 1994" list. Legacy The immediate success of The Downward Spiral established Nine Inch Nails as a reputable force in the 1990s. The band's image and musical style became so recognizable that a Gatorade commercial featured a remix of "Down in It" without their involvement. Reznor felt uncomfortable with the media hype and the success the band earned: he was the subject of false reports about his death and depression, was falsely reported to have had a relationship with serial killer Jeffrey Dahmer, and was depicted as a sex icon due to his visual appearance. Nine Inch Nails received several honors, including Grammy Award nominations for Best Alternative Performance for The Downward Spiral and Best Rock Song for "Hurt". After the release of The Downward Spiral, many bands such as Gravity Kills, Stabbing Westward, Filter, and Mötley Crüe made albums that imitated the sound of Nine Inch Nails. Reznor interpreted The Downward Spiral as an extension of himself that "became the truth fulfilling itself," as he experienced personal and social issues presented in the album after its release. He had already struggled with social anxiety disorder and depression, and he began abusing narcotics, including cocaine, along with binge drinking. Around this time, his studio perfectionism, struggles with addiction, and bouts of writer's block prolonged the production of The Fragile, and Reznor completed rehabilitation from drugs in 2001.
One year after The Downward Spiral’s release, the band released an accompanying remix album titled Further Down the Spiral. It features contributions from Coil with Danny Hyde, J. G. Thirlwell, electronic musician Aphex Twin, producer Rick Rubin, and Jane's Addiction guitarist Dave Navarro. The album peaked at number 23 on the Billboard 200 and received mixed reviews. Recoiled, a remix EP of "Gave Up", "Closer", "The Downward Spiral", and "Eraser" by Coil, was released on February 24, 2014, via British record label Cold Spring. Retrospective reviews regard The Downward Spiral as one of the
"Futuristic" with Jamie Jupiter for Jupiter's new 12" single (never released) 2007 – Remade "Modernaire" by Dez Dickerson (from the film Purple Rain) for the label Citinite 2007 – Collaborated with Clone Machine and Egypt Ear Werk December 2008 – Released exclusive songs on iTunes: "Electro Pharaoh", "Freaky D.J.", and "Scandinavian Summer" 2008 – Joined Who Cares on the song "They Killed the Radio" 2008 – Worked with Jamie Jones on the song "Galactic Space Bar" 2008 – Worked with M.I.A. on "Rock off Shake off" for new artist Rye Rye May 2009 – Collaborated with Debonaire on "Do U Wanna Get Down?" for a new Street Sounds compilation May 2009 – New video "Freaky D.J." with producer/director Victor Brooks a.k.a. Who007 2009 – New album that included songs "Electro Pharaoh", "U.F.O.", "Freaky D.J.", "BellyDance", "Scandinavian Summer", and "Do U Wanna Get Down?" June 2009 – Remix of James Pants's Cosmic Rapp was released 2011 – work on new album entitled 1984 begins 2014 – Collaborated with Dye on the song "She's Bad" 2015 – 1984 released Touring The Egyptian Lover began touring again in 2004 throughout Europe, Asia, and North America. His performances often begin with mixing records on turntables before segueing into his original compositions. In 2008, he supported M.I.A. in
artists such as Rodney O & Joe Cooley 2 Oclock & Te & Joezee. His 2015 release, 1984, continues his tradition of using all analog equipment, including the Roland TR-808, along with much of the same gear used on his recordings of the 1980s. The name "1984" refers to his earlier albums. The album was recorded at Skip Saylor, Encore Studios, and at RUSK Studios, the same studio where On The Nile was recorded in 1984. It is widely available on double gatefold LP, CD and cassette tape. Currently 2005 – New single "Party", backed with "Dancefloor" February 2006 – Platinum Pyramids was released End of 2006 – Recorded "UFO" and "Futuristic" with Jamie Jupiter for Jupiter's new 12" single (never released) 2007 – Remade "Modernaire" by Dez Dickerson (from the film Purple Rain) for the label Citinite 2007 – Collaborated with Clone Machine and Egypt Ear Werk December 2008 – Released exclusive songs on iTunes: "Electro Pharaoh", "Freaky D.J.", and "Scandinavian Summer" 2008 – Joined Who Cares on the song "They Killed the Radio" 2008 – Worked with Jamie Jones on the song "Galactic Space Bar" 2008 – Worked with M.I.A. on "Rock off Shake off" for new artist Rye Rye May 2009 – Collaborated with
and the naval base at Kronstadt. However, the project was cancelled following Schilling's death in 1837. Schilling was also one of the first to put into practice the idea of the binary system of signal transmission. His work was taken over and developed by Moritz von Jacobi who invented telegraph equipment that was used by Tsar Alexander III to connect the Imperial palace at Tsarskoye Selo and Kronstadt Naval Base. In 1833, Carl Friedrich Gauss, together with the physics professor Wilhelm Weber in Göttingen, installed a wire above the town's roofs. Gauss combined the Poggendorff-Schweigger multiplicator with his magnetometer to build a more sensitive device, the galvanometer. To change the direction of the electric current, he constructed a commutator of his own. As a result, he was able to make the distant needle move in the direction set by the commutator on the other end of the line. At first, Gauss and Weber used the telegraph to coordinate time, but soon they developed other signals and finally, their own alphabet. The alphabet was encoded in a binary code that was transmitted by positive or negative voltage pulses which were generated by means of moving an induction coil up and down over a permanent magnet and connecting the coil with the transmission wires by means of the commutator. The page of Gauss' laboratory notebook containing both his code and the first message transmitted, as well as a replica of the telegraph made in the 1850s under the instructions of Weber are kept in the faculty of physics at the University of Göttingen, in Germany. Gauss was convinced that this communication would be a help to his kingdom's towns. Later in the same year, instead of a Voltaic pile, Gauss used an induction pulse, enabling him to transmit seven letters a minute instead of two. The inventors and university did not have the funds to develop the telegraph on their own, but they received funding from Alexander von Humboldt. Carl August Steinheil in Munich was able to build a telegraph network within the city in 1835–1836. He installed a telegraph line along the first German railroad in 1835. Steinheil built a telegraph along the Nuremberg–Fürth railway line in 1838, the first earth-return telegraph put into service. By 1837, William Fothergill Cooke and Charles Wheatstone had co-developed a telegraph system which used a number of needles on a board that could be moved to point to letters of the alphabet. Any number of needles could be used, depending on the number of characters it was required to code. In May 1837 they patented their system. The patent recommended five needles, which coded twenty of the alphabet's 26 letters. Samuel Morse independently developed and patented a recording electric telegraph in 1837. Morse's assistant Alfred Vail developed an instrument that was called the register for recording the received messages. It embossed dots and dashes on a moving paper tape by a stylus which was operated by an electromagnet. Morse and Vail developed the Morse code signalling alphabet. The first telegram in the United States was sent by Morse on 11 January 1838, across a length of wire at Speedwell Ironworks near Morristown, New Jersey, although it was only later, in 1844, that he sent the message "WHAT HATH GOD WROUGHT" over the line from the Capitol in Washington to the old Mt. Clare Depot in Baltimore. Commercial telegraphy Cooke and Wheatstone system The first commercial electrical telegraph was the Cooke and Wheatstone system.
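The five-needle limit mentioned above can be made concrete with a short counting sketch (Python is used here purely for illustration, and the letter-to-needle assignment is hypothetical rather than the historical layout):

```python
from itertools import permutations

# Minimal sketch of the counting behind the five-needle Cooke-Wheatstone
# telegraph: each signal deflects two of the five needles, one to the left and
# one to the right, and the letter is read where the two pointers converge on
# the display grid. Treating a signal as an ordered pair (left-deflected,
# right-deflected) of distinct needles gives 5 * 4 = 20 usable codes, which is
# why only twenty of the 26 letters could be sent directly.
codes = list(permutations(range(5), 2))
print(len(codes))  # 20

# Illustrative (not historical) assignment of letters to needle pairs; the six
# missing letters had to be spelled around by the operators.
letters = "ABDEFGHIKLMNOPRSTVWY"  # 20 letters
codebook = dict(zip(letters, codes))
print(codebook["A"])  # (0, 1): needle 0 swings left, needle 1 swings right
```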
A demonstration four-needle system was installed on the Euston to Camden Town section of Robert Stephenson's London and Birmingham Railway in 1837 for signalling rope-hauling of locomotives. It was rejected in favour of pneumatic whistles. Cooke and Wheatstone had their first commercial success with a system installed on the Great Western Railway over the from Paddington station to West Drayton in 1838. This was a five-needle, six-wire system. This system suffered from failing insulation on the underground cables. When the line was extended to Slough in 1843, the telegraph was converted to a one-needle, two-wire system with uninsulated wires on poles. The one-needle telegraph proved highly successful on British railways, and 15,000 sets were still in use at the end of the nineteenth century. Some remained in service in the 1930s. The Electric Telegraph Company, the world's first public telegraphy company was formed in 1845 by financier John Lewis Ricardo and Cooke. Wheatstone ABC telegraph Wheatstone developed a practical alphabetical system in 1840 called the A.B.C. System, used mostly on private wires. This consisted of a "communicator" at the sending end and an "indicator" at the receiving end. The communicator consisted of a circular dial with a pointer and the 26 letters of the alphabet (and four punctuation marks) around its circumference. Against each letter was a key that could be pressed. A transmission would begin with the pointers on the dials at both ends set to the start position. The transmitting operator would then press down the key corresponding to the letter to be transmitted. In the base of the communicator was a magneto actuated by a handle on the front. This would be turned to apply an alternating voltage to the line. Each half cycle of the current would move the pointers at both ends on by one position. When the pointer reached the position of the depressed key, it would stop and the magneto would be disconnected from the line. The communicator's pointer was geared to the magneto mechanism. The indicator's pointer was moved by a polarised electromagnet whose armature was coupled to it through an escapement. Thus the alternating line voltage moved the indicator's pointer on to the position of the depressed key on the communicator. Pressing another key would then release the pointer and the previous key, and re-connect the magneto to the line. These machines were very robust and simple to operate, and they stayed in use in Britain until well into the 20th century. Morse system In 1851, a conference in Vienna of countries in the German-Austrian Telegraph Union (which included many central European countries) adopted the Morse telegraph as the system for international communications. The international Morse code adopted was considerably modified from the original American Morse code, and was based on a code used on Hamburg railways (Gerke, 1848). A common code was a necessary step to allow direct telegraph connection between countries. With different codes, additional operators were required to translate and retransmit the message. In 1865, a conference in Paris adopted Gerke's code as the International Morse code and was henceforth the international standard. The US, however, continued to use American Morse code internally for some time, hence international messages required retransmission in both directions. In the United States, the Morse/Vail telegraph was quickly deployed in the two decades following the first demonstration in 1844. 
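As an illustration of the shared signalling alphabet these conferences standardised, the sketch below encodes Morse's 1844 message with a handful of entries from the later International Morse table; it is purely illustrative, since the 1844 transmission itself used Morse's original American code:

```python
# International Morse encoding of Morse's 1844 demonstration message, shown
# with the later standardised alphabet. Only the letters needed here are listed.
MORSE = {
    "A": ".-", "D": "-..", "G": "--.", "H": "....", "O": "---",
    "R": ".-.", "T": "-", "U": "..-", "W": ".--",
}

def encode(message: str) -> str:
    # Letters are separated by spaces, words by " / ".
    return " / ".join(
        " ".join(MORSE[ch] for ch in word) for word in message.upper().split()
    )

print(encode("What hath God wrought"))
# .-- .... .- - / .... .- - .... / --. --- -.. / .-- .-. --- ..- --. .... -
```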
The overland telegraph connected the west coast of the continent to the east coast by 24 October 1861, bringing an end to the Pony Express. Foy–Breguet system France was slow to adopt the electrical telegraph, because of the extensive optical telegraph system built during the Napoleonic era. There was also serious concern that an electrical telegraph could be quickly put out of action by enemy saboteurs, something that was much more difficult to do with optical telegraphs which had no exposed hardware between stations. The Foy-Breguet telegraph was eventually adopted. This was a two-needle system using two signal wires but displayed in a uniquely different way to other needle telegraphs. The needles made symbols similar to the Chappe optical system symbols, making it more familiar to the telegraph operators. The optical system was decommissioned starting in 1846, but not completely until 1855. In that year the Foy-Breguet system was replaced with the Morse system. Expansion As well as the rapid expansion of the use of the telegraphs along the railways, they soon spread into the field of mass communication with the instruments being installed in post offices. The era of mass personal communication had begun. Telegraph networks were expensive to build, but financing was readily available, especially from London bankers. By 1852, National systems were in operation in major countries: The New York and Mississippi Valley Printing Telegraph Company, for example, was created in 1852 in Rochester, New York and eventually became the Western Union Telegraph Company. Although many countries had telegraph networks, there was no worldwide interconnection. Message by post was still the primary means of communication to countries outside Europe. Telegraphic improvements A continuing goal in telegraphy was to reduce the cost per message by reducing hand-work, or increasing the sending rate. There were many experiments with moving pointers, and various electrical encodings. However, most systems were too complicated and unreliable. A successful expedient to reduce the cost per message was the development of telegraphese. The first system that did not require skilled technicians to operate was Charles Wheatstone's ABC system in 1840 in which the letters of the alphabet were arranged around a clock-face, and the signal caused a needle to indicate the letter. This early system required the receiver to be present in real time to record the message and it reached speeds of up to 15 words a minute. In 1846, Alexander Bain patented a chemical telegraph in Edinburgh. The signal current moved an iron pen across a moving paper tape soaked in a mixture of ammonium nitrate and potassium ferrocyanide, decomposing the chemical and producing readable blue marks in Morse code. The speed of the printing telegraph was 16 and a half words per minute, but messages still required translation into English by live copyists. Chemical telegraphy came to an end in the US in 1851, when the Morse group defeated the Bain patent in the US District Court. For a brief period, starting with the New York–Boston line in 1848, some telegraph networks began to employ sound operators, who were trained to understand Morse code aurally. Gradually, the use of sound operators eliminated the need for telegraph receivers to include register and tape. Instead, the receiving instrument was developed into a "sounder", an electromagnet that was energized by a current and attracted a small iron lever. 
When the sounding key was opened or closed, the sounder lever struck an anvil. The Morse operator distinguished a dot and a dash by the short or long interval between the two clicks. The message was then written out in long-hand. Royal Earl House developed and patented a letter-printing telegraph system in 1846 which employed an alphabetic keyboard for the transmitter and automatically printed the letters on paper at the receiver, and followed this up with a steam-powered version in 1852. Advocates of printing telegraphy said it would eliminate Morse operators' errors. The House machine was used on four main American telegraph lines by 1852. The speed of the House machine was announced as 2600 words an hour. David Edward Hughes invented the printing telegraph in 1855; it used a keyboard of 26 keys for the alphabet and a spinning type wheel that determined the letter being transmitted by the length of time that had elapsed since the previous transmission. The system allowed for automatic recording on the receiving end. The system was very stable and accurate and became accepted around the world. The next improvement was the Baudot code of 1874. French engineer Émile Baudot patented a printing telegraph in which the signals were translated automatically into typographic characters. Each character was assigned a five-bit code, mechanically interpreted from the state of five on/off switches. Operators had to maintain a steady rhythm, and the usual speed of operation was 30 words per minute. By this point, reception had been automated, but the speed and accuracy of the transmission were still limited to the skill of the human operator. The first practical automated system was patented by Charles Wheatstone. The message (in Morse code) was typed onto a piece of perforated tape using a keyboard-like device called the 'Stick Punch'. The transmitter automatically ran the tape through and transmitted the message at the then exceptionally high speed of 70 words per minute. Teleprinters An early successful teleprinter was invented by Frederick G. Creed. In Glasgow he created his first keyboard perforator, which used compressed air to punch the holes. He also created a reperforator (receiving perforator) and a printer. The reperforator punched incoming Morse signals onto paper tape and the printer decoded this tape to produce alphanumeric characters on plain paper. This was the origin of the Creed High Speed Automatic Printing System, which could run at an unprecedented 200 words per minute. His system was adopted by the Daily Mail for daily transmission of the newspaper contents. With the invention of the teletypewriter, telegraphic encoding became fully automated. Early teletypewriters used the ITA-1 Baudot code, a five-bit code. This yielded only thirty-two codes, so it was over-defined into two "shifts", "letters" and "figures". An explicit, unshared shift code prefaced each set of letters and figures. In 1901, Baudot's code was modified by Donald Murray. In the 1930s, teleprinters were produced by Teletype in the US, Creed in Britain and Siemens in Germany. By 1935, message routing was the last great barrier to full automation. Large telegraphy providers began to develop systems that used telephone-like rotary dialling to connect teletypewriters. These resulting systems were called "Telex" (TELegraph EXchange). Telex machines first performed rotary-telephone-style pulse dialling for circuit switching, and then sent data by ITA2. This "type A" Telex routing functionally automated message routing. 
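The letters/figures shift idea behind these five-bit teleprinter codes can be sketched as follows; the bit patterns and character assignments are illustrative stand-ins, not the actual ITA-2 table:

```python
# Sketch of the "shift" trick that stretches a five-bit code: 2**5 = 32 code
# points are not enough for letters plus figures, so two points are reserved as
# LETTERS-shift and FIGURES-shift, and every other point means one thing in
# each shift state. Values below are illustrative stand-ins.
LTRS, FIGS = 0b11111, 0b11011

LETTERS = {0b00011: "A", 0b11001: "B", 0b01110: "C"}
FIGURES = {0b00011: "-", 0b11001: "?", 0b01110: ":"}

def decode(codes):
    table, out = LETTERS, []  # receivers conventionally start in letters shift
    for code in codes:
        if code == LTRS:
            table = LETTERS
        elif code == FIGS:
            table = FIGURES
        else:
            out.append(table[code])
    return "".join(out)

print(decode([0b00011, 0b11001, FIGS, 0b00011, LTRS, 0b01110]))  # AB-C
```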
The first wide-coverage Telex network was implemented in Germany during the 1930s as a network used to communicate within the government. At the rate of 45.45 (±0.5%) baud – considered speedy at the time – up to 25 telex channels could share a single long-distance telephone channel by using voice frequency telegraphy multiplexing, making telex the least expensive method of reliable long-distance communication. Automatic teleprinter exchange service was introduced into Canada by CPR Telegraphs and CN Telegraph in July 1957 and in 1958, Western Union started to build a Telex network in the United States. The harmonic telegraph The most expensive aspect of a telegraph system was the installation – the laying of the wire, which was often very long. The costs would be better covered by finding a way to send more than one message at a time through the single wire, thus increasing revenue per wire. Early devices included the duplex and the quadruplex which allowed, respectively, one or two telegraph transmissions in each direction. However, an even greater number of channels was desired on the busiest lines. In the latter half of the 1800s, several inventors worked towards creating a method for doing just that, including Charles Bourseul, Thomas Edison, Elisha Gray, and Alexander Graham Bell. One approach was to have resonators of several different frequencies act as carriers of a modulated on-off signal. This was the harmonic telegraph, a form of frequency-division multiplexing. These various frequencies, referred to as harmonics, could then be combined into one complex signal and sent down the single wire. On the receiving end, the frequencies would be separated with a matching set of resonators. With a set of frequencies being carried down a single wire, it was realized that the human voice itself could be transmitted electrically through the wire. This effort led to the invention of the telephone. (While the work toward packing multiple telegraph signals onto one wire led to telephony, later advances would pack multiple voice signals onto one wire by increasing the bandwidth by modulating frequencies much higher than human hearing. Eventually, the bandwidth was widened much further by using laser light signals sent through fiber optic cables. Fiber optic transmission can carry 25,000 telephone signals simultaneously down a single fiber.) Oceanic telegraph cables Soon after the first successful telegraph systems were operational, the possibility of transmitting messages across the sea by way of submarine communications cables was first proposed. One of the primary technical challenges was to sufficiently insulate the submarine cable to prevent the electric current from leaking out into the water. In 1842, a Scottish surgeon William Montgomerie introduced gutta-percha, the adhesive juice of the Palaquium gutta tree, to Europe. Michael Faraday and Wheatstone soon discovered the merits of gutta-percha as an insulator, and in 1845, the latter suggested that it should be employed to cover the wire which was proposed to be laid from Dover to Calais. Gutta-percha was used as insulation on a wire laid across the Rhine between Deutz and Cologne. In 1849, C. V. Walker, electrician to the South Eastern Railway, submerged a wire coated with gutta-percha off the coast from Folkestone, which was tested successfully. John Watkins Brett, an engineer from Bristol, sought and obtained permission from Louis-Philippe in 1847 to establish telegraphic communication between France and England. 
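The harmonic-telegraph idea sketched above, several on-off keyed carriers sharing one wire and being separated again by frequency, can be illustrated in a few lines of code. The sketch below is a conceptual illustration only: the carrier frequencies, keying rate and detection threshold are made-up values, not parameters of any historical instrument.

```python
import math

RATE = 8000            # samples per second
SYMBOL = 400           # samples per keying interval (0.05 s)
CHANNELS = {"A": 600.0, "B": 1000.0}   # illustrative carrier frequencies in Hz

def keyed_carrier(bits, freq):
    """On-off key one carrier: emit the tone during a 1, silence during a 0."""
    return [bit * math.sin(2 * math.pi * freq * (i / RATE))
            for n, bit in enumerate(bits)
            for i in range(n * SYMBOL, (n + 1) * SYMBOL)]

def detect(line, freq, n_symbols):
    """Recover one channel by correlating each interval with its own carrier."""
    bits = []
    for n in range(n_symbols):
        window = range(n * SYMBOL, (n + 1) * SYMBOL)
        energy = sum(line[i] * math.sin(2 * math.pi * freq * (i / RATE)) for i in window)
        bits.append(1 if energy > SYMBOL / 4 else 0)
    return bits

messages = {"A": [1, 0, 1, 1, 0], "B": [0, 1, 1, 0, 1]}
line = [sum(samples) for samples in zip(*(keyed_carrier(bits, CHANNELS[ch])
                                          for ch, bits in messages.items()))]
for ch, freq in CHANNELS.items():
    print(ch, detect(line, freq, len(messages[ch])))   # each message is recovered
```

Because each keying interval here holds a whole number of cycles of both carriers, the two correlations barely interfere, which is the same separation a matched set of resonators performed mechanically.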
The first undersea cable was laid in 1850, connecting England and France, and was followed by connections to Ireland and the Low Countries. The Atlantic Telegraph Company was formed in London in 1856 to undertake to construct a commercial telegraph cable across the Atlantic Ocean. It was successfully completed on 18 July 1866 by the ship SS Great Eastern, captained by Sir James Anderson, after many mishaps along the way. John Pender, one of the men on the Great Eastern, later founded several telecommunications companies primarily laying cables between Britain and Southeast Asia. Earlier transatlantic submarine cable installations were attempted in 1857, 1858 and 1865. The 1857 cable only operated intermittently for a few days or weeks before it failed. The study of underwater telegraph cables accelerated interest in mathematical analysis of very long transmission lines. The telegraph lines from Britain to India were connected in 1870. (Those several companies combined to form the Eastern Telegraph Company in 1872.) The HMS Challenger expedition in 1873–1876 mapped the ocean floor for future underwater telegraph cables. Australia was first linked to the rest of the world in October 1872 by a submarine telegraph cable at Darwin. This brought news reports from the rest of the world. The telegraph across the Pacific was completed in 1902, finally encircling the world. From the 1850s until well into the 20th century, British submarine cable systems dominated the world system. This was set out as a formal strategic goal, which became known as the All Red Line. In 1896, there were thirty cable laying ships in the world and twenty-four of them were owned by British companies. In 1892, British companies owned and operated two-thirds of the world's cables and by 1923, their share was still 42.7 percent. Cable and Wireless Company Cable & Wireless was a British telecommunications company that traced its origins back to the 1860s, with Sir John Pender as the founder, although the name was only adopted in 1934. It was formed from successive mergers including the Falmouth, Malta, Gibraltar Telegraph Company; the British Indian Submarine Telegraph Company; the Marseilles, Algiers and Malta Telegraph Company; the Eastern Telegraph Company; the Eastern Extension Australasia and China Telegraph Company; and the Eastern and Associated Telegraph Companies. Telegraphy and longitude The telegraph was very important for sending time signals to determine longitude, providing greater accuracy than previously available. Longitude was measured by comparing local time (for example local noon occurs when the sun is at its highest above the horizon) with absolute time (a time that is the same for an observer anywhere on earth). If the local times of two places differ by one hour, the difference in longitude between them is 15° (360°/24h). Before telegraphy, absolute time could be obtained from astronomical events, such as eclipses, occultations or lunar distances, or by transporting an accurate clock (a chronometer) from one location to the other. The idea of using the telegraph to transmit a time signal for longitude determination
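As a worked illustration of the 15° per hour rule just described, the following sketch converts a telegraphed comparison of local times into a longitude difference; the times used are invented for the example.

```python
# The Earth turns 360 degrees in 24 hours, so one hour of local-time
# difference corresponds to 15 degrees of longitude.

def longitude_offset_deg(local_time_b_hours: float, local_time_a_hours: float) -> float:
    """Degrees of longitude by which station B lies east of station A, given the
    local times the two stations read at the same instant (e.g. when a telegraph
    time signal arrives)."""
    return (local_time_b_hours - local_time_a_hours) * 15.0

# At the instant a time signal marks local noon (12.0 h) at station A, station B's
# local clock reads 14.6 h, so B lies (14.6 - 12.0) * 15 = 39 degrees east of A.
print(longitude_offset_deg(14.6, 12.0))   # 39.0
```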
high resistance of long telegraph wires. During his tenure at The Albany Academy from 1826 to 1832, Henry first demonstrated the theory of the 'magnetic telegraph' by ringing a bell through of wire strung around the room in 1831. In 1835, Joseph Henry and Edward Davy independently invented the mercury dipping electrical relay, in which a magnetic needle is dipped into a pot of mercury when an electric current passes through the surrounding coil. In 1837, Davy invented the much more practical metallic make-and-break relay which became the relay of choice in telegraph systems and a key component for periodically renewing weak signals. Davy demonstrated his telegraph system in Regent's Park in 1837 and was granted a patent on 4 July 1838. Davy also invented a printing telegraph which used the electric current from the telegraph signal to mark a ribbon of calico infused with potassium iodide and calcium hypochlorite. First working systems The first working telegraph was built by the English inventor Francis Ronalds in 1816 and used static electricity. At the family home on Hammersmith Mall, he set up a complete subterranean system in a long trench as well as an long overhead telegraph. The lines were connected at both ends to revolving dials marked with the letters of the alphabet and electrical impulses sent along the wire were used to transmit messages. Offering his invention to the Admiralty in July 1816, it was rejected as "wholly unnecessary". His account of the scheme and the possibilities of rapid global communication in Descriptions of an Electrical Telegraph and of some other Electrical Apparatus was the first published work on electric telegraphy and even described the risk of signal retardation due to induction. Elements of Ronalds' design were utilised in the subsequent commercialisation of the telegraph over 20 years later. The Schilling telegraph, invented by Baron Schilling von Canstatt in 1832, was an early needle telegraph. It had a transmitting device that consisted of a keyboard with 16 black-and-white keys. These served for switching the electric current. The receiving instrument consisted of six galvanometers with magnetic needles, suspended from silk threads. The two stations of Schilling's telegraph were connected by eight wires; six were connected with the galvanometers, one served for the return current and one for a signal bell. When at the starting station the operator pressed a key, the corresponding pointer was deflected at the receiving station. Different positions of black and white flags on different disks gave combinations which corresponded to the letters or numbers. Pavel Schilling subsequently improved its apparatus by reducing the number of connecting wires from eight to two. On 21 October 1832, Schilling managed a short-distance transmission of signals between two telegraphs in different rooms of his apartment. In 1836, the British government attempted to buy the design but Schilling instead accepted overtures from Nicholas I of Russia. Schilling's telegraph was tested on a experimental underground and underwater cable, laid around the building of the main Admiralty in Saint Petersburg and was approved for a telegraph between the imperial palace at Peterhof and the naval base at Kronstadt. However, the project was cancelled following Schilling's death in 1837. Schilling was also one of the first to put into practice the idea of the binary system of signal transmission. 
His work was taken over and developed by Moritz von Jacobi who invented telegraph equipment that was used by Tsar Alexander III to connect the Imperial palace at Tsarskoye Selo and Kronstadt Naval Base. In 1833, Carl Friedrich Gauss, together with the physics professor Wilhelm Weber in Göttingen installed a wire above the town's roofs. Gauss combined the Poggendorff-Schweigger multiplicator with his magnetometer to build a more sensitive device, the galvanometer. To change the direction of the electric current, he constructed a commutator of his own. As a result, he was able to make the distant needle move in the direction set by the commutator on the other end of the line. At first, Gauss and Weber used the telegraph to coordinate time, but soon they developed other signals and finally, their own alphabet. The alphabet was encoded in a binary code that was transmitted by positive or negative voltage pulses which were generated by means of moving an induction coil up and down over a permanent magnet and connecting the coil with the transmission wires by means of the commutator. The page of Gauss' laboratory notebook containing both his code and the first message transmitted, as well as a replica of the telegraph made in the 1850s under the instructions of Weber are kept in the faculty of physics at the University of Göttingen, in Germany. Gauss was convinced that this communication would be a help to his kingdom's towns. Later in the same year, instead of a Voltaic pile, Gauss used an induction pulse, enabling him to transmit seven letters a minute instead of two. The inventors and university did not have the funds to develop the telegraph on their own, but they received funding from Alexander von Humboldt. Carl August Steinheil in Munich was able to build a telegraph network within the city in 1835–1836. He installed a telegraph line along the first German railroad in 1835. Steinheil built a telegraph along the Nuremberg - Fürth railway line in 1838, the first earth-return telegraph put into service. By 1837, William Fothergill Cooke and Charles Wheatstone had co-developed a telegraph system which used a number of needles on a board that could be moved to point to letters of the alphabet. Any number of needles could be used, depending on the number of characters it was required to code. In May 1837 they patented their system. The patent recommended five needles, which coded twenty of the alphabet's 26 letters. Samuel Morse independently developed and patented a recording electric telegraph in 1837. Morse's assistant Alfred Vail developed an instrument that was called the register for recording the received messages. It embossed dots and dashes on a moving paper tape by a stylus which was operated by an electromagnet. Morse and Vail developed the Morse code signalling alphabet. The first telegram in the United States was sent by Morse on 11 January 1838, across of wire at Speedwell Ironworks near Morristown, New Jersey, although it was only later, in 1844, that he sent the message "WHAT HATH GOD WROUGHT" over the from the Capitol in Washington to the old Mt. Clare Depot in Baltimore. Commercial telegraphy Cooke and Wheatstone system The first commercial electrical telegraph was the Cooke and Wheatstone system. A demonstration four-needle system was installed on the Euston to Camden Town section of Robert Stephenson's London and Birmingham Railway in 1837 for signalling rope-hauling of locomotives. It was rejected in favour of pneumatic whistles. 
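The patent's figure of twenty letters from five needles follows from simple counting, on the common reading of the instrument that a letter was indicated by deflecting a pair of needles so that they pointed at it, each pair being able to single out one position above and one below the row of needles (a sketch under that assumption):

```latex
% Letters addressable by deflecting two of five needles, assuming each
% pair of needles can indicate one position above and one below the row.
\[
  2 \times \binom{5}{2} = 2 \times \frac{5!}{2!\,3!} = 2 \times 10 = 20
\]
```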
Cooke and Wheatstone had their first commercial success with a system installed on the Great Western Railway, on the line from Paddington station to West Drayton, in 1838. This was a five-needle, six-wire system. This system suffered from failing insulation on the underground cables. When the line was extended to Slough in 1843, the telegraph was converted to a one-needle, two-wire system with uninsulated wires on poles. The one-needle telegraph proved highly successful on British railways, and 15,000 sets were still in use at the end of the nineteenth century. Some remained in service in the 1930s. The Electric Telegraph Company, the world's first public telegraphy company, was formed in 1845 by financier John Lewis Ricardo and Cooke. Wheatstone ABC telegraph Wheatstone developed a practical alphabetical system in 1840 called the A.B.C. System, used mostly on private wires. This consisted of a "communicator" at the sending end and an "indicator" at the receiving end. The communicator consisted of a circular dial with a pointer and the 26 letters of the alphabet (and four punctuation marks) around its circumference. Against each letter was a key that could be pressed. A transmission would begin with the pointers on the dials at both ends set to the start position. The transmitting operator would then press down the key corresponding to the letter to be transmitted. In the base of the communicator was a magneto actuated by a handle on the front. This would be turned to apply an alternating voltage to the line. Each half cycle of the current would move the pointers at both ends on by one position. When the pointer reached the position of the depressed key, it would stop and the magneto would be disconnected from the line. The communicator's pointer was geared to the magneto mechanism. The indicator's pointer was moved by a polarised electromagnet whose armature was coupled to it through an escapement. Thus the alternating line voltage moved the indicator's pointer on to the position of the depressed key on the communicator. Pressing another key would then release the pointer and the previous key, and re-connect the magneto to the line. These machines were very robust and simple to operate, and they stayed in use in Britain until well into the 20th century. Morse system In 1851, a conference in Vienna of countries in the German-Austrian Telegraph Union (which included many central European countries) adopted the Morse telegraph as the system for international communications. The international Morse code adopted was considerably modified from the original American Morse code, and was based on a code used on Hamburg railways (Gerke, 1848). A common code was a necessary step to allow direct telegraph connection between countries. With different codes, additional operators were required to translate and retransmit the message. In 1865, a conference in Paris adopted Gerke's code as the International Morse code, which was henceforth the international standard. The US, however, continued to use American Morse code internally for some time, hence international messages required retransmission in both directions. In the United States, the Morse/Vail telegraph was quickly deployed in the two decades following the first demonstration in 1844.
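To make the code concrete, here is a small Python sketch that spells out Morse's 1844 message in International Morse; only the letters needed for this example are included in the table.

```python
# International Morse code for the handful of letters used in the example.
MORSE = {
    "A": ".-", "D": "-..", "G": "--.", "H": "....", "O": "---",
    "R": ".-.", "T": "-", "U": "..-", "W": ".--",
}

def to_morse(text: str) -> str:
    """Encode a message letter by letter; words are separated by ' / '."""
    words = text.upper().split()
    return " / ".join(" ".join(MORSE[ch] for ch in word) for word in words)

print(to_morse("What hath God wrought"))
# .-- .... .- - / .... .- - .... / --. --- -.. / .-- .-. --- ..- --. .... -
```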
Event (particle physics), refers to the results just after a fundamental interaction took place between subatomic particles Event horizon, a boundary in spacetime, typically surrounding a black hole, beyond which events cannot affect an exterior observer Celestial event, an astronomical phenomenon of interest Extinction event, a sharp decrease in the number of extant species in a short period of time Impact event, in which an extraterrestrial object impacts a planet Mental event, something that happens in the mind, such as a thought In arts and entertainment The Event, an American conspiracy thriller television series for NBC The Event (2003 film), directed by Thom Fitzgerald The Event (2015 film), directed by Sergei Loznitsa Derren Brown: The Events, a Channel 4 television series featuring the illusionist Derren Brown Event, a literary magazine published by Douglas College In business Event Communications, a
that something has happened, such as a keystroke or mouse click Event (philosophy), an object in time, or an instantiation of a property in an object Event (probability theory), a set of outcomes to which a probability is assigned Event (relativity), a point in space at an instant in time, i.e. a location in spacetime Event (synchronization primitive), a type of synchronization mechanism Event (UML), in Unified Modeling Language, a notable occurrence at a particular point in time
the Law & Order franchise). In addition, the expositional nature of the shot may be unsuitable to scenes in mysteries, where details are intentionally obscured or left out. Use of establishing shots Location Establishing shots may use famous landmarks to indicate the city where the action is taking place or has moved. Time of day Sometimes the viewer is guided in their understanding of the action. For example, an exterior shot of a building at night followed by an interior shot of people talking implies that the conversation is taking place at night inside that building — the conversation may in fact have been filmed on a studio set far from the apparent location, because of budget, permits, time limitations or convenience. In the series JAG, 24-hour Coordinated Universal Time was used for these scenes to reinforce the military setting of the series. Relationship An establishing shot might be a long shot of a room that shows
all the characters from a particular scene. For example, a scene about a murder in a college lecture hall might begin with a shot that shows the entire room, including the lecturing professor and the students taking notes. A close-up shot can also be used at the beginning of a scene to establish the setting (such as, for the lecture hall scene, a shot of a pencil writing notes). Concept An establishing shot may also establish a concept, rather than a location. For example, opening with a martial
It has also been proposed that this language family is related to the pre-Indo-European languages of Anatolia, based upon place name analysis. Others have suggested that Tyrsenian languages may yet be distantly related to early Indo-European languages, such as those of the Anatolian branch. More recently, Robert S. P. Beekes argued in 2002 that the people later known as the Lydians and Etruscans had originally lived in northwest Anatolia, with a coastline to the Sea of Marmara, whence they were driven by the Phrygians circa 1200 BC, leaving a remnant known in antiquity as the Tyrsenoi. A segment of this people moved south-west to Lydia, becoming known as the Lydians, while others sailed away to take refuge in Italy, where they became known as Etruscans. This account draws on the well-known story by Herodotus (I, 94) of the Lydian origin of the Etruscans or Tyrrhenians, famously rejected by Dionysius of Halicarnassus (book I), partly on the authority of Xanthus, a Lydian historian, who had no knowledge of the story, and partly on what he judged to be the different languages, laws, and religions of the two peoples. In 2006, Frederik Woudhuizen went further on Herodotus' traces, suggesting that Etruscan belongs to the Anatolian branch of the Indo-European family, specifically to Luwian. Woudhuizen revived a conjecture to the effect that the Tyrsenians came from Anatolia, including Lydia, whence they were driven by the Cimmerians in the early Iron Age, 750–675 BC, leaving some colonists on Lemnos. He makes a number of comparisons of Etruscan to Luwian and asserts that Etruscan is modified Luwian. He accounts for the non-Luwian features as a Mysian influence: "deviations from Luwian [...] may plausibly be ascribed to the dialect of the indigenous population of Mysia." According to Woudhuizen, the Etruscans were initially colonizing the Latins, bringing the alphabet from Anatolia. For both archaeological and linguistic reasons, a relationship between Etruscan and the Anatolian languages (Lydian or Luwian) and the idea that Etruscans were initially colonizing the Latins, bringing the alphabet from Anatolia, have not been accepted, just as the story of the Lydian origin reported by Herodotus is no longer considered trustworthy. Another proposal, pursued mainly by a few linguists from the former Soviet Union, suggested a relationship with Northeast Caucasian (or Nakh-Daghestanian) languages. Writing system Alphabet The Latin script owes its existence to the Etruscan alphabet, which was adapted for Latin in the form of the Old Italic script. The Etruscan alphabet employs a Euboean variant of the Greek alphabet using the letter digamma and was in all probability transmitted through Pithecusae and Cumae, two Euboean settlements in southern Italy. This system is ultimately derived from West Semitic scripts. The Etruscans recognized a 26-letter alphabet, which makes an early appearance incised for decoration on a small bucchero terracotta lidded vase in the shape of a cockerel at the Metropolitan Museum of Art, ca. 650–600 BC. The full complement of 26 has been termed the model alphabet. The Etruscans did not use four letters of it, mainly because Etruscan did not have the voiced stops b, d and g; the o was also not used. They innovated one letter for f (). Text Writing was from right to left except in archaic inscriptions, which occasionally used boustrophedon. An example found at Cerveteri used left to right. In the earliest inscriptions, the words are continuous. 
From the 6th century BC, they are separated by a dot or a colon, which might also be used to separate syllables. Writing was phonetic; the letters represented the sounds and not conventional spellings. On the other hand, many inscriptions are highly abbreviated and often casually formed, so the identification of individual letters is sometimes difficult. Spelling might vary from city to city, probably reflecting differences of pronunciation. Complex consonant clusters Speech featured a heavy stress on the first syllable of a word, causing syncopation by weakening of the remaining vowels, which then were not represented in writing: Alcsntre for Alexandros, Rasna for Rasena. This speech habit is one explanation of the Etruscan "impossible" consonant clusters. Some of the consonants, especially resonants, however, may have been syllabic, accounting for some of the clusters (see below under Consonants). In other cases, the scribe sometimes inserted a vowel: Greek Hēraklēs became Hercle by syncopation and then was expanded to Herecele. Pallottino regarded this variation in vowels as "instability in the quality of vowels" and accounted for the second phase (e.g. Herecele) as "vowel harmony, i.e., of the assimilation of vowels in neighboring syllables". Phases The writing system had two historical phases: the archaic from the seventh to fifth centuries BC, which used the early Greek alphabet, and the later from the fourth to first centuries BC, which modified some of the letters. In the later period, syncopation increased. The alphabet went on in modified form after the language disappeared. In addition to being the source of the Roman alphabet, it has been suggested that it passed northward into Veneto and from there through Raetia into the Germanic lands, where it became the Elder Futhark alphabet, the oldest form of the runes. Corpus The Etruscan corpus is edited in the Corpus Inscriptionum Etruscarum (CIE) and Thesaurus Linguae Etruscae (TLE). Bilingual text The Pyrgi Tablets are a bilingual text in Etruscan and Phoenician engraved on three gold leaves, one for the Phoenician and two for the Etruscan. The Etruscan language portion has 16 lines and 37 words. The date is roughly 500 BC. The tablets were found in 1964 by Massimo Pallottino during an excavation at the ancient Etruscan port of Pyrgi, now Santa Severa. The only new Etruscan word that could be extracted from close analysis of the tablets was the word for "three", ci. Longer texts According to Rix and his collaborators, only two unified (though fragmentary) texts are available in Etruscan: The Liber Linteus Zagrabiensis, which was later used for mummy wrappings in Egypt. Roughly 1,200 words of readable text, mainly repetitious prayers, yielded about 50 lexical items. The Tabula Capuana (the inscribed tile from Capua) has about 300 readable words in 62 lines, dating to the fifth century BC. Some additional longer texts are: The lead foils of Punta della Vipera have about 40 legible words having to do with ritual formulae. It is dated to about 500 BC. The Cippus Perusinus, a stone slab (cippus) found at Perugia, contains 46 lines and 130 words. The Piacenza Liver, a bronze model of a sheep's liver representing the sky, has the engraved names of the gods ruling different sections. The Tabula Cortonensis, a bronze tablet from Cortona, is believed to record a legal contract, with about 200 words. Discovered in 1992, this new tablet contributed the word for "lake", tisś, but not much else. 
The Vicchio stele, found in the 21st season of excavation at the Etruscan Sanctuary at Poggio Colla, believed to be connected with the cult of the goddess Uni, with about 120 letters. Only discovered in 2016, it is still in the process of being deciphered. Inscriptions on monuments The main material repository of Etruscan civilization, from the modern perspective, is its tombs, all other public and private buildings having been dismantled and the stone reused centuries ago. The tombs are the main source of Etruscan portables of unknown provenance in collections throughout the world. Their incalculable value has created a brisk black market in Etruscan objets d'art – and an equally brisk law-enforcement effort, as it is illegal to remove any objects from Etruscan tombs without authorization from the Italian government. The magnitude of the task involved in cataloguing them means that the total number of tombs is unknown. They are of many types. Especially plentiful are the hypogeal or "underground" chambers or systems of chambers cut into tuff and covered by a tumulus. The interior of these tombs represents a habitation of the living stocked with furniture and favorite objects. The walls may display painted murals, the predecessor of wallpaper. Tombs identified as Etruscan date from the Villanovan period to about 100 BC, when presumably the cemeteries were abandoned in favor of Roman ones. Some of the major cemeteries are as follows: Caere or Cerveteri, a UNESCO site. Three complete necropoleis with streets and squares. Many hypogea are concealed beneath tumuli retained by walls; others are cut into cliffs. The Banditaccia necropolis contains more than 1,000 tumuli. Access is through a door. Tarquinia, Tarquinii or Corneto, a UNESCO site: Approximately 6,000 graves dating from the Villanovan period (ninth and eighth centuries BC) distributed in necropoleis, the main one being the Monterozzi hypogea of the sixth–fourth centuries BC. About 200 painted tombs display murals of various scenes with call-outs and descriptions in Etruscan. Elaborately carved sarcophagi of marble, alabaster, and nenfro carry inscriptions recording identities and achievements. The Tomb of Orcus at the Scatolini necropolis depicts scenes of the Spurinna family with call-outs. Inner walls and doors of tombs and sarcophagi Engraved steles (tombstones) ossuaries Inscriptions on portable objects Votives See Votive gifts. Specula A speculum is a circular or oval hand-mirror used predominantly by Etruscan women. Speculum is Latin; the Etruscan word is or . Specula were cast in bronze as one piece or with a tang into which a wooden, bone, or ivory handle fitted. The reflecting surface was created by polishing the flat side. A higher percentage of tin in the mirror improved its ability to reflect. The other side was convex and featured intaglio or cameo scenes from mythology. The piece was generally ornate. About 2,300 specula are known from collections all over the world. As they were popular plunderables, the provenance of only a minority is known. An estimated time window is 530–100 BC. Most probably came from tombs. Many bear inscriptions naming the persons depicted in the scenes, so they are often called picture bilinguals. In 1979, Massimo Pallottino, then president of the Istituto di Studi Etruschi ed Italici, initiated the Committee of the Corpus Speculorum Etruscanorum, which resolved to publish all the specula and set editorial standards for doing so.
Since then, the committee has grown, acquiring local committees and representatives from most institutions owning Etruscan mirror collections. Each collection is published in its own fascicle by diverse Etruscan scholars. Cistae A cista is a bronze container of circular, ovoid, or more rarely rectangular shape used by women for the storage of sundries. They are ornate, often with feet and lids to which figurines may be attached. The internal and external surfaces bear carefully crafted scenes usually from mythology, usually intaglio, or rarely part intaglio, part cameo. Cistae date from the Roman Republic of the fourth and third centuries BC in Etruscan contexts. They may bear various short inscriptions concerning the manufacturer or owner or subject matter. The writing may be Latin, Etruscan, or both. Excavations at Praeneste, an Etruscan city which became Roman, turned up about 118 cistae, one of which has been termed "the Praeneste cista" or "the Ficoroni cista" by art analysts, with special reference to the one manufactured by Novios Plutius and given by Dindia Macolnia
of Oscan writing in Pompeii's walls. Despite the apparent extinction of Etruscan, it appears that Etruscan religious rites continued much later, continuing to use the Etruscan names of deities and possibly with some liturgical usage of the language. In late Republican and early Augustan times, various Latin sources including Cicero noted the esteemed reputation of Etruscan soothsayers. An episode where lightning struck an inscription with the name Caesar, turning it into Aesar, was interpreted to have been a premonition of the deification of Caesar because of the resemblance to Etruscan , meaning "gods", although this indicates knowledge of a single word and not the language. Centuries later and long after Etruscan is thought to have died out, Ammianus Marcellinus reports that Julian the Apostate, the last pagan Emperor, apparently had Etruscan soothsayers accompany him on his military campaigns with books on war, lightning and celestial events, but the language of these books is unknown. According to Zosimus, when Rome was faced with destruction by Alaric in 408 AD, the protection of nearby Etruscan towns was attributed to Etruscan pagan priests who claimed to have summoned a raging thunderstorm, and they offered their services "in the ancestral manner" to Rome as well, but the devout Christians of Rome refused the offer, preferring death to help by pagans. Freeman notes that these events may indicate that a limited theological knowledge of Etruscan may have survived among the priestly caste much longer. One 19th-century writer argued in 1892 that Etruscan deities retained an influence on early modern Tuscan folklore. Around 180, the Latin author Aulus Gellius mentions Etruscan alongside the Gaulish language in an anecdote. Freeman notes that although Gaulish was clearly still alive during Gellius' time, his testimony may not indicate that Etruscan was still alive because the phrase could indicate a meaning of the sort of "it's all Greek (incomprehensible) to me". At the time of its extinction, only a few educated Romans with antiquarian interests, such as Marcus Terentius Varro, could read Etruscan. The Roman emperor Claudius (10 BC – AD 54) is considered to have possibly been able to read Etruscan, and authored a treatise on Etruscan history; a separate dedication made by Claudius implies a knowledge from "diverse Etruscan sources", but it is unclear if any were fluent speakers of Etruscan. Plautia Urgulanilla, the emperor's first wife, was Etruscan. Etruscan had some influence on Latin, as a few dozen Etruscan words and names were borrowed by the Romans, some of which remain in modern languages, among which are possibly voltur "vulture", tuba "trumpet", vagina "sheath", populus "people". Geographic distribution Inscriptions have been found in northwest and west-central Italy, in the region that even now bears the name of the Etruscan civilization, Tuscany (from Latin tuscī "Etruscans"), as well as in modern Latium north of Rome, in today's Umbria west of the Tiber, in Campania and in the Po Valley to the north of Etruria. This range may indicate a maximum Italian homeland where the language was at one time spoken. Outside Italy, inscriptions have been found in Corsica, Gallia Narbonensis, Greece, the Balkans. But by far, the greatest concentration is in Italy. Classification Tyrsenian family hypothesis In 1998, Helmut Rix put forward the view that Etruscan is related to other members of what he called the "Tyrsenian language family". 
Rix's Tyrsenian family of languages—composed of Raetic, spoken in ancient times in the eastern Alps, and Lemnian, together with Etruscan—has gained acceptance among scholars. Rix's Tyrsenian family has been confirmed by Stefan Schumacher, Norbert Oettinger, Carlo De Simone, and Simona Marchesini. Common features between Etruscan, Raetic, and Lemnian have been found in morphology, phonology, and syntax. On the other hand, few lexical correspondences are documented, at least partly due to the scant number of Raetic and Lemnian texts. The Tyrsenian family, or Common Tyrrhenic, in this case is often considered to be Paleo-European and to predate the arrival of Indo-European languages in southern Europe. Several scholars believe that the Lemnian language could have arrived in the Aegean Sea during the Late Bronze Age, when Mycenaean rulers recruited groups of mercenaries from Sicily, Sardinia and various parts of the Italian peninsula. Scholars such as Norbert Oettinger, Michel Gras and Carlo De Simone think that Lemnian is the testimony of an Etruscan commercial settlement on the island that took place before 700 BC, not related to the Sea Peoples. Some scholars think that the Camunic language, an extinct language spoken in the Central Alps of Northern Italy, may be also related to Etruscan and to Raetic. Superseded theories and fringe scholarship Over the centuries many hypotheses on the Etruscan language have been developed, many of which have not been accepted or have been considered highly speculative. The interest in Etruscan antiquities and the Etruscan language found its modern origin in a book by a Renaissance Dominican friar, Annio da Viterbo, a cabalist and orientalist now remembered mainly for literary forgeries. In 1498, Annio published his antiquarian miscellany titled Antiquitatum variarum (in 17 volumes) where he put together a theory in which both the Hebrew and Etruscan languages were said to originate from a single source, the "Aramaic" spoken by Noah and his descendants, founders of the Etruscan city Viterbo. The 19th century saw numerous attempts to reclassify Etruscan. Ideas of Semitic origins found supporters until this time. In 1858, the last attempt was made by Johann Gustav Stickel, Jena University in his Das Etruskische [...] als semitische Sprache erwiesen. A reviewer concluded that Stickel brought forward every possible argument which would speak for that hypothesis, but he proved the opposite of what he had attempted to do. In 1861, Robert Ellis proposed that Etruscan was related to Armenian, which is nowadays acknowledged as an Indo-European language. Exactly 100 years later, a relationship with Albanian was to be advanced by Zecharia Mayani, but Albanian is also known to be an Indo-European language. Several theories from the late 19th and early 20th centuries connected Etruscan to Uralic or even Altaic languages. In 1874, the British scholar Isaac Taylor brought up the idea of a genetic relationship between Etruscan and Hungarian, of which also Jules Martha would approve in his exhaustive study La langue étrusque (1913). In 1911, the French orientalist Baron Carra de Vaux suggested a connection between Etruscan and the Altaic languages. The Hungarian connection was revived by Mario Alinei, Emeritus Professor of Italian Languages at the University of Utrecht. Alinei's proposal has been rejected by Etruscan experts such as Giulio M. Facchetti, Finno-Ugric experts such as Angela Marcantonio, and by Hungarian historical linguists such as Bela Brogyanyi. 
The idea of a relation between Etruscan and the language of the Minoan Linear A scripts was taken into consideration as the main hypothesis by Michael Ventris before he discovered that, in fact, the language behind the later Linear B script was Mycenaean, a Greek dialect. Etruscan has also been proposed to be part of a wider Paleo-European "Aegean" language family, which would also include Minoan, Eteocretan (possibly descended from Minoan) and Eteocypriot. This has been proposed by Giulio Mauro Facchetti, a researcher who has dealt with both Etruscan and Minoan, and supported by S. Yatsemirsky, referring to some similarities between Etruscan and Lemnian on one hand, and Minoan and Eteocretan on the other.
prevent fair access to elections (see civil rights movement). Contexts of elections Elections are held in a variety of political, organizational, and corporate settings. Many countries hold elections to select people to serve in their governments, but other types of organizations hold elections as well. For example, many corporations hold elections among shareholders to select a board of directors, and these elections may be mandated by corporate law. In many places, an election to the government is usually a competition among people who have already won a primary election within a political party. Elections within corporations and other organizations often use procedures and rules that are similar to those of governmental elections. Electorate Suffrage The question of who may vote is a central issue in elections. The electorate does not generally include the entire population; for example, many countries prohibit those who are under the age of majority from voting, and all jurisdictions require a minimum age for voting. In Australia, Aboriginal people were not given the right to vote until 1962 (see 1967 referendum entry), and in 2010 the federal government removed the right to vote from prisoners serving sentences of three years or more, a large proportion of whom were Aboriginal Australians. Suffrage is typically only for citizens of the country, though further limits may be imposed. However, in the European Union, one can vote in municipal elections if one lives in the municipality and is an EU citizen; the nationality of the country of residence is not required. In some countries, voting is required by law; if an eligible voter does not cast a vote, he or she may be subject to punitive measures such as a fine. In Western Australia, the penalty for a first-time offender failing to vote is a $20.00 fine, which increases to $50.00 if the offender has refused to vote before. Voting population Historically, the electorate was small, consisting of groups or communities of privileged men such as aristocrats and the men of a city (citizens). As the number of people with bourgeois citizen rights outside of cities grew, expanding the term citizen, electorates grew to numbers beyond the thousands. Elections with an electorate in the hundreds of thousands appeared in the final decades of the Roman Republic, after voting rights were extended to citizens outside Rome by the Lex Julia of 90 BC; the electorate reached 910,000, with an estimated voter turnout of at most 10% in 70 BC, a size only reached again in the first elections of the United States. At the same time, the Kingdom of Great Britain had about 214,000 eligible voters in 1780, 3% of the whole population. Candidates A representative democracy requires a procedure to govern nomination for political office. In many cases, nomination for office is mediated through preselection processes in organized political parties. Non-partisan systems tend to be different from partisan systems as concerns nominations. In a direct democracy, one type of non-partisan democracy, any eligible person can be nominated. Although elections were used in ancient Athens, in Rome, and in the selection of popes and Holy Roman emperors, the origins of elections in the contemporary world lie in the gradual emergence of representative government in Europe and North America beginning in the 17th century.
In some systems no nominations take place at all, with voters free to choose any person in the jurisdiction at the time of voting, with some possible exceptions such as a minimum age requirement. In such cases, it is not required (or even possible) that the members of the electorate be familiar with all of the eligible persons, though such systems may involve indirect elections at larger geographic levels to ensure that some first-hand familiarity among potential electees can exist at these levels (i.e., among the elected delegates). As for partisan systems, in some countries only members of a particular party can be nominated (see one-party state); alternatively, any eligible person can be nominated through a nomination process, thus allowing him or her to be listed. Electoral systems Electoral systems are the detailed constitutional arrangements and voting systems that convert the vote into a political decision. The first step is to tally the votes, for which various vote counting systems and ballot types are used. Voting systems then determine the result on the basis of the tally. Most systems can be categorized as either proportional, majoritarian or mixed. Among the proportional systems, the most commonly used are party-list proportional representation (list PR) systems; among the majoritarian are the First Past the Post electoral system (plurality, also known as relative majority) and absolute majority. Mixed systems combine elements of both proportional and majoritarian methods, with some typically producing results closer to the former (mixed-member proportional) or the latter (e.g. parallel voting). Many countries have growing electoral reform movements, which advocate
systems such as approval voting, single transferable vote, instant runoff voting or a Condorcet method; these methods are also gaining popularity for lesser elections in some countries where more important elections still use more traditional counting methods. While openness and accountability are usually considered cornerstones of a democratic system, the act of casting a vote and the content of a voter's ballot are usually an important exception. The secret ballot is a relatively modern development, but it is now considered crucial in most free and fair elections, as it limits the effectiveness of intimidation. Campaigns When elections are called, politicians and their supporters attempt to influence policy by competing directly for the votes of constituents in what are called campaigns. Supporters for a campaign can be either formally organized or loosely affiliated, and frequently utilize campaign advertising. It is common for political scientists to attempt to predict elections via political forecasting methods. The most expensive election campaign included US$7 billion spent on the 2012 United States presidential election and is followed by the US$5 billion spent on the 2014 Indian general election. Election timing The nature of democracy is that elected officials are accountable to the people, and they must return to the voters at prescribed intervals to seek their mandate to continue in office. For that reason most democratic constitutions provide that elections are held at fixed regular intervals. In the United States, elections for public offices are typically held between every two and six years in most states and at the federal level, with exceptions for elected judicial positions that may have longer terms of office. There is a variety of schedules, for example presidents: the President of Ireland is elected every seven years, the President of Russia and the President of Finland every six years, the President of France every five years, President of the United States every four years.
Pre-decided or fixed election dates have the advantage of fairness and predictability. However, they tend to greatly lengthen campaigns, and make dissolving the legislature (parliamentary system) more problematic if the date should happen to fall at a time when dissolution is inconvenient (e.g. when war breaks out). Other states (e.g., the United Kingdom) only set a maximum time in office, and the executive decides exactly when within that limit it will actually go to the polls. In practice, this means the government remains in power for close to its full term, and chooses an election date it calculates to be in its best interests (unless something special happens, such as a motion of no-confidence). This calculation depends on a number of variables, such as its performance in opinion polls and the size of its majority. Non-democratic elections In many of the countries with weak rule of law, the most common reason why elections do not meet international standards of being "free and fair" is interference from the incumbent government. Dictators may use the powers of the executive (police, martial law, censorship, physical implementation of the election mechanism, etc.) to remain in power despite popular opinion in favour of removal. Members of a particular faction in a legislature may use the power of the majority or supermajority (passing criminal laws, defining the electoral mechanisms including eligibility and district boundaries) to prevent the balance of power in the body from shifting to a rival faction due to an election. Non-governmental entities can also interfere with elections, through physical force, verbal intimidation, or fraud, which can result in improper casting or counting of votes. Monitoring for and minimizing electoral fraud is also an ongoing task in countries with strong traditions of free and fair elections. Problems that prevent an election from being "free and fair" take various forms. Lack of open political debate or an informed electorate The electorate may be poorly informed about issues or candidates due to lack of freedom of the press, lack of objectivity in the press due to state or corporate control, and/or lack of access to news and political media. Freedom of speech may be curtailed by the state, favouring certain viewpoints or state propaganda. Unfair rules Gerrymandering, exclusion of opposition candidates from eligibility for office, needlessly high restrictions on who may be a candidate, like ballot access rules, and manipulating thresholds for electoral success are some of the ways the structure of an election can be changed to favour a specific faction or candidate. Interference with campaigns Those in power may arrest or assassinate candidates, suppress or even criminalize campaigning, close campaign headquarters, harass or beat campaign workers, or intimidate voters with violence. Foreign electoral intervention can also occur, with the United States interfering between 1946 and 2000 in 81 elections and Russia/USSR in 36. In 2018 the most intense interventions, by means of false information, were by China in Taiwan and by Russia in Latvia; the next highest levels were in Bahrain, Qatar and Hungary. Tampering with the election mechanism This can include falsifying voter instructions, violation of the secret ballot, ballot stuffing, tampering with voting machines, destruction of legitimately cast ballots, voter suppression, voter registration
Bill Fay also mentions Enniskillen in his song In Human Hands. The Guardian noted that residential areas including Cooper Crescent and Chanterhill Road - inner suburbs just north of the town centre - were the 'poshest', with much of the fine housing stock located outside of the town centre. The Irish language novel Mo Dhá Mhicí by Séamus Mac Annaidh is set in Enniskillen. Demography On Census day (27 March 2011) there were 13,823 people living in Enniskillen (5,733 households), accounting for 0.76% of the NI total and representing an increase of 1.6% on the Census 2001 population of 13,599. Of these: 19.76% were aged under 16 years and 15.59% were aged 65 and over; 51.80% of the usually resident population were female and 48.20% were male; 61.62% belong to or were brought up in the Catholic Christian faith and 33.55% belong to or were brought up in various 'Protestant and Other Christian (including Christian related)' denominations; 35.59% indicated that they had a British national identity, 33.77% had an Irish national identity and 30.35% had a Northern Irish national identity (respondents could indicate more than one national identity); 39 years was the average (median) age of the population; 13.03% had some knowledge of Irish (Gaelic) and 3.65% had some knowledge of Ulster-Scots. Climate Enniskillen has a maritime climate with a narrow range of temperatures and rainfall. The nearest official Met Office weather station for which online records are available is at Lough Navar Forest, about northwest of Enniskillen. Data has also more recently been collected from Enniskillen/St Angelo Airport, under north of the town centre, which should in time give a more accurate representation of the climate of the Enniskillen area. The absolute maximum temperature is , recorded during July 2006. In an 'average' year, the warmest day is and only 2.4 days a year should rise to or above. The respective absolute maximum for St Angelo is . The absolute minimum temperature is , recorded during January 1984. In an 'average' year, the coldest night should fall to . Lough Navar is a frosty location, with some 76 air frosts recorded in a typical year. It is likely that Enniskillen town centre is significantly less frosty than this. The absolute minimum at St Angelo is , reported during the record cold month of December 2010. The warmest month on record at St Angelo was August 1995 with a mean temperature of (mean maximum , mean minimum ), while the coldest month was December 2010, with a mean temperature of (mean maximum , mean minimum ). Rainfall is high, averaging over 1500 mm. 212 days of the year report at least 1 mm of precipitation, ranging from 15 days during April, May and June, to 20 days in October, November, December, January and March. The Köppen climate classification subtype for this climate is "Cfb" (Marine West Coast Climate/Oceanic climate). Places of interest Ardhowen Theatre Castle Coole Cole's Monument Enniskillen Castle Mount Lourdes Grammar School Portora Royal School (now Enniskillen Royal Grammar School) Portora Castle St. Macartin's Cathedral St. Michael's College (Enniskillen) The Clinton Centre The Regimental Museum of the Inniskilling Regiment The Round O The Marble Arch Caves Cuilcagh Mountain Global Geo-Park Monea Castle Lough Navar and the Cliffs of Magho Sports Association football The town has two association football teams called Enniskillen Rangers and Enniskillen Town United F.C.
Enniskillen Rangers are the current holders of the Irish Junior Cup, defeating Hill Street 5–1 on Monday, 1 May 2017. The match was played at the National Football Stadium at Windsor Park in Belfast. They play their home games at the Ball Range. Enniskillen Rangers have several notable former players including Sandy Fulton and Jim Cleary. Enniskillen Town United F.C. currently play in the Fermanagh & Western 1st Division. Their most notable former player is Michael McGovern who currently plays for Norwich City F.C. At the moment, Enniskillen Town play their home games at The Lakeland Forum playing fields in Enniskillen. Rugby Enniskillen Rugby Football Club was founded in 1925 and plays their home games at Mullaghmeen. The club currently fields 4 senior men's teams, a senior ladies teams, a range of male and female youth teams, a vibrant mini section and a disability tag team called The Enniskillen Elks. Enniskillen XV won the Ulster Towns Cup in the 2018/19 season, defeating Ballyclare 19–0. The team currently play in Kukri Ulster Rugby Championship Division 1. The rugby club was formed on 28 August 1925, when 37 attended a meeting in Enniskillen Town Hall. The name Enniskillen Rugby Club was agreed and the club adopted the rules of Dublin University. The first match was played on 30 September 1925 against Ballyshannon in County Donegal. Gaelic football Enniskillen Gaels are a Gaelic Athletic Association club founded in 1927. They play their home games at Brewster Park, Enniskillen. International events Enniskillen was the venue of the 39th G8 summit which was held on 17 and 18 June 2013. It was held at the Lough Erne Resort, a five-star hotel and golf resort on the shore of Lough Erne. The gathering was the biggest international diplomatic gathering ever held in Northern Ireland. Among the G8 leaders who attended were British Prime Minister David Cameron, United States President Barack Obama, German Chancellor Angela Merkel, and Russian President Vladimir Putin. In the past, Enniskillen has hosted an array of international events, most notably stages of the World Waterski World Cup, annually from 2005 to 2007 at the Broadmeadow. Despite its success, Enniskillen was not chosen as a World Cup Stop for 2008. In January 2009, Enniskillen hosted the ceremonial start of Rally Ireland 2009, the first stage of the WRC FIA World Rally Championship 2009 Calendar. Enniskillen has hosted the Happy Days arts festival since 2012, which celebrates "the work and influence of Nobel Prize-winning writer Samuel Beckett" and is the "first annual, international, multi-arts festival to be held in Northern Ireland since the launch of the Ulster Bank Belfast Festival at Queen's in 1962". Notable natives and residents Arts and Media Samuel Beckett, playwright, educated at Portora Royal School Charles Duff, Irish author of books on language learning and other subjects Adrian Dunbar, actor, born and brought up in Enniskillen Nial Fulton, film and television producer, educated at Portora Royal School Neil Hannon, lead singer/composer of the pop band The Divine Comedy, educated at Portora Royal School Charles Lawson, most notable for playing Jim McDonald in Coronation Street David McCann, author of children's books Lisa McHugh, country music singer; born in Glasgow, Scotland, she moved to Enniskillen as an adult. 
Fearghal McKinney, journalist, former UTV broadcaster and member of the Northern Ireland Assembly Nigel McLoughlin, poet, editor of Iota poetry journal and Professor of Creativity and Poetics, University of Gloucestershire Ciarán McMenamin, television actor and author Frank Ormsby, poet David Robinson, photographer and publisher, educated at Portora Royal School William Scott, artist Mick Softley singer and songwriter for Bob Dylan and
she never reached the other side, so the island was named in reference to her. It has been anglicised many ways over the centuries – Iniskellen, Iniskellin, Iniskillin, Iniskillen, Inishkellen, Inishkellin, Inishkillin, Inishkillen and so on. History The town's oldest building is Enniskillen Castle, built by Hugh (Maguire) the Hospitable who died in 1428. An earthwork, the Skonce on the shore of Lough Erne, may be the remains of an earlier motte. The castle was the stronghold of the junior branch of the Maguires. The first watergate was built around 1580 by Cú Chonnacht Maguire, though subsequent lowering of the level of the lough has left it without water. The strategic position of the castle made its capture important for the English in 1593, to support their plans for the control of Ulster. The castle was besieged three times in 1594–95. The English, led by a Captain Dowdall, captured it in February 1594. Maguire then laid siege to it, and defeated a relieving force at the Battle of the Ford of the Biscuits at Drumane Bridge on the Arney River. Although the defenders were relieved, Maguire gained possession of the castle from 1595 to 1598 and it was not finally captured by the English until 1607. This was part of a wider campaign to bring the province of Ulster under English control; the final capture of Enniskillen Castle in 1607 was followed by the Plantation of Ulster, during which the lands of the native Irish were seized and handed over to planters loyal to the English Crown. The Maguires were supplanted by William Cole, originally from Devon, who was appointed by James I to build an English settlement there. Captain Cole was installed as Constable and strengthened the castle wall and built a "fair house" on the old foundation as the centre point of the county town. The first Protestant parish church was erected on the hilltop in 1627. The Royal Free School of Fermanagh was moved onto the island in 1643. The first bridges were drawbridges; permanent bridges were not installed before 1688. By 1689 the town had grown significantly. During the conflict which resulted from the ousting of King James II by his Protestant rival, William III, Enniskillen and Derry were the focus of Williamite resistance in Ireland, including the nearby Battle of Newtownbutler. Enniskillen and Derry were the two garrisons in Ulster that were not wholly loyal to James II, and it was the last town to fall before the siege of Derry. As a direct result of this conflict, Enniskillen developed not only as a market town but also as a garrison, which became home to two regiments. The current site of Fermanagh College (now part of the South West College) was the former Enniskillen Gaol. Many people were tried and hanged in the square during the times of public execution. Part of the old Gaol is still used by the college. Enniskillen Town Hall was designed by William Scott and completed in 1901. Military history Enniskillen is the site of the foundation of two British Army regiments: Royal Inniskilling Fusiliers The Inniskillings (6th Dragoons) The town's name (with the archaic spelling) continues to form part of the title to The Royal Irish Regiment (27th (Inniskilling) 83rd and 87th and Ulster Defence Regiment). Enniskillen Castle features on the cap badge of both regiments. The Troubles Enniskillen was the site of several events during The Troubles, the most notable being the Remembrance Day bombing in which 11 people were killed. Bill Clinton opened the Clinton centre in 2002 on the site of the bombing. 
The Provisional Irish Republican Army claimed responsibility for the attack. Alleged sexual abuse and assault In 2019, at least nine men reported to the police and the press and said in public forums that, in the 1980s and 90s, when they were children, they were repeatedly molested and raped by a paedophile ring of at least 20 men in the Enniskillen area. Investigations are continuing. Miscellaneous The Enniskillen Dragoon is a famous Irish folk song associated with the Inniskilling Dragoons Regiment. Tommy Makem wrote additional verses and renamed the song Fare Thee Well, Enniskillen. The Chieftains sing a song that mentions Enniskillen titled "North Amerikay". Jim Kerr of Simple Minds was so moved by the horror of the Enniskillen bombing in 1987 that he wrote new words to the traditional folk song "She Moved Through The Fair" and the group recorded it with the name "Belfast Child". The recording reached No. 1 in the UK Charts, Ireland and several other countries in
American computer programmer and author. Eric Raymond may also refer to: Eric Scott Raymond
(born 1956), American flight instructor and glider pilot Eric Raymond (Jem), a fictional character in the 1980s cartoon television
John Horton Conway and Landon Curt Noll developed an open-ended system for naming powers of 10, in which one , coming from the Latin name for 6560, is the name for 10^(3(6560+1)) = 10^19683. Under the long number scale, it would be 10^(6×6560) = 10^39360. is sometimes cited as the longest binomial name—it is a kind of amphipod. However, this name, proposed by B. Dybowski, was invalidated by the International Code of Zoological Nomenclature in 1929 after being petitioned by Mary J. Rathbun to take up the case. Myxococcus llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogochensis is the longest accepted binomial name for an organism. It is a bacterium found in soil collected at Llanfairpwllgwyngyll (discussed below). Parastratiosphecomyia stratiosphecomyioides is the longest accepted binomial name for any animal, or any organism visible with the naked eye. It is a species of soldier fly. The genus name Parapropalaehoplophorus (a fossil glyptodont, an extinct family of mammals related to armadillos) is two letters longer, but does not contain a similarly long species name. , at 52 letters, describing the spa waters at Bath, England, is attributed to Dr. Edward Strother (1675–1737). The word is composed of the following elements: Aequeo: equal (Latin, aequo) Salino: containing salt (Latin, salinus) Calcalino: calcium (Latin, calx) Ceraceo: waxy (Latin, cera) Aluminoso: alumina (Latin) Cupreo: from "copper" Vitriolic: resembling vitriol Notable long words Place names The longest officially recognized place name in an English-speaking country is (85 letters), which is a hill in New Zealand. The name is in the Māori language. A widely recognized version of the name is Taumatawhakatangihangakoauauotamateaturipukakapikimaungahoronukupokaiwhenuakitanatahu (85 letters), which appears on the signpost at the location (see the photo on this page). In Māori, the digraphs ng and wh are each treated as single letters. In Canada, the longest place name is Dysart, Dudley, Harcourt, Guilford, Harburn, Bruton, Havelock, Eyre and Clyde, a township in Ontario, at 61 letters or 68 non-space characters. The 58-letter name Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch is the name of a town on Anglesey, an island of Wales. In terms of the traditional Welsh alphabet, the name is only 51 letters long, as certain digraphs in Welsh are considered as single letters, for instance ll, ng and ch. It is generally agreed, however, that this invented name, adopted in the mid-19th century, was contrived solely to be the longest name of any town in Britain. The official name of the place is Llanfairpwllgwyngyll, commonly abbreviated to Llanfairpwll or Llanfair PG. The longest non-contrived place name in the United Kingdom which is a single non-hyphenated word is Cottonshopeburnfoot (19 letters) and the longest which is hyphenated is Sutton-under-Whitestonecliffe (29 characters). The longest place name in the United States (45 letters) is , a lake in Webster, Massachusetts. It means "Fishing Place at the Boundaries – Neutral Meeting Grounds" and is sometimes facetiously translated as "you fish your side of the water, I fish my side of the water, nobody fishes the middle". The lake is also known as Webster Lake. The longest hyphenated names in the U.S. are Winchester-on-the-Severn, a town in Maryland, and Washington-on-the-Brazos, a notable place in Texas history. The longest official geographical name in Australia is . It has 26 letters and is a Pitjantjatjara word meaning "where the Devil urinates".
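With the exponents restored, the Conway–Noll figures above are an instance of the standard short-scale and long-scale conventions for the n-th "-illion"; the general forms 10^(3(n+1)) and 10^(6n) are stated here as background rather than taken from this passage, with n = 6560 reproducing the quoted values.

```latex
% Short-scale vs long-scale value of the n-th "-illion"; n = 6560 gives the figures quoted above.
\[
  \text{short scale: } 10^{3(n+1)}\big|_{n=6560} = 10^{3 \cdot 6561} = 10^{19683},
  \qquad
  \text{long scale: } 10^{6n}\big|_{n=6560} = 10^{39360}.
\]
```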
In Ireland, the longest English place name at 19 letters is Newtownmountkennedy in County Wicklow. Personal names Guinness World Records formerly contained a category for longest personal name used. From about 1975 to 1985, the recordholder was Adolph Blaine Charles David Earl Frederick Gerald Hubert Irvin John Kenneth Lloyd Martin Nero Oliver Paul Quincy Randolph Sherman Thomas Uncas Victor William Xerxes Yancy Zeus Senior (746 letters), also known as Wolfe+585, Senior. After 1985 Guinness briefly awarded the record to a newborn girl with a longer name. The category was removed shortly afterward. Long birth names are often coined in protest of naming laws or for other personal reasons. The naming law in Sweden was challenged by parents Lasse Diding and Elisabeth Hallin, who proposed the given name "Brfxxccxxmnpcccclllmmnprxvclmnckssqlbb11116" for their child (pronounced , 43 characters), which was rejected by a district court in Halmstad, southern Sweden. Words with certain characteristics of notable length Schmaltzed and strengthed (10 letters) appear to be the longest monosyllabic words recorded in The Oxford English Dictionary, while scraunched and scroonched appear to be the longest monosyllabic words recorded in Webster's Third New International Dictionary; but squirrelled (11 letters) is the longest if pronounced as one syllable only (as permitted in The Shorter Oxford English Dictionary and Merriam-Webster Online Dictionary at squirrel, and in Longman Pronunciation Dictionary). Schtroumpfed (12 letters) was coined by Umberto Eco, while broughammed (11 letters) was coined by William Harmon after broughamed (10 letters) was coined by George Bernard Shaw. Strengths is the longest word in the English language containing only one vowel letter. Euouae, a medieval musical term, is the longest English word consisting only of vowels, and the word with the most consecutive vowels. However, the "word" itself is simply a mnemonic consisting of the vowels to be sung in the phrase "seculorum Amen" at the end of the lesser doxology. (Although u was often used interchangeably with v, and the variant "Evovae" is occasionally used, the v in these cases would still be a vowel.) The longest words with no repeated letters are dermatoglyphics and uncopyrightable. The longest word whose letters are in alphabetical order is the eight-letter Aegilops, a grass genus. However, this is arguably a proper noun. There are several six-letter English words with their letters in alphabetical order, including abhors, almost, begins, biopsy, chimps and chintz. There are few 7-letter words, such as "billowy". The longest words whose letters are in reverse alphabetical order are sponged and wronged. The longest words recorded in OED with each vowel only once, and in order, are abstemiously, affectiously, and (OED). Fracedinously and gravedinously (constructed from adjectives in OED) have thirteen letters; Gadspreciously, constructed from Gadsprecious (in OED), has fourteen letters. Facetiously is among the few other words directly attested in OED with single occurrences of all six vowels (counting y as a vowel). The longest single palindromic word in English is rotavator, another name for a rotary tiller for breaking and aerating soil. Typed words The longest words typable with only the left hand using conventional hand placement on a QWERTY keyboard are tesseradecades, aftercataracts, and the more common but sometimes hyphenated sweaterdresses. Using the right
from "copper" Vitriolic: resembling vitriol Notable long words Place names The longest officially recognized place name in an English-speaking country is (85 letters), which is a hill in New Zealand. The name is in the Māori language. A widely recognized version of the name is Taumatawhakatangihangakoauauotamateaturipukakapikimaungahoronukupokaiwhenuakitanatahu (85 letters), which appears on the signpost at the location (see the photo on this page). In Māori, the digraphs ng and wh are each treated as single letters. In Canada, the longest place name is Dysart, Dudley, Harcourt, Guilford, Harburn, Bruton, Havelock, Eyre and Clyde, a township in Ontario, at 61 letters or 68 non-space characters. The 58-letter name Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch is the name of a town on Anglesey, an island of Wales. In terms of the traditional Welsh alphabet, the name is only 51 letters long, as certain digraphs in Welsh are considered as single letters, for instance ll, ng and ch. It is generally agreed, however, that this invented name, adopted in the mid-19th century, was contrived solely to be the longest name of any town in Britain. The official name of the place is Llanfairpwllgwyngyll, commonly abbreviated to Llanfairpwll or Llanfair PG. The longest non-contrived place name in the United Kingdom which is a single non-hyphenated word is Cottonshopeburnfoot (19 letters) and the longest which is hyphenated is Sutton-under-Whitestonecliffe (29 characters). The longest place name in the United States (45 letters) is , a lake in Webster, Massachusetts. It means "Fishing Place at the Boundaries – Neutral Meeting Grounds" and is sometimes facetiously translated as "you fish your side of the water, I fish my side of the water, nobody fishes the middle". The lake is also known as Webster Lake. The longest hyphenated names in the U.S. are Winchester-on-the-Severn, a town in Maryland, and Washington-on-the-Brazos, a notable place in Texas history. The longest official geographical name in Australia is . It has 26 letters and is a Pitjantjatjara word meaning "where the Devil urinates". In Ireland, the longest English place name at 19 letters is Newtownmountkennedy in County Wicklow. Personal names Guinness World Records formerly contained a category for longest personal name used. From about 1975 to 1985, the recordholder was Adolph Blaine Charles David Earl Frederick Gerald Hubert Irvin John Kenneth Lloyd Martin Nero Oliver Paul Quincy Randolph Sherman Thomas Uncas Victor William Xerxes Yancy Zeus Senior (746 letters), also known as Wolfe+585, Senior. After 1985 Guinness briefly awarded the record to a newborn girl with a longer name. The category was removed shortly afterward. Long birth names are often coined in protest of naming laws or for other personal reasons. The naming law in Sweden was challenged by parents Lasse Diding and Elisabeth Hallin, who proposed the given name "Brfxxccxxmnpcccclllmmnprxvclmnckssqlbb11116" for their child (pronounced , 43 characters), which was rejected by a district court in Halmstad, southern Sweden. 
hand alone, the longest word that can be typed is johnny-jump-up, or, excluding hyphens, monimolimnion and phyllophyllin. The longest English word typable using only the top row of letters has 11 letters: rupturewort. The word teetertotter (used in North American English) is longer at 12 letters, although it is usually spelled with a hyphen. The longest using only the middle row is shakalshas (10 letters). Nine-letter words include flagfalls; eight-letter words include galahads and alfalfas. Since the bottom row contains no vowels, no standard English words can be typed using it alone.
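The letter-pattern claims in this passage (words in alphabetical or reverse-alphabetical order, words with no repeated letters, single-row or single-hand typability, the palindrome rotavator) are all mechanical properties of the spelling and can be checked directly. A minimal Python sketch follows; the hand and row letter sets are the conventional QWERTY touch-typing splits assumed above, not data quoted from this article.

```python
LEFT_HAND  = set("qwertasdfgzxcvb")   # conventional left-hand QWERTY letters
RIGHT_HAND = set("yuiophjklnm")       # conventional right-hand QWERTY letters
TOP_ROW, MID_ROW = set("qwertyuiop"), set("asdfghjkl")

def in_alphabetical_order(word, reverse=False):
    """True if the word's letters are in (reverse-)alphabetical order, repeats allowed."""
    letters = list(word.lower())
    return letters == sorted(letters, reverse=reverse)

def has_no_repeated_letters(word):
    return len(set(word.lower())) == len(word)

def typable_with(word, keys):
    """True if every letter of the word lies in the given key set (hyphens ignored)."""
    return set(word.lower().replace("-", "")) <= keys

print(in_alphabetical_order("Aegilops"))               # True
print(in_alphabetical_order("sponged", reverse=True))  # True
print(has_no_repeated_letters("uncopyrightable"))      # True
print(typable_with("tesseradecades", LEFT_HAND))       # True
print(typable_with("johnny-jump-up", RIGHT_HAND))      # True
print(typable_with("rupturewort", TOP_ROW))            # True
print("rotavator" == "rotavator"[::-1])                # True (palindrome)
```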
role of ambassador of open source to the press, business and public. He remains active in OSI, but stepped down as president of the initiative in February 2005. In early March 2020, he was removed from two Open Source Initiative mailing lists due to posts that violated the OSI's Code of Conduct. In 1998 Raymond received and published a Microsoft document expressing worry about the quality of rival open-source software. He named this document, together with others subsequently leaked, "The Halloween Documents". In 2000–2002 he created Configuration Menu Language 2 (CML2), a source code configuration system; while originally intended for the Linux operating system, it was rejected by kernel developers. (Raymond attributed this rejection to "kernel list politics", but Linus Torvalds said in a 2007 mailing list post that as a matter of policy, the development team preferred more incremental changes.) Raymond's 2003 book The Art of Unix Programming discusses user tools for programming and other tasks. Some versions of NetHack still include Raymond's guide. He has also contributed code and content to the free software video game The Battle for Wesnoth. Raymond is the main developer of NTPSec, a "secure, hardened replacement" for the Unix utility NTP. Views on open source Raymond coined an aphorism he dubbed Linus's law, inspired by Linus Torvalds: "Given enough eyeballs, all bugs are shallow". It first appeared in his book The Cathedral and the Bazaar. Raymond has refused to speculate on whether the "bazaar" development model could be applied to works such as books and music, saying that he does not want to "weaken the winning argument for open-sourcing software by tying it to a potential loser". Raymond has had a number of public disputes with other figures in the free software movement. As head of the Open Source Initiative, he argued that advocates should focus on the potential for better products. The "very seductive" moral and ethical rhetoric of Richard Stallman and the Free Software Foundation fails, he said, "not because his principles are wrong, but because that kind of language ... simply does not persuade anybody". In a 2008 essay he defended programmers' right to issue work under proprietary licenses: "I think that if a programmer wants to write a program and sell it, it's neither my business nor anyone else's but his customer's what the terms of sale are." In the same essay he said that the "logic of the system" puts developers into "dysfunctional roles", with bad code the result. Political beliefs and activism Raymond is a member of the Libertarian Party. He is a gun rights advocate. He has endorsed the open source firearms organization Defense Distributed, calling them "friends of freedom" and writing "I approve of any development that makes it more difficult for governments and criminals to monopolize the use of force. As 3D printers become less expensive and more ubiquitous, this could be a major step in the right direction." In 2015 Raymond accused the Ada Initiative and other women in tech groups of attempting to entrap male open source leaders and accuse them of rape, saying "Try to avoid even being alone, ever, because there is a chance that a 'women in tech' advocacy group is going to try to collect your scalp." Raymond has claimed that "Gays experimented with unfettered promiscuity in the 1970s and got AIDS as a consequence",
a child. His family moved to Pennsylvania in 1971. He developed cerebral palsy at birth; his weakened physical condition motivated him to go into computing. Career Raymond began his programming career writing proprietary software, between 1980 and 1985. In 1990, noting that the Jargon File had not been maintained since about 1983, he adopted it, but not without criticism; Paul Dourish maintains an archived original version of the Jargon File, because, he says, Raymond's updates "essentially destroyed what held it together." In 1996 Raymond took over development of the open-source email software "popclient", renaming it to Fetchmail. Soon after this experience, in 1997, he wrote the essay "The Cathedral and the Bazaar", detailing his thoughts on open-source software development and why it should be done as openly as possible (the "bazaar" approach). The essay was based in part on his experience in developing Fetchmail. He first presented his thesis at the annual Linux Kongress on May 27, 1997. He later expanded the essay into a book, The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, in 1999. The essay has been widely cited. The internal white paper by Frank Hecker that led to the release of the Mozilla (then Netscape) source code in 1998 cited The Cathedral and the Bazaar as "independent validation" of ideas proposed by Eric Hahn and Jamie Zawinski. Hahn would later describe the 1999 book as "clearly influential". From the late 1990s onward, due in part to the popularity of his essay, Raymond became a prominent voice in the open source movement. He co-founded the Open Source Initiative (OSI) in 1998, taking on the self-appointed role of ambassador of open source to the press, business and public.
an unconscious defense mechanism by which an individual projects their own internal characteristics onto the outside world, particularly onto other people. For example, a patient
who is overly argumentative might instead perceive others as argumentative and themselves as blameless. Like other defense mechanisms, externalization is a protection against anxiety and is, therefore, part of a healthy, normally functioning mind. However, if taken to excess, it can lead to
Unit (ECU). The notes and coins for the old currencies, however, continued to be used as legal tender until new euro notes and coins were introduced on 1 January 2002. The changeover period during which the former currencies' notes and coins were exchanged for those of the euro lasted about two months, until 28 February 2002. The official date on which the national currencies ceased to be legal tender varied from member state to member state. The earliest date was in Germany, where the mark officially ceased to be legal tender on 31 December 2001, though the exchange period lasted for two months more. Even after the old currencies ceased to be legal tender, they continued to be accepted by national central banks for periods ranging from several years to indefinitely (the latter for Austria, Germany, Ireland, Estonia and Latvia in banknotes and coins, and for Belgium, Luxembourg, Slovenia and Slovakia in banknotes only). The earliest coins to become non-convertible were the Portuguese escudos, which ceased to have monetary value after 31 December 2002, although banknotes remain exchangeable until 2022. Eurozone crisis Following the U.S. financial crisis in 2008, fears of a sovereign debt crisis developed in 2009 among investors concerning some European states, with the situation becoming particularly tense in early 2010. Greece was most acutely affected, but fellow Eurozone members Cyprus, Ireland, Italy, Portugal, and Spain were also significantly affected. All these countries utilized EU funds except Italy, which is a major donor to the EFSF. To be included in the eurozone, countries had to fulfil certain convergence criteria, but the meaningfulness of such criteria was diminished by the fact it was not enforced with the same level of strictness among countries. According to the Economist Intelligence Unit in 2011, "[I]f the [euro area] is treated as a single entity, its [economic and fiscal] position looks no worse and in some respects, rather better than that of the US or the UK" and the budget deficit for the euro area as a whole is much lower and the euro area's government debt/GDP ratio of 86% in 2010 was about the same level as that of the United States. "Moreover", they write, "private-sector indebtedness across the euro area as a whole is markedly lower than in the highly leveraged Anglo-Saxon economies". The authors conclude that the crisis "is as much political as economic" and the result of the fact that the euro area lacks the support of "institutional paraphernalia (and mutual bonds of solidarity) of a state". The crisis continued with S&P downgrading the credit rating of nine euro-area countries, including France, then downgrading the entire European Financial Stability Facility (EFSF) fund. A historical parallel – to 1931 when Germany was burdened with debt, unemployment and austerity while France and the United States were relatively strong creditors – gained attention in summer 2012 even as Germany received a debt-rating warning of its own. In the enduring of this scenario the euro serves as a mean of quantitative primitive accumulation. Direct and indirect usage Direct usage The euro is the sole currency of 19 EU member states: Austria, Belgium, Cyprus, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Portugal, Slovakia, Slovenia, and Spain. These countries constitute the "eurozone", some 343 million people in total . 
With all but one (Denmark) EU members obliged to join when economic conditions permit, together with future members of the EU, the enlargement of the eurozone is set to continue. Outside the EU, the euro is also the sole currency of Montenegro and Kosovo and several European microstates (Andorra, Monaco, San Marino and the Vatican City) as well as in three overseas territories of France that are not themselves part of the EU, namely Saint Barthélemy, Saint Pierre and Miquelon, and the French Southern and Antarctic Lands. Together this direct usage of the euro outside the EU affects nearly 3 million people. The euro has been used as a trading currency in Cuba since 1998, Syria since 2006, and Venezuela since 2018. There are also various currencies pegged to the euro (see below). In 2009, Zimbabwe abandoned its local currency and used major currencies instead, including the euro and the United States dollar. Use as reserve currency Since its introduction, the euro has been the second most widely held international reserve currency after the U.S. dollar. The share of the euro as a reserve currency increased from 18% in 1999 to 27% in 2008. Over this period, the share held in U.S. dollar fell from 71% to 64% and that held in RMB fell from 6.4% to 3.3%. The euro inherited and built on the status of the Deutsche Mark as the second most important reserve currency. The euro remains underweight as a reserve currency in advanced economies while overweight in emerging and developing economies: according to the International Monetary Fund the total of euro held as a reserve in the world at the end of 2008 was equal to $1.1 trillion or €850 billion, with a share of 22% of all currency reserves in advanced economies, but a total of 31% of all currency reserves in emerging and developing economies. The possibility of the euro becoming the first international reserve currency has been debated among economists. Former Federal Reserve Chairman Alan Greenspan gave his opinion in September 2007 that it was "absolutely conceivable that the euro will replace the US dollar as reserve currency, or will be traded as an equally important reserve currency". In contrast to Greenspan's 2007 assessment, the euro's increase in the share of the worldwide currency reserve basket has slowed considerably since 2007 and since the beginning of the worldwide credit crunch related recession and European sovereign-debt crisis. Currencies pegged to the euro Outside the eurozone, a total of 22 countries and territories that do not belong to the EU have currencies that are directly pegged to the euro including 14 countries in mainland Africa (CFA franc), two African island countries (Comorian franc and Cape Verdean escudo), three French Pacific territories (CFP franc) and three Balkan countries, Bosnia and Herzegovina (Bosnia and Herzegovina convertible mark), Bulgaria (Bulgarian lev) and North Macedonia (Macedonian denar). On 28 July 2009, São Tomé and Príncipe signed an agreement with Portugal which will eventually tie its currency to the euro. Additionally, the Moroccan dirham is tied to a basket of currencies, including the euro and the US dollar, with the euro given the highest weighting. With the exception of Bosnia, Bulgaria, North Macedonia (which had pegged their currencies against the Deutsche Mark) and Cape Verde (formerly pegged to the Portuguese escudo), all of these non-EU countries had a currency peg to the French Franc before pegging their currencies to the euro. 
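Because these pegs are fixed administrative rates rather than market prices, converting between a pegged currency and the euro reduces to a lookup table and a single multiplication or division. The following is a minimal Python sketch; the peg rates shown (655.957 CFA francs and 1.95583 convertible marks or leva per euro) are commonly cited figures supplied here as illustrative assumptions, not values quoted in this article.

```python
# Commonly cited fixed peg rates, in units of local currency per 1 euro.
# Treat these constants as illustrative assumptions.
PEG_RATES_PER_EUR = {
    "XOF": 655.957,   # West African CFA franc
    "XAF": 655.957,   # Central African CFA franc
    "BAM": 1.95583,   # Bosnia and Herzegovina convertible mark
    "BGN": 1.95583,   # Bulgarian lev
}

def to_euro(amount, currency):
    """Convert an amount in a euro-pegged currency to euro at its fixed rate."""
    return amount / PEG_RATES_PER_EUR[currency]

def from_euro(amount_eur, currency):
    """Convert an amount in euro to a pegged currency at its fixed rate."""
    return amount_eur * PEG_RATES_PER_EUR[currency]

print(round(to_euro(10_000, "XOF"), 2))   # about 15.24 EUR
print(round(from_euro(100, "BAM"), 2))    # 195.58 BAM
```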
Pegging a country's currency to a major currency is regarded as a safety measure, especially for currencies of areas with weak economies, as the euro is seen as a stable currency, prevents runaway inflation and encourages foreign investment due to its stability. Within the EU several currencies are pegged to the euro, mostly as a precondition to joining the eurozone. The Danish krone, Croatian kuna and Bulgarian lev are pegged due to their participation in the ERM II. In total, , 182 million people in Africa use a currency pegged to the euro, 27 million people outside the eurozone in Europe, and another 545,000 people on Pacific islands. Since 2005, stamps issued by the Sovereign Military Order of Malta have been denominated in euro, although the Order's official currency remains the Maltese scudo. The Maltese scudo itself is pegged to the euro and is only recognised as legal tender within the Order. Economics Optimal currency area In economics, an optimum currency area, or region (OCA or OCR), is a geographical region in which it would maximise economic efficiency to have the entire region share a single currency. There are two models, both proposed by Robert Mundell: the stationary expectations model and the international risk sharing model. Mundell himself advocates the international risk sharing model and thus concludes in favour of the euro. However, even before the creation of the single currency, there were concerns over diverging economies. Before the late-2000s recession it was considered unlikely that a state would leave the euro or the whole zone would collapse. However the Greek government-debt crisis led to former British Foreign Secretary Jack Straw claiming the eurozone could not last in its current form. Part of the problem seems to be the rules that were created when the euro was set up. John Lanchester, writing for The New Yorker, explains it: Transaction costs and risks The most obvious benefit of adopting a single currency is to remove the cost of exchanging currency, theoretically allowing businesses and individuals to consummate previously unprofitable trades. For consumers, banks in the eurozone must charge the same for intra-member cross-border transactions as purely domestic transactions for electronic payments (e.g., credit cards, debit cards and cash machine withdrawals). Financial markets on the continent are expected to be far more liquid and flexible than they were in the past. The reduction in cross-border transaction costs will allow larger banking firms to provide a wider array of banking services that can compete across and beyond the eurozone. However, although transaction costs were reduced, some studies have shown that risk aversion has increased during the last 40 years in the Eurozone. Price parity Another effect of the common European currency is that differences in prices—in particular in price levels—should decrease because of the law of one price. Differences in prices can trigger arbitrage, i.e., speculative trade in a commodity across borders purely to exploit the price differential. Therefore, prices on commonly traded goods are likely to converge, causing inflation in some regions and deflation in others during the transition. Some evidence of this has been observed in specific eurozone markets. Macroeconomic stability Before the introduction of the euro, some countries had successfully contained inflation, which was then seen as a major economic problem, by establishing largely independent central banks. 
One such bank was the Bundesbank in Germany; the European Central Bank was modelled on the Bundesbank. The euro has come under criticism due to its regulation, lack of flexibility and rigidity towards sharing member States on issues such as nominal interest rates. Many national and corporate bonds denominated in euro are significantly more liquid and have lower interest rates than was historically the case when denominated in national currencies. While increased liquidity may lower the nominal interest rate on the bond, denominating the bond in a currency with low levels of inflation arguably plays a much larger role. A credible commitment to low levels of inflation and a stable debt reduces the risk that the value of the debt will be eroded by higher levels of inflation or default in the future, allowing debt to be issued at a lower nominal interest rate. Unfortunately, there is also a cost in structurally keeping inflation lower than in the United States, UK, and China. The result is that seen from those countries, the euro has become expensive, making European products increasingly expensive for its largest importers; hence export from the eurozone becomes more difficult. In general, those in Europe who own large amounts of euro are served by high stability and low inflation. A monetary union means states in that union lose the main mechanism of recovery of their international competitiveness by weakening (depreciating) their currency. When wages become too high compared to productivity in exports sector then these exports become more expensive and they are crowded out from the market within a country and abroad. This drives fall of employment and output in the exports sector and fall of trade and current account balances. Fall of output and employment in tradable goods sector may be offset by the growth of non-exports sectors, especially in construction and services. Increased purchases abroad and negative current account balance can be financed without a problem as long as credit is cheap. The need to finance trade deficit weakens currency making exports automatically more attractive in a country and abroad. A state in a monetary union cannot use weakening of currency to recover its international competitiveness. To achieve this a state has to reduce prices, including wages (deflation). This could result in high unemployment and lower incomes as it was during European sovereign-debt crisis. Trade The euro increased price transparency and stimulated cross-border trade. A 2009 consensus from the studies of the introduction of the euro concluded that it has increased trade within the eurozone by 5% to 10%, although one study suggested an increase of only 3% while another estimated 9 to 14%. However, a meta-analysis of all available studies suggests that the prevalence of positive estimates is caused by publication bias and that the underlying effect may be negligible. Although a more recent meta-analysis shows that publication bias decreases over time and that there are positive trade effects from the introduction of the euro, as long as results from before 2010 are taken into account. This may be because of the inclusion of the Financial crisis of 2007–2008 and ongoing integration within the EU. Furthermore, older studies accounting for time trend reflecting general cohesion policies in Europe that started before, and continue after implementing the common currency find no effect on trade. 
These results suggest that other policies aimed at European integration might be the source of observed increase in trade. According to Barry Eichengreen, studies disagree on the magnitude of the effect of the euro on trade, but they agree that it did have an effect. Investment Physical investment seems to have increased by 5% in the eurozone due to the introduction. Regarding foreign direct investment, a study found that the intra-eurozone FDI stocks have increased by about 20% during the first four years of the EMU. Concerning the effect on corporate investment, there is evidence that the introduction of the euro has resulted in an increase in investment rates and that it has made it easier for firms to access financing in Europe. The euro has most specifically stimulated investment in companies that come from countries that previously had weak currencies. A study found that the introduction of the euro accounts for 22% of the investment rate after 1998 in countries that previously had a weak currency. Inflation The introduction of the euro has led to extensive discussion about its possible effect on inflation. In the short term, there was a widespread impression in the population of the eurozone that the introduction of the euro had led to an increase in prices, but this impression was not confirmed by general indices of inflation and other studies. A study of this paradox found that this was due to an asymmetric effect of the introduction of the euro on prices: while it had no effect on most goods, it had an effect on cheap goods which have seen their price round up after the introduction of the euro. The study found that consumers based their beliefs on inflation of those cheap goods which are frequently purchased. It has also been suggested that the jump in small prices may be because prior to the introduction, retailers made fewer upward adjustments and waited for the introduction of the euro to do so. Exchange rate risk One of the advantages of the adoption of a common currency is the reduction of the risk associated with changes in currency exchange rates. It has been found that the introduction of the euro created "significant reductions in market risk exposures for nonfinancial firms both in and outside Europe". These reductions in market risk "were concentrated in firms domiciled in the eurozone and in non-euro firms with a high fraction of foreign sales or assets in Europe". Financial integration The introduction of the euro increased European financial integration, which helped stimulate growth of a European securities market (bond markets are characterized by economies of scale dynamics). According to a study on this question, it has "significantly reshaped the European financial system, especially with respect to the securities markets [...] However, the real and policy barriers to integration in the retail and corporate banking sectors remain significant, even if the wholesale end of banking has been largely integrated." Specifically, the euro has significantly decreased the cost of trade in bonds, equity, and banking assets within the eurozone. On a global level, there is evidence that the introduction of the euro has led to an integration in terms of investment in bond portfolios, with eurozone countries lending and borrowing more between each other than with other countries. Financial integration made it cheaper for
Trade The euro increased price transparency and stimulated cross-border trade. A 2009 consensus from the studies of the introduction of the euro concluded that it has increased trade within the eurozone by 5% to 10%, although one study suggested an increase of only 3% while another estimated 9 to 14%. However, a meta-analysis of all available studies suggests that the prevalence of positive estimates is caused by publication bias and that the underlying effect may be negligible. Although a more recent meta-analysis shows that publication bias decreases over time and that there are positive trade effects from the introduction of the euro, as long as results from before 2010 are taken into account. This may be because of the inclusion of the Financial crisis of 2007–2008 and ongoing integration within the EU. Furthermore, older studies accounting for time trend reflecting general cohesion policies in Europe that started before, and continue after implementing the common currency find no effect on trade. These results suggest that other policies aimed at European integration might be the source of observed increase in trade. According to Barry Eichengreen, studies disagree on the magnitude of the effect of the euro on trade, but they agree that it did have an effect. Investment Physical investment seems to have increased by 5% in the eurozone due to the introduction. Regarding foreign direct investment, a study found that the intra-eurozone FDI stocks have increased by about 20% during the first four years of the EMU. Concerning the effect on corporate investment, there is evidence that the introduction of the euro has resulted in an increase in investment rates and that it has made it easier for firms to access financing in Europe. The euro has most specifically stimulated investment in companies that come from countries that previously had weak currencies. A study found that the introduction of the euro accounts for 22% of the investment rate after 1998 in countries that previously had a weak currency. Inflation The introduction of the euro has led to extensive discussion about its possible effect on inflation. In the short term, there was a widespread impression in the population of the eurozone that the introduction of the euro had led to an increase in prices, but this impression was not confirmed by general indices of inflation and other studies. A study of this paradox found that this was due to an asymmetric effect of the introduction of the euro on prices: while it had no effect on most goods, it had an effect on cheap goods which have seen their price round up after the introduction of the euro. The study found that consumers based their beliefs on inflation of those cheap goods which are frequently purchased. It has also been suggested that the jump in small prices may be because prior to the introduction, retailers made fewer upward adjustments and waited for the introduction of the euro to do so. Exchange rate risk One of the advantages of the adoption of a common currency is the reduction of the risk associated with changes in currency exchange rates. It has been found that the introduction of the euro created "significant reductions in market risk exposures for nonfinancial firms both in and outside Europe". These reductions in market risk "were concentrated in firms domiciled in the eurozone and in non-euro firms with a high fraction of foreign sales or assets in Europe". 
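The perceived-inflation paradox described above is easy to reproduce numerically: converting a low price at the official rate and then rounding it up to a "convenient" price point implies a large percentage jump on that item, even though the effect on an aggregate index is small. The item, its old price and the rounding choice below are hypothetical; only the Deutsche Mark conversion rate of 1.95583 DEM per euro is the officially fixed rate.

# Hypothetical example of "rounding up" a cheap, frequently purchased item at the changeover.
DEM_PER_EUR = 1.95583                      # irrevocably fixed conversion rate

old_price_dem = 2.99                       # hypothetical pre-changeover price of a coffee
exact_eur = old_price_dem / DEM_PER_EUR    # about 1.53 EUR
rounded_eur = 1.60                         # hypothetical "convenient" price actually charged

jump = (rounded_eur - exact_eur) / exact_eur
print(f"exact conversion: {exact_eur:.2f} EUR, charged: {rounded_eur:.2f} EUR "
      f"({jump:.1%} increase on this item)")

A few percent on cheap, frequently bought goods is very noticeable to consumers but barely moves a broad index such as the HICP, consistent with the studies cited above.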
Financial integration The introduction of the euro increased European financial integration, which helped stimulate growth of a European securities market (bond markets are characterized by economies of scale dynamics). According to a study on this question, it has "significantly reshaped the European financial system, especially with respect to the securities markets [...] However, the real and policy barriers to integration in the retail and corporate banking sectors remain significant, even if the wholesale end of banking has been largely integrated." Specifically, the euro has significantly decreased the cost of trade in bonds, equity, and banking assets within the eurozone. On a global level, there is evidence that the introduction of the euro has led to an integration in terms of investment in bond portfolios, with eurozone countries lending and borrowing more between each other than with other countries. Financial integration made it cheaper for European companies to borrow. Banks, firms and households could also invest more easily outside of their own country, thus creating greater international risk-sharing. Effect on interest rates As of January 2014, and since the introduction of the euro, interest rates of most member countries (particularly those with a weak currency) have decreased. Some of these countries had the most serious sovereign financing problems. The effect of declining interest rates, combined with excess liquidity continually provided by the ECB, made it easier for banks within the countries in which interest rates fell the most, and their linked sovereigns, to borrow significant amounts (above the 3% of GDP budget deficit imposed on the eurozone initially) and significantly inflate their public and private debt levels. Following the financial crisis of 2007–2008, governments in these countries found it necessary to bail out or nationalise their privately held banks to prevent systemic failure of the banking system when underlying hard or financial asset values were found to be grossly inflated and sometimes so near worthless there was no liquid market for them. This further increased the already high levels of public debt to a level the markets began to consider unsustainable, via increasing government bond interest rates, producing the ongoing European sovereign-debt crisis. Price convergence The evidence on the convergence of prices in the eurozone with the introduction of the euro is mixed. Several studies failed to find any evidence of convergence following the introduction of the euro after a phase of convergence in the early 1990s. Other studies have found evidence of price convergence, in particular for cars. A possible reason for the divergence between the different studies is that the
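The feedback between rising government bond interest rates and debt levels that markets consider unsustainable, described above, follows from the standard debt-dynamics identity: the debt-to-GDP ratio grows whenever the interest rate exceeds nominal growth and the primary balance does not compensate. The parameters in the sketch below are hypothetical.

# Simple debt-to-GDP dynamics: b_next = b * (1 + r) / (1 + g) - primary_balance
# All parameters are hypothetical and purely illustrative.
def project_debt(b0: float, r: float, g: float, primary_balance: float, years: int) -> list[float]:
    path = [b0]
    for _ in range(years):
        path.append(path[-1] * (1 + r) / (1 + g) - primary_balance)
    return path

calm   = project_debt(b0=1.00, r=0.03, g=0.03, primary_balance=0.00, years=5)
stress = project_debt(b0=1.00, r=0.07, g=0.00, primary_balance=-0.02, years=5)  # 2%-of-GDP deficit
print([round(x, 3) for x in calm])    # roughly flat debt ratio
print([round(x, 3) for x in stress])  # debt ratio climbs quickly once yields jump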
the EMU reports of Pierre Werner and President Jacques Delors. It was established on 1 June 1998. The first President of the Bank was Wim Duisenberg, the former president of the Dutch central bank and of the European Monetary Institute. While Duisenberg had been the head of the EMI (taking over from Alexandre Lamfalussy of Belgium) just before the ECB came into existence, the French government wanted Jean-Claude Trichet, former head of the French central bank, to be the ECB's first president. The French argued that since the ECB was to be located in Germany, its president should be French. This was opposed by the German, Dutch and Belgian governments, who saw Duisenberg as a guarantor of a strong euro. Tensions were abated by a gentleman's agreement under which Duisenberg would stand down before the end of his mandate, to be replaced by Trichet. Trichet replaced Duisenberg as president in November 2003. Until 2007, the ECB very successfully managed to maintain inflation close to, but below, 2%. The ECB's response to the financial crises (2008–2014) The European Central Bank underwent a deep internal transformation as it faced the global financial crisis and the Eurozone debt crisis. Early response to the Eurozone debt crisis The so-called European debt crisis began after Greece's newly elected government revealed the true level of the country's indebtedness and budget deficit and warned EU institutions of the imminent danger of a Greek sovereign default. Foreseeing a possible sovereign default in the eurozone, the general public, international and European institutions, and the financial community reassessed the economic situation and creditworthiness of some Eurozone member states, in particular Southern countries. Consequently, sovereign bond yields of several Eurozone countries started to rise sharply. This provoked a self-fulfilling panic on financial markets: the more Greek bond yields rose, the more likely a default appeared, and the further bond yields rose in turn. Trichet's reluctance to intervene This panic was also aggravated by the inability of the ECB to react and intervene in sovereign bond markets, for two reasons. First, the ECB's legal framework normally forbids the purchase of sovereign bonds (Article 123 TFEU). This prevented the ECB from implementing quantitative easing as the Federal Reserve and the Bank of England did as early as 2008, which played an important role in stabilizing markets. Secondly, a decision taken by the ECB in 2005 introduced a minimum credit rating (BBB-) for all Eurozone sovereign bonds to be eligible as collateral in the ECB's open market operations. This meant that if a private rating agency were to downgrade a sovereign bond below that threshold, many banks would suddenly become illiquid because they would lose access to ECB refinancing operations. According to Athanasios Orphanides, a former member of the ECB's Governing Council, this change in the ECB's collateral framework "planted the seed" of the euro crisis. Faced with those regulatory constraints, the ECB led by Jean-Claude Trichet in 2010 was reluctant to intervene to calm financial markets. Up until 6 May 2010, Trichet formally denied at several press conferences the possibility of the ECB embarking on sovereign bond purchases, even though Greece, Portugal, Spain and Italy faced waves of credit rating downgrades and increasing interest rate spreads.
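The self-reinforcing link between rising yields and falling bond prices that drove this panic can be illustrated with a plain present-value calculation. The bond's coupon, maturity and the yield levels in the sketch below are hypothetical.

# Price of a fixed-coupon bond (face value 100) at different market yields.
# Coupon, maturity and yields are hypothetical.
def bond_price(coupon_rate: float, yield_rate: float, years: int, face: float = 100.0) -> float:
    coupons = sum(coupon_rate * face / (1 + yield_rate) ** t for t in range(1, years + 1))
    return coupons + face / (1 + yield_rate) ** years

for y in (0.04, 0.08, 0.15):
    print(f"yield {y:.0%}: price {bond_price(0.05, y, years=10):.1f}")
# As investors demand higher yields, the market value of the outstanding debt drops sharply,
# which in turn worsens the perceived creditworthiness of the issuer.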
ECB's market interventions (2010–2011) In a remarkable u-turn, the ECB announced on 10 May 2010, the launch of a "Securities Market Programme" (SMP) which involved the discretionary purchase of sovereign bonds in secondary markets. Extraordinarily, the decision was taken by the Governing Council during a teleconference call only three days after the ECB's usual meeting of 6 May (when Trichet still denied the possibility of purchasing sovereign bonds). The ECB justified this decision by the necessity to "address severe tensions in financial markets." The decision also coincided with the EU leaders decision of 10 May to establish the European Financial Stabilisation mechanism, which would serve as a crisis fighting fund to safeguard the euro area from future sovereign debt crisis. The ECB's bond buying focused primarily on Spanish and Italian debt. They were intended to dampen international speculation against those countries, and thus avoid a contagion of the Greek crisis towards other Eurozone countries. The assumption is that speculative activity will decrease over time and the value of the assets increase. Although SMP did involve an injection of new money into financial markets, all ECB injections were "sterilized" through weekly liquidity absorption. So the operation was neutral for the overall money supply. In September 2011, ECB's Board member Jürgen Stark, resigned in protest against the "Securities Market Programme" which involved the purchase of sovereign bonds from Southern member states, a move that he considered as equivalent to monetary financing, which is prohibited by the EU Treaty. The Financial Times Deutschland referred to this episode as "the end of the ECB as we know it", referring to its hitherto perceived "hawkish" stance on inflation and its historical Deutsche Bundesbank influence. As of 18 June 2012, the ECB in total had spent €212.1bn (equal to 2.2% of the Eurozone GDP) for bond purchases covering outright debt, as part of the Securities Markets Programme. Controversially, the ECB made substantial profits out of SMP, which were largely redistributed to Eurozone countries. In 2013, the Eurogroup decided to refund those profits to Greece, however the payments were suspended over 2014 until 2017 over the conflict between Yanis Varoufakis and ministers of the Eurogroup. In 2018, profits refunds were reinstalled by the Eurogroup. However, several NGOs complained that a substantial part of the ECB profits would never be refunded to Greece. Role in the Troika (2010–2015) The ECB played a controversial role in the "Troika" by rejecting all forms of debt restructuring of public and private debts, forcing governments to adopt bailout programmes and structural reforms through secret letters to Italian, Spanish, Greek and Irish governments. It has further been accused of interfering in the Greek referendum of July 2015 by constraining liquidity to Greek commercial banks. In November 2010, it became clear that Ireland would not be able to afford to bail out its failing banks, and Anglo Irish Bank in particular which needed around 30 billion euros, a sum the government obviously could not borrow from financial markets when its bond yields were soaring to comparable levels with the Greek bonds. Instead, the government issued a 31bn EUR "promissory note" (an IOU) to Anglo – which it had nationalized. In turn, the bank supplied the promissory note as collateral to the Central Bank of Ireland, so it could access emergency liquidity assistance (ELA). 
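A minimal sketch of the "sterilisation" mentioned above: every euro of liquidity injected through bond purchases was meant to be offset by an equal amount absorbed from banks through fixed-term deposits, leaving the overall money supply unchanged. The amounts below are hypothetical.

# Hypothetical weekly sterilisation of SMP bond purchases.
bond_purchases_bn = 16.5          # liquidity injected by buying bonds (hypothetical)
fixed_term_deposits_bn = 16.5     # liquidity absorbed from banks the same week (hypothetical)

net_liquidity_change_bn = bond_purchases_bn - fixed_term_deposits_bn
print(net_liquidity_change_bn)    # 0.0 -> the operation is neutral for the money supply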
This way, Anglo was able to repay its bondholders. The operation became very controversial, as it basically shifted Anglo's private debts onto the government's balance sheet. It later became clear that the ECB played a key role in making sure the Irish government did not let Anglo default on its debts, in order to avoid financial instability risks. On 15 October and 6 November 2010, ECB President Jean-Claude Trichet sent two secret letters to the Irish finance minister which essentially informed the Irish government of the possible suspension of the ELA credit lines unless the government requested a financial assistance programme from the Eurogroup, on condition of further reforms and fiscal consolidation. Over 2012 and 2013, the ECB repeatedly insisted that the promissory note should be repaid in full, and until February 2013 refused the Government's proposal to swap the notes for a long-term (and less costly) bond. In addition, the ECB insisted that no debt restructuring (or bail-in) should be applied to the nationalized banks' bondholders, a measure which could have saved Ireland 8 billion euros. In April 2011, the ECB raised interest rates for the first time since 2008, from 1% to 1.25%, with a further increase to 1.50% in July 2011. However, in 2012–2013 the ECB sharply lowered interest rates to encourage economic growth, reaching a historic low of 0.25% in November 2013. Soon after, the rates were cut to 0.15%; then, on 4 September 2014, the central bank reduced the rate by two thirds, from 0.15% to 0.05%. The interest rates were subsequently reduced further, reaching 0.00%, the lowest rate on record. The European Central Bank was not ready to manage the money supply during the crisis of 2008 and therefore only started using the instrument of quantitative easing in 2015. In a report adopted on 13 March 2014, the European Parliament criticized the "potential conflict of interest between the current role of the ECB in the Troika as ‘technical advisor’ and its position as creditor of the four Member States, as well as its mandate under the Treaty". The report was led by Austrian right-wing MEP Othmar Karas and French Socialist MEP Liem Hoang Ngoc. The ECB's response under Mario Draghi (2012–2015) On 1 November 2011, Mario Draghi replaced Jean-Claude Trichet as President of the ECB. This change in leadership also marked the start of a new era in which the ECB became more and more interventionist and eventually ended the Eurozone sovereign debt crisis. Draghi's presidency started with the launch of a new round of 1% interest loans with a term of three years (36 months) – the Long-Term Refinancing Operations (LTRO). Under this programme, 523 banks tapped as much as €489.2 bn (US$640 bn). Observers were surprised by the volume of the loans when the programme was implemented. By far the largest amounts were tapped by banks in Greece, Ireland, Italy and Spain. Although these LTRO loans did not directly benefit EU governments, they effectively allowed banks to carry out a carry trade, lending the LTRO funds on to governments at an interest margin. The operation also facilitated the rollover of maturing bank debts in the first three months of 2012. "Whatever it takes" (26 July 2012) Facing renewed fears about sovereigns in the eurozone, Mario Draghi made a decisive speech in London, declaring that the ECB "...is ready to do whatever it takes to preserve the Euro. And believe me, it will be enough."
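The carry trade described above is essentially a spread calculation: banks borrowed three-year LTRO funds at 1% and could reinvest them in higher-yielding sovereign bonds. The bond yield and the amount in the sketch below are hypothetical; only the 1% funding rate comes from the programme described above.

# Hypothetical LTRO carry trade: borrow from the ECB at 1%, buy sovereign bonds yielding more.
ltro_rate = 0.01          # cost of ECB funding (per the programme described above)
bond_yield = 0.05         # hypothetical yield on the sovereign bonds purchased
amount_bn = 10.0          # hypothetical amount borrowed, in billions of euros

annual_margin_bn = amount_bn * (bond_yield - ltro_rate)
print(f"carry of {annual_margin_bn:.1f} bn EUR per year on {amount_bn:.0f} bn borrowed")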
In light of slow political progress on solving the eurozone crisis, Draghi's statement has been seen as a key turning point, as it was immediately welcomed by European leaders and led to a steady decline in bond yields for eurozone countries, in particular Spain, Italy and France. Following up on Draghi's speech, on 6 September 2012 the ECB announced the Outright Monetary Transactions programme (OMT). Unlike the previous SMP programme, OMT has no ex-ante time or size limit. However, the activation of the purchases remains conditional on the benefiting country's adherence to an adjustment programme with the ESM. The programme was adopted with near unanimity, Bundesbank president Jens Weidmann being the sole member of the ECB's Governing Council to vote against it. Even though OMT has never actually been implemented to date, it made the "Whatever it takes" pledge credible, contributed significantly to stabilizing financial markets and ended the sovereign debt crisis. According to various sources, the OMT programme and the "whatever it takes" speech were made possible because EU leaders had previously agreed to build the banking union. Low inflation and quantitative easing (2015–2019) In November 2014, the bank moved into its new premises, while the Eurotower building was dedicated to hosting the newly established supervisory activities of the ECB under the Single Supervisory Mechanism. Although the sovereign debt crisis was largely resolved by 2014, the ECB started to face a repeated decline in the Eurozone inflation rate, indicating that the economy was heading towards deflation. Responding to this threat, the ECB announced on 4 September 2014 the launch of two bond-buying programmes: the Covered Bond Purchase Programme (CBPP3) and the Asset-Backed Securities Purchase Programme (ABSPP). On 22 January 2015, the ECB announced an extension of those programmes into a full-fledged "quantitative easing" programme which also included sovereign bonds, to the tune of 60 billion euros per month until at least September 2016. The programme was started on 9 March 2015. On 8 June 2016, the ECB added corporate bonds to its asset purchase portfolio with the launch of the corporate sector purchase programme (CSPP). Under this programme, it conducted net purchases of corporate bonds until January 2019, reaching about €177 billion. The programme was then halted for 11 months, and the ECB restarted net purchases in November 2019. As of 2021, the size of the ECB's quantitative easing programme had reached 2,947 billion euros. Christine Lagarde's era (2019– ) In July 2019, EU leaders nominated Christine Lagarde to replace Mario Draghi as ECB President. Lagarde resigned from her position as managing director of the International Monetary Fund in July 2019 and formally took over the ECB's presidency on 1 November 2019. Lagarde immediately signaled a change of style in the ECB's leadership. She embarked the ECB on a strategic review of its monetary policy strategy, an exercise the ECB had not undertaken for 17 years. As part of this exercise, Lagarde committed the ECB to look into how monetary policy could contribute to addressing climate change, and promised that "no stone would be left unturned". The ECB president also adopted a change of communication style, in particular in her use of social media to promote gender equality, and by opening a dialogue with civil society stakeholders.
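As a rough order-of-magnitude check on the asset-purchase figures above, monthly purchases of 60 billion euros from March 2015 through September 2016 sum to a little over one trillion euros before later extensions; the month count used below is an assumption made purely for illustration.

# Back-of-the-envelope total for the initial QE phase (assumed 19 months, Mar 2015 - Sep 2016).
monthly_purchases_bn = 60
months = 19
print(monthly_purchases_bn * months)  # 1140 billion euros, before later extensions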
Response to the COVID-19 crisis However, Lagarde's ambitions were quickly slowed by the outbreak of the COVID-19 pandemic. In March 2020, the ECB responded quickly and boldly by launching a package of measures including a new asset purchase programme: the €1,350 billion Pandemic Emergency Purchase Programme (PEPP), which aimed to lower borrowing costs and increase lending in the euro area. The PEPP was extended to cover an additional €500 billion in December 2020. The ECB also relaunched TLTRO loans to banks at historically low rates and with record-high take-up (EUR 1.3 trillion in June 2020). Lending by banks to SMEs was also facilitated by collateral easing measures and other supervisory relaxations. The ECB also reactivated currency swap lines and enhanced existing swap lines with central banks across the globe. Strategy Review As a consequence of the COVID-19 crisis, the ECB extended the duration of the strategy review until September 2021. On 13 July 2021, the ECB presented the outcomes of the strategy review, with the following main announcements: the ECB announced a new inflation target of 2%, replacing its "close but below two percent" inflation target, and made it clear that it could overshoot the target under certain circumstances; it announced that it would try to incorporate the cost of housing (imputed rents) into its inflation measurement; it announced an action plan on climate change; and it said it would carry out another strategy review in 2025. Mandate and inflation target Unlike many other central banks, the ECB does not have a dual mandate under which it must pursue two equally important objectives, such as price stability and full employment (as the US Federal Reserve System does). The ECB has only one primary objective – price stability – subject to which it may pursue secondary objectives. Primary mandate The primary objective of the European Central Bank, set out in Article 127(1) of the Treaty on the Functioning of the European Union, is to maintain price stability within the Eurozone. However, the EU Treaties do not specify exactly how the ECB should pursue this objective. The European Central Bank has ample discretion over the way it pursues its price stability objective, as it can decide on the inflation target itself and may also influence the way inflation is measured. In October 1998 the Governing Council defined price stability as inflation of under 2%, "a year-on-year increase in the Harmonised Index of Consumer Prices (HICP) for the euro area of below 2%", and added that price stability "was to be maintained over the medium term". In May 2003, following a thorough review of the ECB's monetary policy strategy, the Governing Council clarified that "in the pursuit of price stability, it aims to maintain inflation rates below, but close to, 2% over the medium term". Since 2016, the European Central Bank's president has further adjusted
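Since the mandate is phrased in terms of year-on-year HICP inflation, the measurement itself is a simple ratio of index levels twelve months apart. The index values in the sketch below are hypothetical; the 2% figure is the target announced in the July 2021 strategy review described above.

# Year-on-year HICP inflation from two hypothetical index levels.
hicp_now = 107.1          # hypothetical index level this month
hicp_year_ago = 105.0     # hypothetical index level 12 months earlier
target = 0.02             # 2% target announced in July 2021

inflation = hicp_now / hicp_year_ago - 1
print(f"HICP inflation: {inflation:.2%} (target {target:.0%})")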
values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, they suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment. This is analogous to the rotation of the Earth on its axis as it orbits the Sun. The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting. Quantum mechanics In his 1924 dissertation (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter can be represented as a de Broglie wave in the manner of light. That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment. The wave-like nature of light is displayed, for example, when a beam of light is passed through parallel slits thereby creating interference patterns. In 1927, George Paget Thomson discovered the interference effect was produced when a beam of electrons was passed through thin metal foils and by American physicists Clinton Davisson and Lester Germer by the reflection of electrons from a crystal of nickel. De Broglie's prediction of a wave nature for electrons led Erwin Schrödinger to postulate a wave equation for electrons moving under the influence of the nucleus in the atom. In 1926, this equation, the Schrödinger equation, successfully described how electron waves propagated. Rather than yielding a solution that determined the location of an electron over time, this wave equation also could be used to predict the probability of finding an electron near a position, especially a position near where the electron was bound in space, for which the electron wave equations did not change in time. This approach led to a second formulation of quantum mechanics (the first by Heisenberg in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those that had been derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum. Once spin and the interaction between multiple electrons were describable, quantum mechanics made it possible to predict the configuration of electrons in atoms with atomic numbers greater than hydrogen. In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron – the Dirac equation, consistent with relativity theory, by applying relativistic and symmetry considerations to the hamiltonian formulation of the quantum mechanics of the electro-magnetic field. In order to resolve some problems within his relativistic equation, Dirac developed in 1930 a model of the vacuum as an infinite sea of particles with negative energy, later dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron. This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons negatons and using electron as a generic term to describe both the positively and negatively charged variants. 
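A quick numerical sketch of de Broglie's relation, lambda = h / p, for a non-relativistic electron: the chosen speed is arbitrary and the constants are rounded values used only for illustration.

# de Broglie wavelength of an electron, lambda = h / (m * v), non-relativistic.
H = 6.626e-34        # Planck constant, J*s (rounded)
M_E = 9.109e-31      # electron mass, kg (rounded)

def de_broglie_wavelength(speed_m_per_s: float) -> float:
    return H / (M_E * speed_m_per_s)

v = 2.2e6            # arbitrary illustrative speed, m/s (about 1% of the speed of light)
print(f"{de_broglie_wavelength(v):.2e} m")   # on the order of 3e-10 m, i.e. atomic scale

Wavelengths of this order are comparable to atomic spacings, which is why electron beams produce interference patterns when passed through thin metal foils or crystals, as described above.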
In 1947, Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference came to be called the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called anomalous magnetic dipole moment of the electron. This difference was later explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s. Particle accelerators With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles. The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst. His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric. This radiation was caused by the acceleration of electrons through a magnetic field as they moved near the speed of light. With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968. This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron. The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics. Confinement of individual electrons Individual electrons can now be easily confined in ultra small (, ) CMOS transistors operated at cryogenic temperature over a range of −269 °C (4 K) to about −258 °C (15 K). The electron wavefunction spreads in a semiconductor lattice and negligibly interacts with the valence band electrons, so it can be treated in the single particle formalism, by replacing its mass with the effective mass tensor. Characteristics Classification In the Standard Model of particle physics, electrons belong to the group of subatomic particles called leptons, which are believed to be fundamental or elementary particles. Electrons have the lowest mass of any charged lepton (or electrically charged particle of any type) and belong to the first-generation of fundamental particles. The second and third generation contain charged leptons, the muon and the tau, which are identical to the electron in charge, spin and interactions, but are more massive. Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. All members of the lepton group are fermions, because they all have half-odd integer spin; the electron has spin . Fundamental properties The invariant mass of an electron is approximately kilograms, or atomic mass units. Due to mass–energy equivalence, this corresponds to a rest energy of 0.511 MeV. The ratio between the mass of a proton and that of an electron is about 1836. Astronomical measurements show that the proton-to-electron mass ratio has held the same value, as is predicted by the Standard Model, for at least half the age of the universe. 
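The claim that a collider "effectively doubles" the usable energy compared with a fixed target can be checked with the invariant-mass formulas: two equal head-on beams of energy E give a centre-of-mass energy of 2E, while a beam striking an electron at rest gives only about sqrt(2 E m_e c^2) at high energy. The beam energy below is an approximate figure consistent with the 209 GeV LEP value quoted above; the code itself is an illustrative sketch.

import math

M_E_C2_GEV = 0.000511          # electron rest energy in GeV

def ecm_collider(beam_energy_gev: float) -> float:
    """Centre-of-mass energy for two equal, head-on beams."""
    return 2 * beam_energy_gev

def ecm_fixed_target(beam_energy_gev: float) -> float:
    """Centre-of-mass energy for a beam hitting an electron at rest (two-body formula)."""
    return math.sqrt(2 * beam_energy_gev * M_E_C2_GEV + 2 * M_E_C2_GEV**2)

e_beam = 104.5                 # GeV per beam, approximately LEP's final running energy
print(f"collider:     {ecm_collider(e_beam):.1f} GeV")      # about 209 GeV
print(f"fixed target: {ecm_fixed_target(e_beam):.2f} GeV")  # well below 1 GeV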
Electrons have an electric charge of coulombs, which is used as a standard unit of charge for subatomic particles, and is also called the elementary charge. Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign. As the symbol e is used for the elementary charge, the electron is commonly symbolized by , where the minus sign indicates the negative charge. The positron is symbolized by because it has the same properties as the electron but with a positive rather than negative charge. The electron has an intrinsic angular momentum or spin of . This property is usually stated by referring to the electron as a spin- particle. For such particles the spin magnitude is , while the result of the measurement of a projection of the spin on any axis can only be ±. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis. It is approximately equal to one Bohr magneton, which is a physical constant equal to . The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity. The electron has no known substructure. Nevertheless, in condensed matter physics, spin–charge separation can occur in some materials. In such cases, electrons 'split' into three independent particles, the spinon, the orbiton and the holon (or chargon). The electron can always be theoretically considered as a bound state of the three, with the spinon carrying the spin of the electron, the orbiton carrying the orbital degree of freedom and the chargon carrying the charge, but in certain conditions they can behave as independent quasiparticles. The issue of the radius of the electron is a challenging problem of modern theoretical physics. The admission of the hypothesis of a finite radius of the electron is incompatible to the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity. Observation of a single electron in a Penning trap suggests the upper limit of the particle's radius to be 10−22 meters. The upper bound of the electron radius of 10−18 meters can be derived using the uncertainty relation in energy. There is also a physical constant called the "classical electron radius", with the much larger value of , greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron. There are elementary particles that spontaneously decay into less massive particles. An example is the muon, with a mean lifetime of seconds, which decays into an electron, a muon neutrino and an electron antineutrino. The electron, on the other hand, is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation. The experimental lower bound for the electron's mean lifetime is years, at a 90% confidence level. Quantum properties As with all particles, electrons can act as waves. This is called the wave–particle duality and can be demonstrated using the double-slit experiment. The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. 
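Two of the numbers quoted above follow directly from the constants involved: the 0.511 MeV rest energy is m_e c^2, and the "classical electron radius" comes from equating that rest energy with the electrostatic self-energy e^2 / (4 pi epsilon_0 r). The sketch below uses rounded constants and is purely illustrative.

import math

M_E = 9.109e-31       # electron mass, kg (rounded)
C = 2.998e8           # speed of light, m/s (rounded)
E_CHARGE = 1.602e-19  # elementary charge, C (rounded)
EPS0 = 8.854e-12      # vacuum permittivity, F/m (rounded)

rest_energy_j = M_E * C**2
rest_energy_mev = rest_energy_j / E_CHARGE / 1e6
classical_radius = E_CHARGE**2 / (4 * math.pi * EPS0 * M_E * C**2)

print(f"rest energy: {rest_energy_mev:.3f} MeV")               # about 0.511 MeV
print(f"classical electron radius: {classical_radius:.2e} m")  # about 2.8e-15 m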
In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi (ψ). When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location—a probability density. Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to the state of the system. The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, , where the variables r1 and r2 correspond to the first and second electrons, respectively. Since the absolute value is not changed by a sign swap, this corresponds to equal probabilities. Bosons, such as the photon, have symmetric wave functions instead. In the case of antisymmetry, solutions of the wave equation for interacting electrons result in a zero probability that each pair will occupy the same location or state. This is responsible for the Pauli exclusion principle, which precludes any two electrons from occupying the same quantum state. This principle explains many of the properties of electrons. For example, it causes groups of bound electrons to occupy different orbitals in an atom, rather than all overlapping each other in the same orbit. Virtual particles In a simplified picture, which often tends to give the wrong idea but may serve to illustrate some aspects, every photon spends some time as a combination of a virtual electron plus its antiparticle, the virtual positron, which rapidly annihilate each other shortly thereafter. The combination of the energy variation needed to create these particles, and the time during which they exist, fall under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ. In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, . Thus, for a virtual electron, Δt is at most . While an electron–positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity more than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron. This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator. Virtual particles cause a comparable shielding effect for the mass of the electron. The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment). The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics. 
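As a rough numerical sketch of the energy-time uncertainty bound discussed above (the passage's exact figure for the maximum lifetime of a virtual electron is truncated), one can take the borrowed energy of a virtual electron-positron pair to be of order 2 m_e c^2 and bound its lifetime by hbar divided by that energy; the choice of borrowed energy is an assumption made only for illustration.

HBAR = 1.055e-34      # reduced Planck constant, J*s (rounded)
M_E = 9.109e-31       # electron mass, kg (rounded)
C = 2.998e8           # speed of light, m/s (rounded)

delta_e = 2 * M_E * C**2          # assumed borrowed energy: rest energy of the virtual pair
delta_t_max = HBAR / delta_e      # order-of-magnitude lifetime allowed by the uncertainty relation
print(f"{delta_t_max:.1e} s")     # roughly 6e-22 s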
The apparent paradox in classical physics of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons can heuristically be thought of as causing the electron to shift about in a jittery fashion (known as zitterbewegung), which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron. In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines. The Compton Wavelength shows that near elementary particles such as the electron, the uncertainty of the energy allows for the creation of virtual particles near the electron. This wavelength explains the "static" of virtual particles around elementary particles at a close distance. Interaction An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force in nonrelativistic approximation is determined by Coulomb's inverse square law. When an electron is in motion, it generates a magnetic field. The Ampère-Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. This property of induction supplies the magnetic field that drives an electric motor. The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic). When an electron is moving through a magnetic field, it is subject to the Lorentz force that acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called
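The radius of the helical motion mentioned at the end of the passage (the gyroradius) follows from balancing the Lorentz force against the centripetal force, giving r = gamma m v_perp / (e B) for a relativistic electron. The field strength and speed below are arbitrary illustrative values, and the sketch assumes the motion is purely perpendicular to the field.

import math

M_E = 9.109e-31       # electron mass, kg (rounded)
E_CHARGE = 1.602e-19  # elementary charge, C (rounded)
C = 2.998e8           # speed of light, m/s (rounded)

def gyroradius(v_perp: float, b_field_tesla: float) -> float:
    """Radius of the electron's circular motion perpendicular to a uniform magnetic field."""
    gamma = 1.0 / math.sqrt(1.0 - (v_perp / C) ** 2)
    return gamma * M_E * v_perp / (E_CHARGE * b_field_tesla)

print(f"{gyroradius(v_perp=1.0e7, b_field_tesla=0.01):.2e} m")  # arbitrary example values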
them. Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics. In 1919, the American chemist Irving Langmuir elaborated on the Lewis's static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness". In turn, he divided the shells into a number of cells each of which contained one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table, which were known to largely repeat themselves according to the periodic law. In 1924, Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained by a set of four parameters that defined every quantum energy state, as long as each state was occupied by no more than a single electron. This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle. The physical mechanism to explain the fourth parameter, which had two distinct possible values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, they suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment. This is analogous to the rotation of the Earth on its axis as it orbits the Sun. The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting. Quantum mechanics In his 1924 dissertation (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter can be represented as a de Broglie wave in the manner of light. That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment. The wave-like nature of light is displayed, for example, when a beam of light is passed through parallel slits thereby creating interference patterns. In 1927, George Paget Thomson discovered the interference effect was produced when a beam of electrons was passed through thin metal foils and by American physicists Clinton Davisson and Lester Germer by the reflection of electrons from a crystal of nickel. De Broglie's prediction of a wave nature for electrons led Erwin Schrödinger to postulate a wave equation for electrons moving under the influence of the nucleus in the atom. In 1926, this equation, the Schrödinger equation, successfully described how electron waves propagated. Rather than yielding a solution that determined the location of an electron over time, this wave equation also could be used to predict the probability of finding an electron near a position, especially a position near where the electron was bound in space, for which the electron wave equations did not change in time. 
This approach led to a second formulation of quantum mechanics (the first by Heisenberg in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those that had been derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum. Once spin and the interaction between multiple electrons were describable, quantum mechanics made it possible to predict the configuration of electrons in atoms with atomic numbers greater than hydrogen. In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron – the Dirac equation, consistent with relativity theory, by applying relativistic and symmetry considerations to the hamiltonian formulation of the quantum mechanics of the electro-magnetic field. In order to resolve some problems within his relativistic equation, Dirac developed in 1930 a model of the vacuum as an infinite sea of particles with negative energy, later dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron. This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons negatons and using electron as a generic term to describe both the positively and negatively charged variants. In 1947, Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference came to be called the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called anomalous magnetic dipole moment of the electron. This difference was later explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s. Particle accelerators With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles. The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst. His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric. This radiation was caused by the acceleration of electrons through a magnetic field as they moved near the speed of light. With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968. This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron. The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics. Confinement of individual electrons Individual electrons can now be easily confined in ultra small (, ) CMOS transistors operated at cryogenic temperature over a range of −269 °C (4 K) to about −258 °C (15 K). 
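The advantage of colliding beams over a static target, noted above for ADONE, can be illustrated with a rough comparison of the available centre-of-mass energy; the beam energies below are illustrative values, not historical parameters.

    import math

    MEC2 = 0.000511   # electron rest energy, GeV

    def cm_energy_collider(beam_energy_GeV):
        # Two beams of equal energy E meeting head-on: sqrt(s) = 2*E
        return 2.0 * beam_energy_GeV

    def cm_energy_fixed_target(beam_energy_GeV):
        # Beam of total energy E striking an electron at rest:
        # s = 2*m*c^2*E + 2*(m*c^2)^2
        return math.sqrt(2.0 * MEC2 * beam_energy_GeV + 2.0 * MEC2 ** 2)

    print(cm_energy_collider(1.5))        # 3.0 GeV available in the collision
    print(cm_energy_fixed_target(1.5))    # only ~0.04 GeV for the same beam on a static target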
The electron wavefunction spreads in a semiconductor lattice and negligibly interacts with the valence band electrons, so it can be treated in the single-particle formalism by replacing its mass with the effective mass tensor. Characteristics Classification In the Standard Model of particle physics, electrons belong to the group of subatomic particles called leptons, which are believed to be fundamental or elementary particles. Electrons have the lowest mass of any charged lepton (or electrically charged particle of any type) and belong to the first generation of fundamental particles. The second and third generations contain charged leptons, the muon and the tau, which are identical to the electron in charge, spin and interactions, but are more massive. Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. All members of the lepton group are fermions, because they all have half-odd integer spin; the electron has spin 1/2. Fundamental properties The invariant mass of an electron is approximately 9.109×10⁻³¹ kilograms, or about 5.49×10⁻⁴ atomic mass units. Due to mass–energy equivalence, this corresponds to a rest energy of 0.511 MeV. The ratio between the mass of a proton and that of an electron is about 1836. Astronomical measurements show that the proton-to-electron mass ratio has held the same value, as is predicted by the Standard Model, for at least half the age of the universe. Electrons have an electric charge of −1.602×10⁻¹⁹ coulombs, which is used as a standard unit of charge for subatomic particles, and is also called the elementary charge. Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign. As the symbol e is used for the elementary charge, the electron is commonly symbolized by e⁻, where the minus sign indicates the negative charge. The positron is symbolized by e⁺ because it has the same properties as the electron but with a positive rather than negative charge. The electron has an intrinsic angular momentum or spin of ħ/2. This property is usually stated by referring to the electron as a spin-1/2 particle. For such particles the spin magnitude is (√3/2)ħ, while the result of the measurement of a projection of the spin on any axis can only be ±ħ/2. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis. It is approximately equal to one Bohr magneton, which is a physical constant equal to about 9.274×10⁻²⁴ J/T. The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity. The electron has no known substructure. Nevertheless, in condensed matter physics, spin–charge separation can occur in some materials. In such cases, electrons 'split' into three independent particles, the spinon, the orbiton and the holon (or chargon). The electron can always be theoretically considered as a bound state of the three, with the spinon carrying the spin of the electron, the orbiton carrying the orbital degree of freedom and the chargon carrying the charge, but in certain conditions they can behave as independent quasiparticles. The issue of the radius of the electron is a challenging problem of modern theoretical physics. The admission of the hypothesis of a finite radius of the electron is incompatible with the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity.
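The simplistic classical estimate of an electron "radius", obtained by equating the electrostatic self-energy of the charge to the rest energy, can be sketched as follows; this is an illustrative calculation with rounded constants, not a statement about electron structure.

    import math

    e = 1.602e-19      # elementary charge, C
    eps0 = 8.854e-12   # vacuum permittivity, F/m
    m_e = 9.109e-31    # electron mass, kg
    c = 2.998e8        # speed of light, m/s

    # r_e = e^2 / (4*pi*eps0 * m_e * c^2): the radius at which the electrostatic
    # self-energy of the charge would equal the electron's rest energy.
    r_e = e ** 2 / (4.0 * math.pi * eps0 * m_e * c ** 2)
    print(r_e)   # ~2.8e-15 m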
Observation of a single electron in a Penning trap suggests the upper limit of the particle's radius to be 10⁻²² meters. The upper bound of the electron radius of 10⁻¹⁸ meters can be derived using the uncertainty relation in energy. There is also a physical constant called the "classical electron radius", with the much larger value of 2.8179×10⁻¹⁵ m, greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron. There are elementary particles that spontaneously decay into less massive particles. An example is the muon, with a mean lifetime of 2.2×10⁻⁶ seconds, which decays into an electron, a muon neutrino and an electron antineutrino. The electron, on the other hand, is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation. The experimental lower bound for the electron's mean lifetime is 6.6×10²⁸ years, at a 90% confidence level. Quantum properties As with all particles, electrons can act as waves. This is called the wave–particle duality and can be demonstrated using the double-slit experiment. The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi (ψ). When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location—a probability density. Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to the state of the system. The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, ψ(r1, r2) = −ψ(r2, r1), where the variables r1 and r2 correspond to the first and second electrons, respectively. Since the absolute value is not changed by a sign swap, this corresponds to equal probabilities. Bosons, such as the photon, have symmetric wave functions instead. In the case of antisymmetry, solutions of the wave equation for interacting electrons result in a zero probability that each pair will occupy the same location or state. This is responsible for the Pauli exclusion principle, which precludes any two electrons from occupying the same quantum state. This principle explains many of the properties of electrons. For example, it causes groups of bound electrons to occupy different orbitals in an atom, rather than all overlapping each other in the same orbit. Virtual particles In a simplified picture, which often tends to give the wrong idea but may serve to illustrate some aspects, every photon spends some time as a combination of a virtual electron plus its antiparticle, the virtual positron, which rapidly annihilate each other shortly thereafter. The combination of the energy variation needed to create these particles, and the time during which they exist, fall under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ.
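As a rough illustration of this bound, the time available to a fluctuation whose energy equals one electron rest energy is of order ħ divided by that energy; the sketch below uses rounded constants.

    hbar = 1.055e-34   # reduced Planck constant, J*s
    e = 1.602e-19      # joules per electronvolt
    mec2_eV = 0.511e6  # electron rest energy, eV

    delta_E = mec2_eV * e        # energy of one electron rest mass, J
    delta_t = hbar / delta_E     # longest time such a fluctuation can persist
    print(delta_t)               # ~1.3e-21 s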
In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, ħ ≈ 6.6×10⁻¹⁶ eV·s. Thus, for a virtual electron, Δt is at most 1.3×10⁻²¹ s. While an electron–positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity more than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron. This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator. Virtual particles cause a comparable shielding effect for the mass of the electron. The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment). The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics. The apparent paradox in classical physics of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons can heuristically be thought of as causing the electron to shift about in a jittery fashion (known as zitterbewegung), which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron. In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines. The Compton wavelength shows that near elementary particles such as the electron, the uncertainty of the energy allows for the creation of virtual particles near the electron. This wavelength explains the "static" of virtual particles around elementary particles at a close distance. Interaction An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force in nonrelativistic approximation is determined by Coulomb's inverse square law. When an electron is in motion, it generates a magnetic field. The Ampère-Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. This property of induction supplies the magnetic field that drives an electric motor. The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic). When an electron is moving through a magnetic field, it is subject to the Lorentz force that acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation.
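A minimal sketch of the gyroradius in the non-relativistic limit, r = m·v⊥/(e·B); the speed and field strength below are arbitrary illustrative values.

    m_e = 9.109e-31    # electron mass, kg
    e = 1.602e-19      # elementary charge, C

    def gyroradius(v_perp, B):
        # Radius of the circular component of the helix for a velocity
        # component v_perp (m/s) perpendicular to a field B (tesla).
        return m_e * v_perp / (e * B)

    print(gyroradius(1.0e6, 0.01))   # ~5.7e-4 m for 10^6 m/s across a 0.01 T field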
The energy emission in turn causes a recoil of the electron, known as the Abraham–Lorentz–Dirac force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself. Photons mediate electromagnetic interactions between particles in quantum electrodynamics. An isolated electron at a constant velocity cannot emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. This exchange of virtual photons, for example, generates the Coulomb force. Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The acceleration of the electron results in the emission of bremsstrahlung radiation. An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift. The maximum magnitude of this wavelength shift is h/mec, which is known as the Compton wavelength. For an electron, it has a value of 2.43×10⁻¹² m. When the wavelength of the light is long (for instance, the wavelength of the visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. Such interaction between the light and free electrons is called Thomson scattering or linear Thomson scattering. The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine-structure constant. This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of attraction (or repulsion) at a separation of one Compton wavelength, and the rest energy of the charge. It is given by α ≈ 7.297×10⁻³, which is approximately equal to 1/137. When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV. On the other hand, a high-energy photon can transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus. In the theory of electroweak interaction, the left-handed component of the electron's wavefunction forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a W boson and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. Both the electron and electron neutrino can undergo a neutral current interaction via a Z⁰ exchange, and this is responsible for neutrino-electron elastic scattering. Atoms and molecules An electron can be bound to the nucleus of an atom by the attractive Coulomb force. A system of one or more electrons bound to a nucleus is called an atom. If the number of electrons is different from the nucleus's electrical charge, such an atom is called an ion. The wave-like behavior of a bound electron is described by a function called an atomic orbital.
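Because the orbital energies of a bound electron are discrete, transitions between them emit or absorb photons of definite energy. A minimal sketch using the idealized hydrogen-atom (Bohr/Rydberg) levels, intended only as an illustration rather than a general atomic-structure calculation:

    h = 4.136e-15      # Planck constant, eV*s
    c = 2.998e8        # speed of light, m/s
    RY = 13.606        # hydrogen ground-state binding energy, eV

    def photon_wavelength_nm(n_upper, n_lower):
        # Energy released when the electron drops between Bohr levels,
        # E_n = -RY / n^2, converted to a photon wavelength lambda = h*c / E.
        delta_E = RY * (1.0 / n_lower ** 2 - 1.0 / n_upper ** 2)
        return h * c / delta_E * 1e9

    print(photon_wavelength_nm(3, 2))   # ~656 nm, the red Balmer-alpha line of hydrogen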
Each orbital has its own set of quantum numbers such as energy, angular momentum and projection of angular momentum, and only a discrete set of these orbitals exist around the nucleus. According to the Pauli exclusion principle each orbital can be occupied by up to two electrons, which must differ in their spin quantum number. Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential. Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect. To escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron. The orbital angular momentum of electrons is quantized. Because the electron is charged, it produces an orbital magnetic moment that is proportional to the angular momentum. The net magnetic moment of an atom is equal to the vector sum of orbital and spin magnetic moments of all electrons and the nucleus. The magnetic moment of the nucleus is negligible compared with that of the electrons. The magnetic moments of the electrons that occupy the same orbital (so called, paired electrons) cancel each other out. The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics. The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules. Within a molecule, electrons move under the influence of several nuclei, and occupy molecular orbitals; much as they can occupy atomic orbitals in isolated atoms. A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distribution of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. By contrast, in non-bonded pairs electrons are distributed in a large volume around nuclei. Conductivity If a body has more or fewer electrons than are required to balance the positive charge of the nuclei, then that object has a net electric charge. When there is an excess of electrons, the object is said to be negatively charged. When there are fewer electrons than the number of protons in nuclei, the object is said to be positively charged. When the number of electrons and the number of protons are equal, their charges cancel each other and the object is said to be electrically neutral. A macroscopic body can develop an electric charge through rubbing, by the triboelectric effect. Independent electrons moving in vacuum are termed free electrons. Electrons in metals also behave as if they were free. In reality the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasiparticles, which have the same electrical charge, spin, and magnetic moment as real electrons but might have a different mass. When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. 
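The current carried by such free or delocalized electrons can be related to their average drift speed via v = I/(n·e·A). A rough sketch for a copper wire, where the conduction-electron density is an assumed textbook value and the current and diameter are illustrative:

    import math

    e = 1.602e-19      # elementary charge, C
    n = 8.5e28         # conduction electrons per m^3 in copper (assumed textbook value)

    def drift_velocity(current_A, wire_diameter_m):
        area = math.pi * (wire_diameter_m / 2.0) ** 2
        return current_A / (n * e * area)    # v = I / (n * e * A)

    print(drift_velocity(2.5, 0.5e-3))   # ~1e-3 m/s, i.e. about a millimetre per second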
Likewise a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell's equations. At a given temperature, each material has an electrical conductivity that determines the value of electric current when an electric potential is applied. Examples of good conductors include metals such as copper and gold, whereas glass and Teflon are poor conductors. In any dielectric material, the electrons remain bound to their respective atoms and the material behaves as an insulator. Most semiconductors have a variable level of conductivity that lies between the extremes of conduction and insulation. On the other hand, metals have an electronic band structure containing partially filled electronic bands. The presence of such bands allows electrons in metals to behave as if they were free or delocalized electrons. These electrons are not associated with specific atoms, so when an electric field is applied, they are free to move like a gas (called Fermi gas) through the material much like free electrons. Because of collisions between electrons and atoms, the drift velocity of electrons in a conductor is on the order of millimeters per second. However, the speed at which a change of current at one point in the material causes changes in currents in other parts of the material, the velocity of propagation, is typically about 75% of light speed. This occurs because electrical signals propagate as a wave, with the velocity dependent on the dielectric constant of the material. Metals make relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann–Franz law, which states that the ratio of thermal conductivity to the electrical conductivity is proportional to the temperature. The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for electric current. When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electric current, in a process known as superconductivity. In BCS theory, pairs of electrons called Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance. (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.) However, the mechanism by which higher temperature superconductors operate remains uncertain. Electrons inside conducting solids, which are quasi-particles themselves, when tightly confined at temperatures close to absolute zero, behave as though they had split into three other quasiparticles: spinons, orbitons and holons. The former carries spin and magnetic moment, the next carries its orbital location while the latter electrical charge. Motion and energy According to Einstein's theory of special relativity, as an electron's speed approaches the
+2 state has an electron configuration 4f7 because the half-filled f-shell provides more stability. In terms of size and coordination number, europium(II) and barium(II) are similar. The sulfates of both barium and europium(II) are also highly insoluble in water. Divalent europium is a mild reducing agent, oxidizing in air to form Eu(III) compounds. In anaerobic, and particularly geothermal conditions, the divalent form is sufficiently stable that it tends to be incorporated into minerals of calcium and the other alkaline earths. This ion-exchange process is the basis of the "negative europium anomaly", the low europium content in many lanthanide minerals such as monazite, relative to the chondritic abundance. Bastnäsite tends to show less of a negative europium anomaly than does monazite, and hence is the major source of europium today. The development of easy methods to separate divalent europium from the other (trivalent) lanthanides made europium accessible even when present in low concentration, as it usually is. Isotopes Naturally occurring europium is composed of 2 isotopes, 151Eu and 153Eu, which occur in almost equal proportions; 153Eu is slightly more abundant (52.2% natural abundance). While 153Eu is stable, 151Eu was found to be unstable to alpha decay with a half-life of in 2007, giving about 1 alpha decay per two minutes in every kilogram of natural europium. This value is in reasonable agreement with theoretical predictions. Besides the natural radioisotope 151Eu, 35 artificial radioisotopes have been characterized, the most stable being 150Eu with a half-life of 36.9 years, 152Eu with a half-life of 13.516 years, and 154Eu with a half-life of 8.593 years. All the remaining radioactive isotopes have half-lives shorter than 4.7612 years, and the majority of these have half-lives shorter than 12.2 seconds. This element also has 8 meta states, with the most stable being 150mEu (t1/2=12.8 hours), 152m1Eu (t1/2=9.3116 hours) and 152m2Eu (t1/2=96 minutes). The primary decay mode for isotopes lighter than 153Eu is electron capture, and the primary mode for heavier isotopes is beta minus decay. The primary decay products before 153Eu are isotopes of samarium (Sm) and the primary products after are isotopes of gadolinium (Gd). Europium as a nuclear fission product Europium is produced by nuclear fission, but the fission product yields of europium isotopes are low near the top of the mass range for fission products. As with other lanthanides, many isotopes of europium, especially those that have odd mass numbers or are neutron-poor like 152Eu, have high cross sections for neutron capture, often high enough to be neutron poisons. 151Eu is the beta decay product of samarium-151, but since this has a long decay half-life and short mean time to neutron absorption, most 151Sm instead ends up as 152Sm. 152Eu (half-life 13.516 years) and 154Eu (half-life 8.593 years) cannot be beta decay products because 152Sm and 154Sm are non-radioactive, but 154Eu is the only long-lived "shielded" nuclide, other than 134Cs, to have a fission yield of more than 2.5 parts per million fissions. A larger amount of 154Eu is produced by neutron activation of a significant portion of the non-radioactive 153Eu; however, much of this is further converted to 155Eu. 155Eu (half-life 4.7612 years) has a fission yield of 330 parts per million (ppm) for uranium-235 and thermal neutrons; most of it is transmuted to non-radioactive and nonabsorptive gadolinium-156 by the end of fuel burnup. 
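The quoted rate of about one alpha decay per two minutes in a kilogram of natural europium can be checked with a short calculation; the 151Eu alpha half-life is taken here as roughly 5×10¹⁸ years, an assumed round value consistent with that rate.

    import math

    N_A = 6.022e23                 # Avogadro constant
    abundance_151 = 0.478          # natural abundance of 151Eu
    molar_mass_151 = 151.0         # g/mol
    half_life_s = 5e18 * 3.156e7   # assumed ~5e18-year alpha half-life, in seconds

    atoms_per_kg = 1000.0 * abundance_151 / molar_mass_151 * N_A
    activity = atoms_per_kg * math.log(2) / half_life_s   # decays per second per kg
    print(60.0 * activity)         # ~0.5 decays per minute, i.e. about one every two minutes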
Overall, europium is overshadowed by caesium-137 and strontium-90 as a radiation hazard, and by samarium and others as a neutron poison. Occurrence Europium is not found in nature as a free element. Many minerals contain europium, with the most important sources being bastnäsite, monazite, xenotime and loparite-(Ce). No europium-dominant minerals are known yet, despite a single find of a tiny possible Eu–O or Eu–O–C system phase in the Moon's regolith. Depletion or enrichment of europium in minerals relative to other rare-earth elements is known as the europium anomaly. Europium is commonly included in trace element studies in geochemistry and petrology to understand the processes that form igneous rocks (rocks that cooled from magma or lava). The nature of the europium anomaly found helps reconstruct the relationships within a suite of igneous rocks. The average crustal abundance of europium is 2–2.2 ppm. Divalent europium (Eu2+) in small amounts is the activator of the bright blue fluorescence of some samples of the mineral fluorite (CaF2). The reduction from Eu3+ to Eu2+ is induced by irradiation with energetic particles. The most outstanding examples of this originated around Weardale and adjacent parts of northern England; it was the fluorite found here that fluorescence was named after in 1852, although it was not until much later that europium was determined to be the cause. In astrophysics, the signature of europium in stellar spectra can be used to classify stars and inform theories of how or where a particular star was born. For instance, astronomers in 2019 identified higher-than-expected levels of europium within the star J1124+4535, hypothesizing that this star originated in a dwarf galaxy that collided with the Milky Way billions of years ago. Production Europium is associated with
the other rare-earth elements and is, therefore, mined together with them. Separation of the rare-earth elements occurs during later processing. Rare-earth elements are found in the minerals bastnäsite, loparite-(Ce), xenotime, and monazite in mineable quantities. Bastnäsite is a group of related fluorocarbonates, Ln(CO3)(F,OH). Monazite is a group of related orthophosphate minerals (Ln denotes a mixture of all the lanthanides except promethium), loparite-(Ce) is an oxide, and xenotime is an orthophosphate (Y,Yb,Er,...)PO4. Monazite also contains thorium and yttrium, which complicates handling because thorium and its decay products are radioactive. For the extraction from the ore and the isolation of individual lanthanides, several methods have been developed. The choice of method is based on the concentration and composition of the ore
After 1860, terbia was renamed erbia and after 1877 what had been known as erbia was renamed terbia. Fairly pure Er2O3 was independently isolated in 1905 by Georges Urbain and Charles James. Reasonably pure erbium metal was not produced until 1934 when Wilhelm Klemm and Heinrich Bommer reduced the anhydrous chloride with potassium vapor. It was only in the 1990s that the price for Chinese-derived erbium oxide became low enough for erbium to be considered for use as a colorant in art glass. Occurrence The concentration of erbium in the Earth crust is about 2.8 mg/kg and in the sea water 0.9 ng/L. This concentration is enough to make erbium about 45th in elemental abundance in the Earth's crust. Like other rare earths, this element is never found as a free element in nature but is found bound in monazite sand ores. It has historically been very difficult and expensive to separate rare earths from each other in their ores but ion-exchange chromatography methods developed in the late 20th century have greatly brought down the cost of production of all rare-earth metals and their chemical compounds. The principal commercial sources of erbium are from the minerals xenotime and euxenite, and most recently, the ion adsorption clays of southern China; in consequence, China has now become the principal global supplier of this element. In the high-yttrium versions of these ore concentrates, yttrium is about two-thirds of the total by weight, and erbia is about 4–5%. When the concentrate is dissolved in acid, the erbia liberates enough erbium ion to impart a distinct and characteristic pink color to the solution. This color behavior is similar to what Mosander and the other early workers in the lanthanides would have seen in their extracts from the gadolinite minerals of Ytterby. Production Crushed minerals are attacked by hydrochloric or sulfuric acid that transforms insoluble rare-earth oxides into soluble chlorides or sulfates. The acidic filtrates are partially neutralized with caustic soda (sodium hydroxide) to pH 3–4. Thorium precipitates out of solution as hydroxide and is removed. After that the solution is treated with ammonium oxalate to convert rare earths into their insoluble oxalates. The oxalates are converted to oxides by annealing. The oxides are dissolved in nitric acid that excludes one of the main components, cerium, whose oxide is insoluble in HNO3. The solution is treated with magnesium nitrate to produce a crystallized mixture of double salts of rare-earth metals. The salts are separated by ion exchange. In this process, rare-earth ions are sorbed onto suitable ion-exchange resin by exchange with hydrogen, ammonium or cupric ions present in the resin. The rare earth ions are then selectively washed out by suitable complexing agent. Erbium metal is obtained from its oxide or salts by heating with calcium at under argon atmosphere. Applications Erbium's everyday uses are varied. It is commonly used as a photographic filter, and because of its resilience it is useful as a metallurgical additive. Lasers and optics A large variety of medical applications (i.e. dermatology, dentistry) utilize erbium ion's emission (see Er:YAG laser), which is highly absorbed in water (absorption coefficient about ). Such shallow tissue deposition of laser energy is necessary for laser surgery, and the efficient production of steam for laser enamel ablation in dentistry. Erbium-doped optical silica-glass fibers are the active element in erbium-doped fiber amplifiers (EDFAs), which
2 Er (s) + 6 H2O (l) → 2 Er(OH)3 (aq) + 3 H2 (g)
Erbium metal reacts with all the halogens:
2 Er (s) + 3 F2 (g) → 2 ErF3 (s) [pink]
2 Er (s) + 3 Cl2 (g) → 2 ErCl3 (s) [violet]
2 Er (s) + 3 Br2 (g) → 2 ErBr3 (s) [violet]
2 Er (s) + 3 I2 (g) → 2 ErI3 (s) [violet]
Erbium dissolves readily in dilute sulfuric acid to form solutions containing hydrated Er(III) ions, which exist as rose red [Er(OH2)9]3+ hydration complexes:
2 Er (s) + 3 H2SO4 (aq) → 2 Er3+ (aq) + 3 SO42− (aq) + 3 H2 (g)
Oxidation states Like most rare-earth elements and lanthanides, erbium is usually found in the +3 oxidation state. However, it is possible for erbium to also be found in the 0, +1 and +2 oxidation states. Organoerbium compounds Organoerbium compounds are very similar to those of the other lanthanides, as they all share an inability to undergo π backbonding. They are thus mostly restricted to the mostly ionic cyclopentadienides (isostructural with those of lanthanum) and the σ-bonded simple alkyls and aryls, some of which may be polymeric. Isotopes Naturally occurring erbium is composed of 6 stable isotopes, 162Er, 164Er, 166Er, 167Er, 168Er and 170Er, with 166Er being the most abundant (33.503% natural abundance). 29 radioisotopes have been characterized, with the most stable being 169Er with a half-life of about 9.4 days; all of the remaining radioactive isotopes have half-lives of no more than a few days, and the majority of these have half-lives that are less than 4 minutes. This element also has 13 meta states. The primary decay mode before the most abundant stable isotope, 166Er, is electron capture, and the primary mode after is beta decay. The primary decay products before 166Er are element 67 (holmium) isotopes, and the primary products after are element 69 (thulium) isotopes. History Erbium (for Ytterby, a village in Sweden) was discovered by Carl Gustaf Mosander in 1843. Mosander was working with a sample of what was thought to be the single metal oxide yttria, derived from the mineral gadolinite. He discovered that the sample contained at least two metal oxides in addition to pure yttria, which he named "erbia" and "terbia" after the village of Ytterby where the gadolinite had been found. Mosander was not certain of the purity of the oxides and later tests confirmed his uncertainty. Not only did the "yttria" contain yttrium, erbium, and terbium; in the ensuing years, chemists, geologists and spectroscopists discovered five additional elements: ytterbium, scandium, thulium, holmium, and gadolinium. Erbia and terbia, however, were confused at this time. A spectroscopist mistakenly switched the names of the two elements during spectroscopy.
dependence of the transuranium elements yield on the amount of retrieved radioactive rock. Shafts were drilled at the site before the test in order to accelerate sample collection after explosion, so that explosion would expel radioactive material from the epicenter through the shafts and to collecting volumes near the surface. This method was tried in two tests and instantly provided hundreds kilograms of material, but with actinide concentration 3 times lower than in samples obtained after drilling. Whereas such method could have been efficient in scientific studies of short-lived isotopes, it could not improve the overall collection efficiency of the produced actinides. Although no new elements (apart from einsteinium and fermium) could be detected in the nuclear test debris, and the total yields of transuranium elements were disappointingly low, these tests did provide significantly higher amounts of rare heavy isotopes than previously available in laboratories. Separation Separation procedure of einsteinium depends on the synthesis method. In the case of light-ion bombardment inside a cyclotron, the heavy ion target is attached to a thin foil, and the generated einsteinium is simply washed off the foil after the irradiation. However, the produced amounts in such experiments are relatively low. The yields are much higher for reactor irradiation, but there, the product is a mixture of various actinide isotopes, as well as lanthanides produced in the nuclear fission decays. In this case, isolation of einsteinium is a tedious procedure which involves several repeating steps of cation exchange, at elevated temperature and pressure, and chromatography. Separation from berkelium is important, because the most common einsteinium isotope produced in nuclear reactors, 253Es, decays with a half-life of only 20 days to 249Bk, which is fast on the timescale of most experiments. Such separation relies on the fact that berkelium easily oxidizes to the solid +4 state and precipitates, whereas other actinides, including einsteinium, remain in their +3 state in solutions. Separation of trivalent actinides from lanthanide fission products can be done by a cation-exchange resin column using a 90% water/10% ethanol solution saturated with hydrochloric acid (HCl) as eluant. It is usually followed by anion-exchange chromatography using 6 molar HCl as eluant. A cation-exchange resin column (Dowex-50 exchange column) treated with ammonium salts is then used to separate fractions containing elements 99, 100 and 101. These elements can be then identified simply based on their elution position/time, using α-hydroxyisobutyrate solution (α-HIB), for example, as eluant. Separation of the 3+ actinides can also be achieved by solvent extraction chromatography, using bis-(2-ethylhexyl) phosphoric acid (abbreviated as HDEHP) as the stationary organic phase, and nitric acid as the mobile aqueous phase. The actinide elution sequence is reversed from that of the cation-exchange resin column. The einsteinium separated by this method has the advantage to be free of organic complexing agent, as compared to the separation using a resin column. Preparation of the metal Einsteinium is highly reactive and therefore strong reducing agents are required to obtain the pure metal from its compounds. 
This can be achieved by reduction of einsteinium(III) fluoride with metallic lithium: EsF3 + 3 Li → Es + 3 LiF However, owing to its low melting point and high rate of self-radiation damage, einsteinium has high vapor pressure, which is higher than that of lithium fluoride. This makes this reduction reaction rather inefficient. It was tried in the early preparation attempts and quickly abandoned in favor of reduction of einsteinium(III) oxide with lanthanum metal: Es2O3 + 2 La → 2 Es + La2O3 Chemical compounds Oxides Einsteinium(III) oxide (Es2O3) was obtained by burning einsteinium(III) nitrate. It forms colorless cubic crystals, which were first characterized from microgram samples sized about 30 nanometers. Two other phases, monoclinic and hexagonal, are known for this oxide. The formation of a certain Es2O3 phase depends on the preparation technique and sample history, and there is no clear phase diagram. Interconversions between the three phases can occur spontaneously, as a result of self-irradiation or self-heating. The hexagonal phase is isotypic with lanthanum oxide where the Es3+ ion is surrounded by a 6-coordinated group of O2− ions. Halides Einsteinium halides are known for the oxidation states +2 and +3. The most stable state is +3 for all halides from fluoride to iodide. Einsteinium(III) fluoride (EsF3) can be precipitated from einsteinium(III) chloride solutions upon reaction with fluoride ions. An alternative preparation procedure is to exposure einsteinium(III) oxide to chlorine trifluoride (ClF3) or F2 gas at a pressure of 1–2 atmospheres and a temperature between 300 and 400 °C. The EsF3 crystal structure is hexagonal, as in californium(III) fluoride (CfF3) where the Es3+ ions are 8-fold coordinated by fluorine ions in a bicapped trigonal prism arrangement. Einsteinium(III) chloride (EsCl3) can be prepared by annealing einsteinium(III) oxide in the atmosphere of dry hydrogen chloride vapors at about 500 °C for some 20 minutes. It crystallizes upon cooling at about 425 °C into an orange solid with a hexagonal structure of UCl3 type, where einsteinium atoms are 9-fold coordinated by chlorine atoms in a tricapped trigonal prism geometry. Einsteinium(III) bromide (EsBr3) is a pale-yellow solid with a monoclinic structure of AlCl3 type, where the einsteinium atoms are octahedrally coordinated by bromine (coordination number 6). The divalent compounds of einsteinium are obtained by reducing the trivalent halides with hydrogen: 2 EsX3 + H2 → 2 EsX2 + 2 HX, X = F, Cl, Br, I Einsteinium(II) chloride (EsCl2), einsteinium(II) bromide (EsBr2), and einsteinium(II) iodide (EsI2) have been produced and characterized by optical absorption, with no structural information available yet. Known oxyhalides of einsteinium include EsOCl, EsOBr and EsOI. These salts are synthesized by treating a trihalide with a vapor mixture of water and the corresponding hydrogen halide: for example, EsCl3 + H2O/HCl to obtain EsOCl. Organoeinsteinium compounds The high radioactivity of einsteinium has a potential use in radiation therapy, and organometallic complexes have been synthesized in order to deliver einsteinium atoms to an appropriate organ in the body. Experiments have been performed on injecting einsteinium citrate (as well as fermium compounds) to dogs. Einsteinium(III) was also incorporated into beta-diketone chelate complexes, since analogous complexes with lanthanides previously showed strongest UV-excited luminescence among metallorganic compounds. 
When preparing einsteinium complexes, the Es3+ ions were 1000 times diluted with Gd3+ ions. This allowed reducing the radiation damage so that the compounds did not disintegrate during the period of 20 minutes required for the measurements. The resulting luminescence from Es3+ was much too weak to be detected. This was explained by the unfavorable relative energies of the individual constituents of the compound that hindered efficient energy transfer from the chelate matrix to Es3+ ions. Similar conclusion was drawn for other actinides americium, berkelium and fermium. Luminescence of Es3+ ions was however observed in inorganic hydrochloric acid solutions as well as in organic solution with di(2-ethylhexyl)orthophosphoric acid. It shows a broad peak at about 1064 nanometers (half-width about 100 nm) which can be resonantly excited by green light (ca. 495 nm wavelength). The luminescence has a lifetime of several microseconds and the quantum yield below 0.1%. The relatively high, compared to lanthanides, non-radiative decay rates in Es3+ were associated with the stronger interaction of f-electrons with the inner Es3+ electrons. Applications There is almost no use for any isotope of einsteinium outside basic scientific research aiming at production of higher transuranium elements and superheavy elements. In 1955, mendelevium was synthesized by irradiating a target consisting of about 109 atoms of 253Es in the 60-inch cyclotron at Berkeley Laboratory. The resulting 253Es(α,n)256Md reaction yielded 17 atoms of the new element with the atomic number of 101. The rare isotope einsteinium-254 is favored for production of ultraheavy elements because of its large mass, relatively long half-life of 270 days, and availability in significant amounts of several micrograms. Hence einsteinium-254 was used as a target in the attempted synthesis of ununennium (element 119) in 1985 by bombarding it with calcium-48 ions at the superHILAC linear particle accelerator at Berkeley, California. No atoms were identified, setting an upper limit for the cross section of this reaction at 300 nanobarns. {^{254}_{99}Es} + {^{48}_{20}Ca} -> {^{302}_{119}Uue^\ast} -> no\ atoms Einsteinium-254 was used as the calibration marker in the chemical analysis spectrometer ("alpha-scattering surface analyzer") of the Surveyor 5 lunar probe. The large mass of this isotope reduced the spectral overlap between signals from the marker and the studied lighter elements of the lunar surface. Safety Most of the available einsteinium toxicity data originates from research on animals. Upon ingestion by rats, only
a = 575 pm. However, there is a report of room-temperature hexagonal einsteinium metal with a = 398 pm and c = 650 pm, which converted to the fcc phase upon heating to 300 °C. The self-damage induced by the radioactivity of einsteinium is so strong that it rapidly destroys the crystal lattice, and the energy release during this process, 1000 watts per gram of 253Es, induces a visible glow. These processes may contribute to the relatively low density and melting point of einsteinium. Further, owing to the small size of the available samples, the melting point of einsteinium was often deduced by observing the sample being heated inside an electron microscope. Thus, the surface effects in small samples could reduce the melting point value. The metal is trivalent and has a noticeably high volatility. In order to reduce the self-radiation damage, most measurements of solid einsteinium and its compounds are performed right after thermal annealing. Also, some compounds are studied under the atmosphere of the reductant gas, for example H2O+HCl for EsOCl so that the sample is partly regrown during its decomposition. Apart from the self-destruction of solid einsteinium and its compounds, other intrinsic difficulties in studying this element include scarcity – the most common 253Es isotope is available only once or twice a year in sub-milligram amounts – and self-contamination due to rapid conversion of einsteinium to berkelium and then to californium at a rate of about 3.3% per day: ^{253}_{99}Es ->[\alpha][20 \ce{d}] ^{249}_{97}Bk ->[\beta^-][314 \ce{d}] ^{249}_{98}Cf Thus, most einsteinium samples are contaminated, and their intrinsic properties are often deduced by extrapolating back experimental data accumulated over time. Other experimental techniques to circumvent the contamination problem include selective optical excitation of einsteinium ions by a tunable laser, such as in studying its luminescence properties. Magnetic properties have been studied for einsteinium metal, its oxide and fluoride. All three materials showed Curie–Weiss paramagnetic behavior from liquid helium to room temperature. The effective magnetic moments were deduced as for Es2O3 and for the EsF3, which are the highest values among actinides, and the corresponding Curie temperatures are 53 and 37 K. Chemical Like all actinides, einsteinium is rather reactive. Its trivalent oxidation state is most stable in solids and aqueous solution where it induces a pale pink color. The existence of divalent einsteinium is firmly established, especially in the solid phase; such +2 state is not observed in many other actinides, including protactinium, uranium, neptunium, plutonium, curium and berkelium. Einsteinium(II) compounds can be obtained, for example, by reducing einsteinium(III) with samarium(II) chloride. The oxidation state +4 was postulated from vapor studies and is as yet uncertain. Isotopes Nineteen isotopes and three nuclear isomers are known for einsteinium, with mass numbers ranging from 240 to 257. All are radioactive and the most stable nuclide, 252Es, has a half-life of 471.7 days. The next most stable isotopes are 254Es (half-life 275.7 days), 255Es (39.8 days), and 253Es (20.47 days). All of the remaining isotopes have half-lives shorter than 40 hours, most shorter than 30 minutes. Of the three nuclear isomers, the most stable is 254mEs with a half-life of 39.3 hours. Nuclear fission Einsteinium has a high rate of nuclear fission that results in a low critical mass for a sustained nuclear chain reaction. 
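The quoted self-contamination rate follows directly from the 253Es half-life shown in the decay chain; a short check, treating the 20.47-day half-life as exact:

    import math

    half_life_days = 20.47                   # 253Es alpha half-life
    lam = math.log(2) / half_life_days       # decay constant, per day

    print(1.0 - math.exp(-lam))              # ~0.033: about 3.3% of the 253Es converts per day
    print(math.exp(-lam * 30.0))             # ~0.36 of the original 253Es left after a month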
This mass is 9.89 kilograms for a bare sphere of 254Es isotope, and can be lowered to 2.9 kilograms by adding a 30-centimeter-thick steel neutron reflector, or even to 2.26 kilograms with a 20-cm-thick reflector made of water. However, even this small critical mass greatly exceeds the total amount of einsteinium isolated thus far, especially of the rare 254Es isotope. Natural occurrence Because of the short half-life of all isotopes of einsteinium, any primordial einsteinium—that is, einsteinium that could possibly have been present on the Earth during its formation—has long since decayed. Synthesis of einsteinium from naturally-occurring actinides uranium and thorium in the Earth's crust requires multiple neutron capture, which is an extremely unlikely event. Therefore, all terrestrial einsteinium is produced in scientific laboratories, high-power nuclear reactors, or in nuclear weapons tests, and is present only within a few years from the time of the synthesis. The transuranic elements from americium to fermium, including einsteinium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Einsteinium was observed and detected in Przybylski's Star in 2008. Synthesis and extraction Einsteinium is produced in minute quantities by bombarding lighter actinides with neutrons in dedicated high-flux nuclear reactors. The world's major irradiation sources are the 85-megawatt High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory in Tennessee, U.S., and the SM-2 loop reactor at the Research Institute of Atomic Reactors (NIIAR) in Dimitrovgrad, Russia, which are both dedicated to the production of transcurium (Z > 96) elements. These facilities have similar power and flux levels, and are expected to have comparable production capacities for transcurium elements, although the quantities produced at NIIAR are not widely reported. In a "typical processing campaign" at Oak Ridge, tens of grams of curium are irradiated to produce decigram quantities of californium, milligram quantities of berkelium (249Bk) and einsteinium and picogram quantities of fermium. The first microscopic sample of 253Es sample weighing about 10 nanograms was prepared in 1961 at HFIR. A special magnetic balance was designed to estimate its weight. Larger batches were produced later starting from several kilograms of plutonium with the einsteinium yields (mostly 253Es) of 0.48 milligrams in 1967–1970, 3.2 milligrams in 1971–1973, followed by steady production of about 3 milligrams per year between 1974 and 1978. These quantities however refer to the integral amount in the target right after irradiation. Subsequent separation procedures reduced the amount of isotopically pure einsteinium roughly tenfold. Laboratory synthesis Heavy neutron irradiation of plutonium results in four major isotopes of einsteinium: 253Es (α-emitter with half-life of 20.47 days and with a spontaneous fission half-life of 7×105 years); 254mEs (β-emitter with half-life of 39.3 hours), 254Es (α-emitter with half-life of about 276 days) and 255Es (β-emitter with half-life of 39.8 days). An alternative route involves bombardment of uranium-238 with high-intensity nitrogen or oxygen ion beams. Einsteinium-247 (half-life 4.55 minutes) was produced by irradiating americium-241 with carbon or uranium-238 with nitrogen ions. The latter reaction was first realized in 1967 in Dubna, Russia, and the involved scientists were awarded the Lenin Komsomol Prize. 
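To put the bare-sphere critical mass quoted above in perspective, the corresponding sphere would be only a few centimetres across; the density used here is an assumed literature estimate for einsteinium metal, so the result is indicative only.

    import math

    mass_g = 9.89e3        # bare-sphere critical mass of 254Es quoted above, in grams
    density = 8.84         # g/cm^3, assumed literature estimate for einsteinium metal

    volume = mass_g / density
    radius_cm = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)
    print(radius_cm)       # ~6.4 cm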
The isotope 248Es was produced by irradiating 249Cf with deuterium ions. It decays mainly to 248Cf by electron capture, with a half-life of the order of tens of minutes, but also releases α-particles of 6.87 MeV energy, with an electron-to-α-particle ratio of about 400. The heavier isotopes 249Es, 250Es, 251Es and 252Es were obtained by bombarding 249Bk with α-particles. One to four neutrons are liberated in this process, making possible the formation of four different isotopes in a single reaction: \ce{^{249}_{97}Bk ->[+\alpha] ^{249,250,251,252}_{99}Es}. Einsteinium-253 was produced by irradiating a 0.1–0.2 milligram 252Cf target with a thermal neutron flux of (2–5)×10¹⁴ neutrons·cm⁻²·s⁻¹ for 500–900 hours: \ce{^{252}_{98}Cf ->[(n,\gamma)] ^{253}_{98}Cf ->[\beta^-][17.81 d] ^{253}_{99}Es}. In 2020, scientists at the Oak Ridge National Laboratory created 233 nanograms of 254Es, a new world record, which allowed some chemical properties of the element to be studied for the first time. Synthesis in nuclear explosions The analysis of the debris from the 10-megaton Ivy Mike nuclear test was part of a long-term project, one of the goals of which was to study the efficiency of production of transuranium elements in high-power nuclear explosions. The motivation for these experiments was that synthesis of such elements from uranium requires multiple neutron captures. The probability of such events increases with the neutron flux, and nuclear explosions are the most powerful man-made neutron sources, providing fluences of the order of 10²³ neutrons/cm² within a microsecond, equivalent to a flux of about 10²⁹ neutrons/(cm²·s). In comparison, the flux of the HFIR reactor is about 5×10¹⁵ neutrons/(cm²·s). A dedicated laboratory was set up right at Enewetak Atoll for preliminary analysis of the debris, as some isotopes could have decayed by the time the samples reached the mainland U.S. The laboratory received samples for analysis as soon as possible, delivered by airplanes equipped with paper filters that flew over the atoll after the tests. Although it was hoped to discover new chemical elements heavier than fermium, none were found even after a series of megaton explosions conducted at the atoll between 1954 and 1956. The atmospheric results were supplemented by underground test data accumulated in the 1960s at the Nevada Test Site, as it was hoped that powerful explosions conducted in confined space might result in improved yields and heavier isotopes. Apart from traditional uranium charges, combinations of uranium with americium and thorium were tried, as well as a mixed plutonium–neptunium charge, but they were less successful in terms of yield, which was attributed to stronger losses of heavy isotopes through enhanced fission rates in heavy-element charges. Product isolation was problematic, as the explosions spread debris by melting and vaporizing the surrounding rock at depths of 300–600 meters, and drilling to such depths to extract the products was both slow and inefficient in terms of collected volumes. Among the nine underground tests carried out between 1962 and 1969, the last one was the most powerful and had the highest yield of transuranium elements. Milligrams of einsteinium that would normally take a year of irradiation in a high-power reactor were produced within a microsecond. However, the major practical problem of the entire proposal was collecting the radioactive debris dispersed by the powerful blast.
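The comparison above between a nuclear explosion and a reactor can be made concrete with one line of arithmetic: dividing the quoted explosion fluence by the reactor flux gives the irradiation time a reactor would need to deliver the same number of neutrons per unit area. The tiny Python sketch below does this; it uses only the figures quoted above and is an order-of-magnitude illustration, not a production calculation.

# Order-of-magnitude comparison of neutron exposure: explosion vs. reactor
EXPLOSION_FLUENCE = 1e23   # neutrons/cm^2, delivered within about a microsecond (quoted above)
REACTOR_FLUX = 5e15        # neutrons/(cm^2*s), HFIR-class flux (quoted above)

seconds_needed = EXPLOSION_FLUENCE / REACTOR_FLUX
print(f"reactor time for the same fluence: {seconds_needed:.1e} s "
      f"= {seconds_needed / 86400:.0f} days "
      f"= {seconds_needed / (86400 * 365.25):.1f} years")
# ~2e7 s, i.e. months of continuous irradiation, consistent with the statement
# that a microsecond-long explosion replaces roughly a year in a high-power reactor.

This equivalence is what made the laborious debris-collection efforts described below worthwhile despite their very low recovery fractions.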
Aircraft filters adsorbed only about 4×10⁻¹⁴ of the total amount, and the collection of tons of corals at Enewetak Atoll increased this fraction by only two orders of magnitude. Extraction of about 500 kilograms of underground rock 60 days after the Hutch explosion recovered only about 10⁻⁷ of the total charge. The amount of transuranium elements in this 500-kg batch was only 30 times higher than in a 0.4-kg rock picked up 7 days after the test, which demonstrated the highly non-linear dependence of the transuranium-element yield on the amount of retrieved radioactive rock. Shafts were drilled at the site before the test in order to accelerate sample collection after the explosion, so that the blast would expel radioactive material from the epicenter through the shafts into collecting volumes near the surface. This method was tried in two tests and instantly provided hundreds of kilograms of material, but with an actinide concentration three times lower than in samples obtained by drilling. While such a method could have been efficient for scientific studies of short-lived isotopes, it could not improve the overall collection efficiency of the produced actinides. Although no new elements (apart from einsteinium and fermium) could be detected in the nuclear test debris, and the total yields of transuranium elements were disappointingly low, these tests did provide significantly higher amounts of rare heavy isotopes than had previously been available in laboratories. Separation The separation procedure for einsteinium depends on the synthesis method. In the case of light-ion bombardment inside a cyclotron, the heavy-element target is attached to a thin foil, and the generated einsteinium is simply washed off the foil after the irradiation. However, the amounts produced in such experiments are relatively low. The yields are much higher for reactor irradiation, but there the product is a mixture of various actinide isotopes, as well as lanthanides produced in nuclear fission. In this case, isolation of einsteinium is a tedious procedure involving several repeated steps of cation exchange, at elevated temperature and pressure, and chromatography. Separation from berkelium is important because the most common einsteinium isotope produced in nuclear reactors, 253Es, decays with a half-life of only 20 days to 249Bk, which is fast on the timescale of most experiments. Such separation relies on the fact that berkelium easily oxidizes to the solid +4 state and precipitates, whereas other actinides, including einsteinium, remain in their +3 state in solutions. Separation of trivalent actinides from lanthanide fission products can be done by a cation-exchange resin column using
him the religious vote, strongest in Bavaria, it has weakened his support at the national level. In 2005, Stoiber successfully lobbied Novartis, the Swiss pharmaceuticals group, to move the headquarters of its Sandoz subsidiary to Munich, making it one of Europe's highest-profile corporate relocations that year as well as a significant boost to Stoiber's attempts to build up Bavaria as a pharmaceuticals and biotechnology center. During his time as Minister-President of Bavaria, Stoiber pushed for the construction of a roughly 40-kilometer high-speed magnetic-levitation link from Munich's main station to its airport, to be built by Transrapid International, a consortium including ThyssenKrupp and Munich-based Siemens. After he left office, the German federal government abandoned the plans in 2008 because of spiraling costs of as much as €3.4 billion. Domestic policy Stoiber, as a minister in the state of Bavaria, was widely known for advocating a reduction in the number of asylum seekers Germany accepts, something that prompted critics to label him xenophobic, anti-Turkish and anti-Islam. In the late 1990s, he criticized the incoming Chancellor Gerhard Schröder for saying that he would work hard in the interest of Germans and people living in Germany. Stoiber's remarks drew heavy criticism in the press. When Germany's Federal Constitutional Court decided in 1995 that a Bavarian law requiring a crucifix to be hung in each of the state's 40,000 classrooms was unconstitutional, Stoiber said he would not order the removal of crucifixes "for the time being", and asserted that he was under no obligation to remove them in schools where parents unanimously opposed such action. During his 2002 election campaign, Stoiber indicated he would not ban same-sex marriages—sanctioned by the Schröder government—a policy he had vehemently objected to when it was introduced. Media policy Stoiber has been a staunch advocate of changes in German law that would give more power to owners of private TV channels. In 1995, he publicly called for the abolition of Germany's public television service ARD and a streamlining of its regional services, adding that he and Minister-President Kurt Biedenkopf of Saxony would break the contract ARD has with regional governments if reforms were not undertaken. However, when European Commissioner for Competition Karel van Miert unveiled ideas for reforming the rules governing the financing of public service broadcasters in 1998, Stoiber led the way in rejecting moves to reform established practice. Controversies Comments on East Germany During the run-up to the German general election in 2005, which was held ahead of schedule, Stoiber created controversy through a campaign speech held in the beginning of August 2005 in the federal state of Baden-Württemberg. He said, "I do not accept that the East [of Germany] will again decide who will be Germany's chancellor. It cannot be allowed that the frustrated determine Germany's fate." People in the new federal states of Germany (the former German Democratic Republic) were offended by Stoiber's remarks. While the CSU attempted to portray them as "misinterpreted", Stoiber created further controversy when he claimed that "if it was like Bavaria everywhere, there wouldn't be any problems. Unfortunately, not everyone in Germany is as intelligent as in Bavaria." The tone of the comments was exacerbated by a perception by some within Germany of the state of Bavaria as "arrogant". 
Many, including members of the CDU, regard Stoiber's comments and behavior as a contributing factor in the CDU's losses in the 2005 general election. He was accused by many in the CDU/CSU of offering "half-hearted" support to Angela Merkel, with some even accusing him of being reluctant to support a female candidate from the East. (This also contrasted unfavorably with Merkel's robust support for his candidacy in the 2002 election.) He has insinuated that votes were lost because of the choice of a female candidate. He came under heavy fire for these comments from press and politicians alike, especially since he himself lost almost 10% of the Bavarian vote, a dubious feat in itself as Bavarians tend to vote consistently conservative. Nonetheless, a poll suggested that over 9% might have voted differently if the conservative candidate had been a man from the West, although this does not clearly show whether such a candidate would have gained or lost votes for the conservatives. BayernLB activities When the Croatian National Bank turned down BayernLB's original bid to take over the local arm of Hypo Alpe-Adria-Bank International, the decision drew strong criticism from Stoiber, who called it "unacceptable" and a "severe strain" on Bavaria's relations with Croatia. Croatia was seeking to join the European Union at the time. The central bank's board later reviewed and accepted BayernLB's offer of 1.6 billion euros. The investment in Hypo Group Alpe Adria was part of a series of ill-fated investments which later forced BayernLB to take a 10-billion-euro bailout in the financial crisis. European Commission job In September 2015, Emily O'Reilly, the European Ombudsman, received a complaint from two NGOs, Corporate Europe Observatory and Friends of the Earth, according to which Stoiber's appointment as special adviser on the Commission's better-regulation agenda broke internal rules on appointments. Personal life Stoiber is Roman Catholic. He is married to Karin Stoiber. They have three children, Constanze (born 1971, married Hausmann), Veronica (born 1977, married Saß) and Dominic (born 1980), and five grandchildren: Johannes (1999), Benedikt (2001), Theresa Marie (2005), Ferdinand (2009) and another grandson (2011). Stoiber is a keen football fan and official. In his youth, he played for the local football side BCF Wolfratshausen. Stoiber serves as a member of the Supervisory Board of FC Bayern München AG (the stock corporation that runs the professional football section) and as Chairman of the Administrative Advisory Board of FC Bayern Munich e.V. (the club that owns the majority of the stock corporation). Before the 2002 election, FC Bayern
general manager Uli Hoeneß expressed his support for Stoiber and the CSU. Football legend, former FC Bayern president and DFB vice president Franz Beckenbauer showed his support for Stoiber by letting him join the German national football team on their flight home from Japan after the 2002 FIFA World Cup. Honours and awards 1984: Bavarian Order of Merit 1996: Karl Valentin Order 1996: Grand Order of King Dmitar Zvonimir 1999: Grand Cross of the Order of the Star of Romania 2000: Orden wider den tierischen Ernst 2002: Commander of the Legion of Honour 2003: Officer of the Ordre national du Québec 2004: Grand Cross of the Order of Merit of the Federal Republic of Germany 2005: Grand Decoration of Honour in Gold with Sash for Services to the Republic of Austria 2006: Grand Cross of Order of Merit of the Italian Republic 2007: Large Gold Medal of the province of Upper Austria 2007: Honorary degree awarded by the Sogang University 2008: Steiger Award 2009: Order of Merit of Baden-Württemberg Literature Michael Stiller: Edmund Stoiber: der Kandidat. Econ, München 2002. Jürgen Roth, Peter Köhler: Edmund G. Stoiber: Weltstaatsmann und Freund des Volkes. Eichborn, Frankfurt 2002. Jule Philippi: Wer für alles offen ist, ist nicht ganz dicht. Weisheiten des Edmund Stoiber. Rowohlt, Reinbek bei Hamburg 2007.
Erfurt. It occurred after reunification for a short time in the 1990s, but most of the suburban areas were situated within the administrative city borders. The birth deficit was 200 in 2012, which is −1.0 per 1,000 inhabitants (Thuringian average: −4.5; national average: −2.4). The net migration rate was +8.3 per 1,000 inhabitants in 2012 (Thuringian average: −0.8; national average: +4.6). The most important regions of origin of Erfurt migrants are rural areas of Thuringia, Saxony-Anhalt and Saxony, as well as foreign countries like Poland, Russia, Syria, Afghanistan and Hungary. As in other eastern German cities, foreigners account for only a small share of Erfurt's population: circa 3.0% are non-Germans by citizenship and overall 5.9% are migrants (according to the 2011 EU census). Due to the official atheism of the former GDR, most of the population is non-religious. 14.8% are members of the Evangelical Church in Central Germany and 6.8% are Catholics (according to the 2011 EU census). The Jewish Community consists of 500 members; most of them migrated to Erfurt from Russia and Ukraine in the 1990s. Culture, sights and cityscape Residents notable in cultural history Martin Luther (1483–1546) studied law and philosophy at the University of Erfurt from 1501. He lived in St. Augustine's Monastery in Erfurt as a friar from 1505 to 1511. The theologian, philosopher and mystic Meister Eckhart (c. 1260–1328) entered the Dominican monastery in Erfurt when he was aged about 18 (around 1275). Eckhart was the Dominican Prior at Erfurt from 1294 until 1298, and Vicar of Thuringia from 1298 to 1302. After a year in Paris, he returned to Erfurt in 1303 and administered his duties as Provincial of Saxony from there until 1311. Max Weber (1864–1920) was born in Erfurt. He was a sociologist, philosopher, jurist, and political economist whose ideas have profoundly influenced modern social theory and social research. The textile designer Margaretha Reichardt (1907–1984) was born and died in Erfurt. She studied at the Bauhaus from 1926 to 1930, and while there worked with Marcel Breuer on his innovative chair designs. Her former home and weaving workshop in Erfurt, the Margaretha Reichardt Haus, is now a museum, managed by the Angermuseum Erfurt. Johann Pachelbel (1653–1706) served as organist at the Prediger church in Erfurt from June 1678 until August 1690. Pachelbel composed approximately seventy pieces for organ while in Erfurt. After 1906 the composer Richard Wetz (1875–1935) lived in Erfurt and became the leading figure in the town's musical life. His major works were written here, including three symphonies, a Requiem and a Christmas Oratorio. Alexander Müller (1808–1863), pianist, conductor and composer, was born in Erfurt. He later moved to Zürich, where he served as leader of the General Music Society's subscription concert series. The city is the birthplace of one of Johann Sebastian Bach's cousins, Johann Bernhard Bach, as well as Johann Sebastian Bach's father, Johann Ambrosius Bach. Bach's parents were married in 1668 in a small church, the (Merchant's Church), which still exists on the main square, Anger. Famous modern musicians from Erfurt are Clueso, the Boogie Pimps and Yvonne Catterfeld. Museums Erfurt has a great variety of museums: The (municipal museum) shows aspects of Erfurt's history with a focus on the Middle Ages, early modern history, Martin Luther and the university.
Other parts of the are the (new mill), an old water mill still in operation, and the (Benary's magazine) with an exhibition of old printing machines. The (Old Synagogue) is one of the oldest synagogue buildings in Europe. It is now a museum of local Jewish history. It houses facsimiles of medieval Hebrew manuscripts and the Erfurt Treasure, a hoard of coins and goldsmiths' work that is assumed to have belonged to Jews who hid them in 1349 at the time of the Black Death pogroms. The (Topf and Sons memorial) is on the site of the factory of the company which constructed crematoria for Auschwitz and other concentration camps. Its exhibitions explore the collaboration of a civilian company with the National Socialist regime in the holocaust. Memorial and Education Centre Andreasstrasse, (Stasi Museum). On the site of the former Erfurt Stasi prison, where over 5000 people were held. On 4 December 1989, the building was occupied by local residents. It was the first of many such takeovers of Stasi buildings in the former East Germany. Today it has exhibitions on the history of East Germany and the activities of its regime. The Angermuseum is one of the main art museums of Erfurt, named after Anger Square, where it is located. It focuses on modern graphic arts, medieval sculpture and early modern artisanal handicraft. The (Erfurt City Art Gallery) has exhibitions of contemporary art, of local, national and international artists. The Margaretha Reichardt Haus is the home and workshop of the textile designer and former Bauhaus student, Margaretha Reichardt (1907–1984). The (Saint Peter's church) houses an exhibition of concrete art, i.e. totally abstract art (not art made out of concrete). The (German Horticulture Museum) is housed at the Cyriaksburg Citadel. The (Natural History Museum) is situated in a medieval woad warehouse and explores Thuringian flora and fauna, geology and ecology. The (Museum of Folk Art and Cultural Anthropology) looks at the ordinary life of people in Thuringia in the past and shows exhibits of peasant and artisan traditions. The (Museum of Electrical Engineering) shows the history of electric engines, which have featured prominently in Erfurt's economy. in the district of Molsdorf is a Baroque palace with an exhibition about the painter . Image gallery Theatre Since 2003, the modern opera house is home to Theater Erfurt and its Philharmonic Orchestra. The "grand stage" section has 800 seats and the "studio stage" can hold 200 spectators. In September 2005, the opera Waiting for the Barbarians by Philip Glass premiered in the opera house. The Erfurt Theater has been a source of controversy recently. In 2005, a performance of Engelbert Humperdinck's opera stirred up the local press since the performance contained suggestions of pedophilia and incest. The opera was advertised in the program with the addition "for adults only". On 12 April 2008, a version of Verdi's opera directed by Johann Kresnik opened at the Erfurt Theater. The production stirred deep controversy by featuring nude performers in Mickey Mouse masks dancing on the ruins of the World Trade Center and a female singer with a painted on Hitler toothbrush moustache performing a straight arm Nazi salute, along with sinister portrayals of American soldiers, Uncle Sam, and Elvis Presley impersonators. The director described the production as a populist critique of modern American society, aimed at showing up the disparities between rich and poor. 
The controversy prompted one local politician to call for locals to boycott the performances, but this was largely ignored and the première was sold out. Sport The Messe Erfurt serves as home court for the Oettinger Rockets, a professional basketball team in Germany's first division, the Basketball Bundesliga. Notable types of sport in Erfurt are athletics, ice skating, cycling (with the oldest velodrome in use in the world, opened in 1885), swimming, handball, volleyball, tennis and football. The city's football club is member of and based in with a capacity of 20,000. The was the second indoor speed skating arena in Germany. Cityscape Erfurt's cityscape features a medieval core of narrow, curved alleys in the centre surrounded by a belt of architecture, created between 1873 and 1914. In 1873, the city's fortifications were demolished and it became possible to build houses in the area in front of the former city walls. In the following years, Erfurt saw a construction boom. In the northern area (districts Andreasvorstadt, Johannesvorstadt and Ilversgehofen) tenements for the factory workers were built whilst the eastern area (Krämpfervorstadt and Daberstedt) featured apartments for white-collar workers and clerks and the southwestern part (Löbervorstadt and Brühlervorstadt) with its beautiful valley landscape saw the construction of villas and mansions of rich factory owners and notables. During the interwar period, some settlements in Bauhaus style were realized, often as housing cooperatives. After World War II and over the whole GDR period, housing shortages remained a problem even though the government started a big apartment construction programme. Between 1970 and 1990 large settlements with high-rise blocks on the northern (for 50,000 inhabitants) and southeastern (for 40,000 inhabitants) periphery were constructed. After reunification the renovation of old houses in city centre and the areas was a big issue. The federal government granted substantial subsidies, so that many houses could be restored. Compared to many other German cities, little of Erfurt was destroyed in World War II. This is one reason why the centre today offers a mixture of medieval, Baroque and Neoclassical architecture as well as buildings from the last 150 years. Public green spaces are located along Gera river and in several parks like the , the and the . The largest green area is the , a horticultural exhibition park and botanic garden established in 1961. Sights and architectural heritage Churches, monasteries and synagogues The city centre has about 25 churches and monasteries, most of them in Gothic style, some also in Romanesque style or a mixture of Romanesque and Gothic elements, and a few in later styles. The various steeples characterize the medieval centre and led to one of Erfurt's nicknames as the "Thuringian Rome". Catholic churches and monasteries The (All Saints' Church) is a 14th-century Gothic parish church in Market Street, which hosts a columbarium. The (St Mary's Cathedral) perches above Domplatz, the Cathedral square. It is the Episcopal see and one of the main sights of Erfurt. It combines Romanesque and Gothic elements and has the largest medieval bell in the world, which is named Gloriosa. One of the works of art inside the cathedral is Lucas Cranach the Elder's 'The Mystic Marriage of St. Catherine' painted around 1520. The (St Laurence's Church) is a small 14th-century Gothic parish church at Anger Square. 
The (St Martin's Church) was built in the 15th century in Gothic style and later converted to Baroque style. It was both a Cistercian monastery and a parish church of Brühl, a medieval suburban zone. The (church of the new work/Holy Cross Church) is a 15th-century Gothic parish church at Neuwerk Street, that was later converted to Baroque style. Until 1285, it was used as an Augustinian monastery. The (Scots Monks' Church of St Nicholas and St James) is an 11th-century Romanesque monastery church with a Baroque façade, which was later used as a parish church. The (St Severus' Church) is the second-largest parish church after the cathedral and stands next to it on the Domberg hill. It is a Gothic church and was built around 1300. The , St. Ursula's Church, is a Gothic church at Anger Square. It is attached to the Ursulinenkloster, St. Ursula's Nunnery, founded in 1136. It is the only medieval monastery or nunnery in Erfurt which has been in continuous operation since it opened. The (St Wigbert's Church) is a 15th-century Gothic parish church at Anger Square. Protestant churches and monasteries (St Giles' Church) is a 14th-century Gothic parish church at Square. It is the surviving one of formerly two bridge-head churches of the located on both ends of the bridge. As a result, the nave is on the 1st floor, while on ground level is a passage to the bridge. The steeple is open to the public and offers a good view over the city centre. Today, St Giles' Church is a Methodist parish church. (St Andrew's Church) is a 14th-century Gothic parish church at Andrew's Street. The old craftsmen's quarter around it is named after the church. St. Augustine's Monastery dates from 1277. Martin Luther lived there as a monk between 1505 and 1511. The site has had a varied history and the restored complex has both modern and medieval buildings. Today it belongs to the Evangelical Church in Germany and as well as being a place of worship it is also a meeting and conference centre, and provides simple guest accommodation. In 2016 an application was made for it to be included in the already existing UNESCO World Heritage Site "Luther sites in Central Germany". The (Merchant's Church St Gregory) is a 14th-century Gothic parish church at Anger Square. It is one of the largest and most important original parish churches in Erfurt. The parents of Johann Sebastian Bach, Johann Ambrosius Bach and Maria Elisabeth Lämmerhirt married here in 1668. (St Michael's Church) is a 13th-century Gothic parish church in Michaelisstrasse. It became the church of the university in 1392. The (Dominican Church) is a Gothic monastery church of the Dominicans at . Since the Reformation in the 16th century, it is the main Protestant church of Erfurt and furthermore one of the largest former churches of the mendicant orders in Germany. The theologian and mystic Meister Eckhart (c. 1260 – 1328) entered Prediger Monastery around 1275. He was Prior from 1294 until 1298, and Vicar of Thuringia from 1298 to 1302. After a year in Paris, he returned to the monastery in 1303 and administered his duties as Provincial of Saxony from there until 1311. The baroque composer Johann Pachelbel (1653–1706) was organist at the church from 1678 until 1690. The (Regulated St Augustine's Church) is a 12th-century Romanesque-Gothic monastery church of the Augustinians at Station Street. After the Reformation, it became a Protestant parish church. Former churches The is a 14th-century Gothic monastery church at . 
The former Franciscan monastery became a Protestant parish church after the Reformation. In 1944, the church was badly damaged by Allied bombing. Since that time its ruins have been preserved as a war memorial. The (St Bartholomew's Church) was a parish church at Anger Square. The church was demolished before 1667 and only the steeple remained. Today, the steeple hosts a carillon with 60 bells. The (St George's Church) was a parish church in Michaelisstraße. It was demolished in 1632 and only the church tower now remains. The (Hospital Church) was the church of the former Great City Hospital at . It is a 14th-century Gothic building and is used today as a depot by the Museum für Thüringer Volkskunde (Museum of Thuringian Ethnology). The (St John's Church) was a parish church at John's Street. It was demolished in 1819, but the steeple remained. The (Carthusian Church, Mount St Saviour) was a monastery church at . The Baroque church was closed in 1803 and afterwards used for many different purposes. Today, it is part of a housing complex. The (St Nicholas' Church) was a parish church in Augustine's Street. It was demolished in 1747 and only the steeple remained. The (St Paul's Church) was a parish church in Paul's Street. It was demolished before 1759. The steeple remains and is in use as the belfry of the Prediger Church. The (St Peter's Church) was built in the 12th century in Romanesque style as a church of the Benedictine monastery of St Peter and Paul on Petersberg hill, now the site of Petersberg Citadel. It was secularised in 1803 and used as a military storehouse. Today it houses an art gallery. Synagogues The oldest parts of Erfurt's Alte Synagoge (Old Synagogue) date to the 11th century. It was used until 1349, when the Jewish community was destroyed in a pogrom known as the Erfurt Massacre. The building has had many other uses since then. It was conserved in the 1990s and in 2009 it became a museum of Jewish history. A rare mikveh, a ritual bath dating from c. 1250, was discovered by archeologists in 2007. It has been accessible to visitors on guided tours since September 2011. In 2015 the Old Synagogue and the mikveh were nominated as a World Heritage Site. They have been tentatively listed, but a final decision has not yet been made. As religious freedom was granted in the 19th century, some Jews returned to Erfurt. They built their synagogue on the banks of the Gera river and used it from 1840 until 1884. The neoclassical building is known as the Kleine Synagoge (Small Synagogue). Today it is used as an events centre. It is also open to visitors. A larger synagogue, the Große Synagoge (Great Synagogue), was opened in 1884 because the community had become larger and wealthier. This Moorish-style building was destroyed during the nationwide Nazi riots (Kristallnacht) of 9–10 November 1938. In 1947 the land which the Great Synagogue had occupied was returned to the Jewish community, and they built their current place of worship, the Neue Synagoge (New Synagogue), which opened in 1952. It was the only synagogue building erected under communist rule in East Germany. Secular architecture Besides the religious buildings, there is a great deal of historic secular architecture in Erfurt, mostly concentrated in the city centre, but some 19th- and 20th-century buildings are located on the outskirts. Street and square ensembles The (Merchants' bridge) is the most famous tourist attraction of Erfurt. This 15th-century bridge is completely covered with dwellings and is unique in Europe north of the Alps.
Today, there are some art handicraft and souvenir shops in the houses. The (Cathedral Square) is the largest square in Erfurt and one of the largest historical market squares in Germany. The cathedral and St Severus' Church on its western side can be reached over the , a wide flight of stairs. On the north side lies the courthouse, a historic building from 1880. The eastern and southern side is fronted by early-modern patrician houses. On the square are the Minerva Fountain from 1784 and the Erthal Obelisk from 1777. The Domplatz is the main setting of the Erfurt Christmas Market in December and the location for "DomStufen-Festival", an open-air theatre festival in summer. The (Fish Market) is the central square of Erfurt's city centre. It is surrounded by renaissance-style patrician houses and the town hall, a neo-gothic building from 1882. In the middle of the square is a statue called (Roman), a symbol of the city's independence, erected by the citizens in 1591. The (Minor Market) is a small square on the east side of the Gera river (opposite to the Fischmarkt on the west side), surrounded by early-modern patrician and merchants' houses. The fountain on this square with the sculpture "Scuffling Boys" was created in 1975. Today, square also has various cafés and bars. Next to the in is the building, a neoclassicistic event hall from 1831 (current building). The Congress of Erfurt took place here in 1808. The (originally the German term for "village green") is a protracted square in the eastern city centre. All tram lines are linked here, so that it became the new city centre during the 20th century with many important buildings. On its northern side is the main post office, built in 1886 in neo-gothic style with its prominent clock tower. In the north-east there is the Martin Luther monument from 1889 in front of the Merchants' Church. Between the church and the Ursuline monastery lies the "Anger 1" department store from 1908. On the south side next to Station Street is the , the art history museum of Erfurt, inside a Baroque palace from 1711. The western part of Anger square is surrounded by large historicist business houses from the late 19th century. The west end of the square is marked by the Angerbrunnen fountain from 1890. The Jesuit College near was built in 1737 and used until the ban of the Jesuits in 1773. The Willy Brandt Square is the southern gate to the city centre in front of the main station. Opposite to the station is the former hotel , where the first meeting of the East- and West-German heads of government took place in 1970. On the western side is the building of the old Erfurt station (1847–95) with a clock tower and the former offices of the Thuringian Railway Company. The (Deer Garden) is a small park in front of the Thuringian government seat in the western city centre. The minister-president's seat is the , a Renaissance-Baroque palace from the 17th century. The (Michael's Street) is known as "the lithic chronicle of Erfurt", because of its mostly medieval buildings. It is the main street of the Latin quarter around the old university and today one of the favourite nightlife districts of the Erfurters with various bars, restaurants and cafés. The central building of the old university,
was built in 1515, destroyed by Allied bombs in 1945 and rebuilt in 1999. The is an inner-city circular road following the former inner city wall. The road was set out in the 1890s by closing a branch of the Gera river. The buildings along the street originate from all periods of the 20th century, including some GDR-era highrise residence buildings. An old building complex here is the former Great Hospital, established in the 14th century. Today, it hosts the museum of popular art and cultural anthropology. The (St Andrew's Quarter) is a small quarter in the northern part of the city centre between in the south-west and in the north-east. It was the former craftsmen quarter with narrow alleys and old (16th/17th century) little houses. During the 20th century, there were plans to demolish the quarter because of its bad housing conditions. After 1990, the houses were redeveloped by private individuals so that it is one of the favourite neighbourhoods today. The largest building here is the former Municipal Corn Storage in Gothic style from 1466 with a floor area of . Fortifications From 1066 until 1873 the old town of Erfurt was encircled by a fortified wall. About 1168 this was extended to run around the western side of Petersberg hill, enclosing it within the city boundaries. After German Unification in 1871, Erfurt became part of the newly created German Empire. The threat to the city from its Saxon neighbours and from Bavaria was no longer present, so it was decided to dismantle the city walls. Only a few remnants remain today. A piece of inner wall can be found in a small park at the corner Juri-Gagarin-Ring and Johannesstraße and another piece at the flood ditch (Flutgraben) near Franckestraße. There is also a small restored part of the wall in the Brühler Garten, behind the Catholic orphanage.
Only one of the wall's fortified towers was left standing, on Boyneburgufer, but this was destroyed in an air raid in 1944. The Petersberg Citadel is one of the largest and best-preserved city fortresses in Europe, covering an area of 36 hectares in the north-west of the city centre. It was built from 1665 on Petersberg hill and was in military use until 1963. Since 1990, it has been significantly restored and is now open to the public as an historic site. The Cyriaksburg is a smaller citadel south-west of the city centre, dating from 1480. Today, it houses the German Horticulture Museum. 19th- and 20th-century architecture in the outskirts Between 1873 and 1914, a belt of new architecture emerged around the city centre. The mansion district in the south-west hosts a number of interesting Art Nouveau buildings. The "Mühlenviertel" ("mill quarter") is an area of beautiful Art Nouveau apartment buildings, cobblestone streets and street trees just to the north of the old city, in the vicinity of Nord Park, bordered by the Gera river on its east side. The Schmale Gera stream runs through the area. In the Middle Ages, numerous small enterprises powered by water mills occupied the area, hence the name "Mühlenviertel", with street names such as Waidmühlenweg (woad, or indigo, mill way), Storchmühlenweg (stork mill way) and Papiermühlenweg (paper mill way). The Bauhaus style is represented by several housing cooperative projects in the east and north of the city. The Lutherkirche (1927) is an Art Deco building. The former "Wolff" malt factory in the east of Erfurt is a large industrial complex built between 1880 and 1939 and in use until 2000. A new use has not yet been found, but the site is sometimes used as a film location because of its atmosphere. Examples of Nazi architecture include the Thuringian parliament building and an event hall in the south of the city. While the parliament building (1930s) leans towards the neo-Roman/fascist style, the event hall (1940s) shows some neo-Germanic elements. The Stalinist early-GDR style is evident in the main university building (1953), while the later, more international modern GDR style is represented by the horticultural exhibition centre, housing complexes such as Rieth, and redevelopment areas in the city centre. Contemporary glass-and-steel architecture dominates most larger new buildings, such as the Federal Labour Court of Germany (1999), the new opera house (2003), the new main station (2007), the university library, the Erfurt Messe (convention centre) and the ice rink. Economy and infrastructure In recent years, the city's economic situation has improved: the unemployment rate declined from 21% in 2005 to 9% in 2013. Nevertheless, some 14,000 households with 24,500 persons (12% of the population) depend on state social benefits (Hartz IV). Agriculture, industry and services Farming has a long tradition in Erfurt: the cultivation of woad made the city rich during the Middle Ages. Today, horticulture and the production of flower seeds are still an important business in Erfurt. Fruit (such as apples, strawberries and sweet cherries), vegetables (e.g. cauliflower, potatoes, cabbage and sugar beet) and grain are also grown on more than 60% of the municipal territory. Industrialization in Erfurt started around 1850. 
Until World War I, many factories were founded in different sectors such as engine building, shoes, guns, malt and, later, electrical engineering, so there was no industrial monoculture in the city. After 1945, the companies were nationalized by the GDR government, which led to the decline of some of them. After reunification, nearly all factories were closed, either because they failed to adapt to a free market economy or because the German government sold them to West German businessmen who closed them to avoid competition with their own enterprises. However, in the early 1990s the federal government started to subsidize the founding of new companies; even so, it took until around 2006 for the economic situation to stabilize. Since then, unemployment has decreased and new jobs have been created. Today, there are many small and medium-sized companies in Erfurt, with a focus on electrical engineering, semiconductors and photovoltaics. Engine production, food production, the Braugold brewery, and Born Feinkost, a producer of Thuringian mustard, remain important industries. Erfurt is an Oberzentrum (a "supra-centre" in terms of central place theory) in German regional planning. Such centres are always hubs of service businesses and public services such as hospitals, universities, research, trade fairs and retail. Additionally, Erfurt is the capital of the
role their sixth studio album, Crann Úll (1980), with a line-up of siblings Máire, Pól, and Ciarán Brennan, and twin uncles Noel and Pádraig Duggan. She became an official member by the time their follow-up, Fuaim (1981), was released, and is photographed with the band on the front cover. Nicky maintained it was never his intention to make Enya a permanent member, and saw she was "fiercely independent ... intent on playing her own music. She was just not sure of how to go about it". This sparked discussions between the two on layering vocal tracks to create a "choir of one", a concept inspired by the Wall of Sound technique by producer Phil Spector that interested them both. During a Clannad tour in 1982, Nicky called for a band meeting to address internal issues that had arisen. He added, "It was short and only required a vote, I was a minority of one and lost. Roma and I were out. This left the question of what happened with Enya. I decided to stand back and say nothing". Enya chose to leave with the Ryans and pursue a solo career as she felt confined in the group, and disliked being "somebody in the background". This caused some friction between the two parties at first, but they settled their differences. Nicky suggested to Enya that either she return to Gweedore "with no particular definite future", or live with him and Roma in Artane, Dublin "and see what happens, musically", which she accepted. After their bank denied them a loan, Enya sold her saxophone and gave piano lessons for income and the Ryans used what they could afford from their savings to build a recording facility in a shed in their garden. They named it Aigle Studio, after the French word for "eagle", and rented it out to musicians to help recoup the costs. The trio formed a musical partnership, with Nicky as Enya's producer and arranger and Roma her lyricist, and established their music company, Aigle Music. In the following two years, Enya developed her technique and composition by listening to recordings of her reciting pieces of classical music, and repeated the process until she started to improvise sections and develop her own arrangements. Her first composition was "An Taibhse Uaighneach", Irish for "The Lonely Ghost". During this time Enya played the synthesiser on Ceol Aduaidh (1983) by Mairéad Ní Mhaonaigh and Frankie Kennedy, and performed with the duo and Mhaonaigh's brother Gearóid in their short lived group, Ragairne. Enya's first solo endeavour arrived in 1983 when she recorded two piano instrumentals, "An Ghaoth Ón Ghrian", Irish for "The Solar Wind", and "Miss Clare Remembers", at Windmill Lane Studios in Dublin which were released on Touch Travel (1984), a limited release cassette of music from various artists on the Touch label. She is credited as Eithne Ní Bhraonáin on its liner notes. After several months of preparation, Enya's first live solo performance took place on 23 September 1983 at the National Stadium in Dublin, which was televised for RTÉ's music show Festival Folk. Niall Morris, a musician who worked with her during this time, recalled she "was so nervous she could barely get on stage, and she cowered behind the piano until the gig was over." Morris assisted Enya in the production of a demo tape, playing additional keyboards to her compositions, which Roma thought would suit accompanying visuals and sent it to various film producers. Among them was David Puttnam, who liked the tape and offered Enya to compose the soundtrack to his upcoming romantic comedy film, The Frog Prince (1984). 
Enya scored nine pieces for the film but they were later rearranged and orchestrated against her wishes by Richard Myhill, except for two that she sang on, "The Frog Prince" and "Dreams"; the words to the latter were penned by Charlie McGettigan. Film editor Jim Clark said the rearrangements were necessary as Enya found it difficult to compose to picture. Released in 1985, the album is the first commercial release that credits her as Enya, after Nicky Ryan thought "Eithne" would be too difficult for non-Irish people to pronounce correctly and suggested the phonetic spelling of her name. Enya looked back on the film as a good career move, but a disappointing one as "we weren't part of it at the end". She then sang on three tracks on Ordinary Man (1985) by Christy Moore. 1985–1989: The Celts and Watermark In 1985, producer Tony McAuley asked Enya to contribute a track for a six-part BBC television documentary series The Celts. She had already written a Celtic-influenced song named "The March of the Celts", and submitted it to the project. Each episode was to feature a different composer at first, but director David Richardson liked her track so much, he had Enya score the entire series. Enya recorded 72 minutes of music at Aigle Studio and the BBC studios in Wood Lane, London without recording to picture, though she was required to portray certain themes and ideas that the producers wanted. Unlike The Frog Prince, she worked with little interference which granted her freedom to establish her sound that she would adopt throughout her future career, signified by layered vocals, keyboard-oriented music, and percussion with elements of Celtic, classical, church and folk music. In March 1987, two months before The Celts aired, a 40-minute selection of Enya's score was released as her debut solo album, Enya, by BBC Records in the United Kingdom and by Atlantic Records in the United States. The latter promoted it with a new-age imprint on the packaging, which Nicky later thought was "a cowardly thing for them to do". The album gained enough public attention to reach number 8 on the Irish Albums Chart and number 69 on the UK Albums Chart. "I Want Tomorrow" was released as Enya's first single. "Boadicea" was sampled by The Fugees on their 1996 song "Ready or Not"; the group neither sought permission nor gave credit, and Enya took legal action. The group subsequently gave her credit and paid a fee worth around $3 million. Later in 1987, Enya appeared on Sinéad O'Connor's debut album The Lion and the Cobra, reciting Psalm 91 in Irish on "Never Get Old". Several weeks after the release of Enya, Enya secured a recording contract with Warner Music UK after Rob Dickins, the label's chairman and a fan of Clannad, took a liking to Enya and found himself playing it "every night before I went to bed". He then met Enya and the Ryans at a chance meeting at the Irish Recorded Music Association award ceremony in Dublin, and learned Enya had entered negotiations with a rival label. Dickins seized the opportunity and signed her to Warner Music with a deal worth £75,000, granting her wish to write and record with artistic freedom, minimal interference from the label, and without set deadlines to finish albums. Dickins said: "Sometimes you sign an act to make money, and sometimes you sign an act to make music. This was clearly the latter ... I just wanted to be involved with this music." Enya then left Atlantic and signed with the Warner-led Geffen Records to handle her American distribution. 
With the green-light to produce a new studio album, Enya recorded Watermark from June 1987 to April 1988. It was initially recorded in analogue at Aigle Studio before Dickins requested to have it re-recorded digitally at Orinoco Studios in Bermondsey, London. Watermark was released in September 1988 and became an unexpected hit, reaching number 5 in the United Kingdom and number 25 on the Billboard 200 in the United States following its release there in January 1989. Its lead single, "Orinoco Flow", was the last song written for the album. It was not intended to be a single at first, but Enya and the Ryans chose it after Dickins asked for a single from them several times as a joke, knowing Enya's music was not made for the Top 40 chart. Dickins and engineer Ross Cullum are referenced in the songs' lyrics. "Orinoco Flow" became an international top 10 hit and was number one in the United Kingdom for three weeks. The new-found success propelled Enya to international fame and she received endorsement deals and offers to use her music in television commercials. She spent one year travelling worldwide to promote the album which increased her exposure through interviews, appearances, and live performances. By 1996, Watermark had sold in excess of 1.2 million copies in the United Kingdom and 4 million in the United States. 1989–1997: Shepherd Moons and The Memory of Trees After promoting Watermark, Enya purchased new recording equipment and started work on her next album, Shepherd Moons. She found the success of Watermark caused a considerable amount of pressure when it came to writing new songs, adding, "I kept thinking, 'Would this have gone on Watermark? Is it as good?' Eventually I had to forget about this and start on a blank canvas and just really go with what felt right." Enya wrote songs based on several ideas, including entries from her diary, the Blitz in London, and her grandparents. Shepherd Moons was released in November 1991, her first album released under Warner-led Reprise Records in the United States. It became a greater commercial success than Watermark, reaching number one in the UK for one week and number 17 in the United States. "Caribbean Blue", its lead single, charted at number thirteen in the United Kingdom. By 1997, the album had reached multi-platinum certification for selling in excess of 1.2 million copies in the United Kingdom and 5 million in the United States. In 1991, Warner Music released a collection of five Enya music videos as Moonshadows for home video. In 1993, Enya won her first Grammy Award for Best New Age Album for Shepherd Moons. Soon after, Enya and Nicky entered discussions with Industrial Light & Magic, founded by George Lucas, regarding an elaborate stage lighting system for a proposed concert tour, but nothing came out of the meetings. In November 1992, Warner had obtained the rights to Enya and re-released the album as The Celts with new artwork. It surpassed its initial sale performance, reaching number 10 in the United Kingdom and reached platinum certification in the United States in 1996 for one million copies shipped. After travelling worldwide to promote Shepherd Moons, Enya started to write and record her fourth album, The Memory of Trees. The album was released in November 1995. It peaked at number five in the United Kingdom and number nine in the United States, where it sold over 3 million copies. Its lead single, "Anywhere Is", reached number seven in the United Kingdom. 
The second, "On My Way Home", reached number twenty-six in the same country. In late 1994, Enya put out an extended play of Christmas music titled The Christmas EP. Enya was offered to compose the score for Titanic, but declined. A recording of her singing "Oíche Chiúin", an Irish-language version of "Silent Night", appeared on the charity album A Very Special Christmas 3, released in benefit of the Special Olympics in October 1997. In early 1997, Enya began to select tracks for her first compilation album, "trying to select the obvious ones, the hits, and others." She chose to work on the collection following the promotional tour for The Memory of Trees as she felt it was the right time in her career, and that her contract with WEA required her to release a "best of" album. The set, named Paint the Sky with Stars: The Best of Enya, features two new tracks, "Paint the Sky with Stars" and "Only If...". Released in November 1997, the album was a worldwide commercial success, reaching No. 4 in the UK and No. 30 in the US, where it went on to sell over 4 million copies. "Only If..." was released as a single in 1997. Enya described the album as "like a musical diary ... each melody has a little story and I live through that whole story from the beginning ... your mind goes back to that day and what you were thinking." 1998–2007: A Day Without Rain and Amarantine Enya started work on her fifth studio album, titled A Day Without Rain, in mid-1998. In a departure from her previous albums she incorporated the use of a string section into her compositions, something that was not a conscious decision at first, but Enya and Nicky Ryan agreed it complemented the songs that were being written. The album was released in November 2000, and reached number 6 in the United Kingdom and an initial peak of number 17 in the United States. In the aftermath of the 11 September attacks, sales of the album and its lead single, "Only Time", surged after the song was widely used during radio and television coverage of the events, leading to its description as "a post-September 11 anthem". The exposure caused A Day Without Rain to outperform its original chart performance to peak at number 2 on the Billboard 200, and the release of a maxi single containing the original and a pop remix of "Only Time" in November 2001. Enya donated its proceeds in aid of the International Association of Firefighters. The song topped the Billboard Hot Adult Contemporary Tracks chart and went to number 10 on the Hot 100 singles, Enya's highest charting US single to date. A second single, "Wild Child", was released in December 2001. A Day Without Rain remains Enya's biggest seller, with 7 million copies sold in the US and the most sold new-age album of all time with an estimated 13 million copies sold worldwide. In 2001, Enya agreed to write and perform on two tracks for the soundtrack of The Lord of the Rings: The Fellowship of the Ring (2001) at the request of director Peter Jackson. Its composer Howard Shore "imagined her voice" as he wrote the film's score, making an uncommon exception to include another artist in one of his soundtracks. After flying to New Zealand to observe the filming and to watch a rough cut of the film, Enya returned to Ireland and composed "Aníron (Theme for Aragorn and Arwen)" with lyrics by Roma in J. R. R. Tolkien's fictional Elvish language Sindarin, and "May It Be", sung in English and another Tolkien language, Quenya. 
Shore then based his orchestrations around Enya's recorded vocals and themes to create "a seamless sound". In 2002, Enya released "May It Be" as a single which earned her an Academy Award nomination for Best Original Song. She performed the song live at the 74th Academy Awards ceremony with an orchestra in March 2002, and later cited the moment as a career highlight. Enya undertook additional studio projects in 2001 and 2002. The first was work on the soundtrack to the Japanese romantic film Calmi Cuori Appassionati (2001) which was subsequently released as Themes from Calmi Cuori Appassionati (2001). The album is formed of tracks spanning her career from Enya to A Day Without Rain with two B-sides. The album went to number 2 in Japan, and became Enya's second to sell one million copies in the country. November 2002 saw the release of Only Time – The Collection, a box set of 51 tracks recorded through her career which received a limited release of 200,000 copies. In September 2003, Enya returned to Aigle Studio to start work on her sixth studio album, Amarantine. Roma said the title means "everlasting". The album marks the first instance of Enya singing in Loxian, a fictional language created by Roma that came about when Enya was working on "Water Shows the Hidden Heart". After numerous attempts to sing the song in English, Irish and Latin, Roma suggested a new language based on some of the sounds Enya would sing along to when developing her songs. It was a success, and Enya sang "Less Than a Pearl" and "The River Sings" in the same way. Roma worked on the language further, creating a "culture and history" behind it surrounding the Loxian people who are of another planet, questioning the existence of life on another. "Sumiregusa (Wild Violet)" is sung in Japanese. Amarantine was a global success, reaching number 6 on the Billboard 200 and number 8 in the UK. It has sold over 1 million certified copies in the US, a considerable drop in sales in comparison to her previous albums. Enya dedicated the album to BBC producer Tony McAuley, who had commissioned Enya to write the soundtrack to The Celts, following his death in 2003. The lead single, "Amarantine", was released in December 2005. A Christmas Special Edition was released in 2006, followed by a Deluxe Edition. In 2006, Enya released Sounds of the Season: The Enya Holiday Collection, a Christmas-themed EP released exclusively in the US following an exclusive partnership with the NBC network and the Target department store chain. It includes two new songs, "Christmas Secrets" and "The Magic of the Night". In June 2007, Enya received an honorary doctorate from the National University of Ireland, Galway. A month later, she received one from the University of Ulster. 2008–present: And Winter Came..., Dark Sky Island, and future Enya continued to write music with a winter and Christmas theme for her seventh studio album, And Winter Came.... Initially she intended to make an album of seasonal songs and hymns set for a release in late 2007, but decided to produce a winter-themed album instead. The track "My! My! Time Flies!", a tribute to the late Irish guitarist Jimmy Faulkner, incorporates a guitar solo performed by Pat Farrell, the
Enya has won numerous awards, including seven World Music Awards, four Grammy Awards for Best New Age Album, and an Ivor Novello Award. She was nominated for an Academy Award and a Golden Globe Award for "May It Be", written for The Lord of the Rings: The Fellowship of the Ring (2001). Early life Eithne Pádraigín Ní Bhraonáin was born on 17 May 1961 in Dore, a settlement in the parish of Gweedore, in County Donegal. It is a Gaeltacht region where Irish is the primary language. Her name is anglicised as Enya Patricia Brennan, where Enya is the phonetic spelling of how "Eithne" is pronounced in her native Ulster dialect of Irish; "Ní Bhraonáin" translates to "daughter of Brennan". The sixth of nine children, Enya was born into a Roman Catholic family of musicians. Her father, Leo Brennan, was the leader of the Slieve Foy Band, an Irish showband, and ran Leo's Tavern in Meenaleck. Her mother, Máire Brennan (née Duggan), who has distant Spanish roots through ancestors who settled on Tory Island, was an amateur musician who played in Leo's band and taught music at Gweedore Community School. Enya's maternal grandfather Aodh was the headmaster of the primary school in Dore, and her grandmother was a teacher there. Aodh was also the founder of the Gweedore Theatre company. Enya described her upbringing as "very quiet and happy." At age three, she took part in her first singing competition at the annual Feis Ceoil music festival. She took part in pantomimes at Gweedore Theatre and sang with her siblings in her mother's choir at St Mary's church in Derrybeg. She learned English at primary school and began piano lessons at age four. "I had to do school work and then travel to a neighbouring town for piano lessons, and then more school work. I ... remember my brothers and sisters playing outside ... and I would be inside playing the piano. This one big book of scales, practising them over and over." When Enya turned eleven, her grandfather paid for her education at a strict convent boarding school in Milford run by nuns of the Loreto order, where she developed a taste for classical music, art, Latin and watercolour painting. She said: "It was devastating to be torn away from such a large family, but it was good for my music." Enya left the school at 17 and studied classical music in college for one year, with the aim of becoming "a piano teacher sort of person. I never thought of myself composing or being on stage." Career 1980–1985: Clannad and early solo career In 1970, several members of Enya's family formed Clannad, a Celtic folk band who later acquired Nicky Ryan as their manager, sound engineer, and producer, and his future wife, Roma Ryan, as tour manager and administrator. In 1980, after her year at college, Enya decided against pursuing music at university and accepted Ryan's invitation to join the group, as he wanted to expand their sound with an additional vocalist and the introduction of keyboards. Enya performed an uncredited role on their sixth studio album, Crann Úll (1980). 
the common administration broke apart during the following months. In the Soviet sector, a separate city government was established, which continued to call itself the "Magistrate of Greater Berlin". When the German Democratic Republic was established in 1949, it immediately claimed East Berlin as its capital—a claim that was recognized by all communist countries. Nevertheless, its representatives to the People's Chamber were not directly elected and did not have full voting rights until 1981. In June 1948, all railways and roads leading to West Berlin were blocked, and East Berliners were not allowed to emigrate. Nevertheless, more than 1,000 East Germans were escaping to West Berlin each day by 1960, caused by the strains on the East German economy from war reparations owed to the Soviet Union, massive destruction of industry, and lack of assistance from the Marshall Plan. In August 1961, the East German Government tried to stop the population exodus by enclosing West Berlin within the Berlin Wall. It was very dangerous for fleeing residents to cross because armed soldiers were trained to shoot illegal migrants. East Germany was a socialist republic. Eventually, Christian churches were allowed to operate without restraint after years of harassment by authorities. In the 1970s, the wages of East Berliners rose and working hours fell. The Soviet Union and the Communist bloc recognized East Berlin as the GDR's capital. However, Western Allies (the US, UK, and France) never formally acknowledged the authority of the East German government to govern East Berlin. Official Allied protocol recognized only the authority of the Soviet Union in East Berlin in accordance with the occupation status of Berlin as a whole. The United States Command Berlin, for example, published detailed instructions for U.S. military and civilian personnel wishing to visit East Berlin. In fact, the three Western commandants regularly protested against the presence of the East German National People's Army (NVA) in East Berlin, particularly on the occasion of military parades. Nevertheless, the three Western Allies eventually established embassies in East Berlin in the 1970s, although they never recognized it as the
capital of East Germany. Treaties instead used terms such as "seat of government." On 3 October 1990, East and West Germany and East and West Berlin were reunited, thus formally ending the existence of East Berlin. Citywide elections in December 1990 resulted in the first "all-Berlin" mayor being elected to take office in January 1991, with the separate offices of mayor in East and West Berlin expiring at that time; Eberhard Diepgen, a former mayor of West Berlin, became the first elected mayor of a reunited Berlin. East Berlin today Since reunification, the German government has spent vast amounts of money on reintegrating the two halves of the city and bringing services and infrastructure in the former East Berlin up to the standard established in West Berlin. After reunification, the East German economy suffered significantly. Under the policy of privatizing state-owned firms through the Treuhandanstalt, many East German factories were shut down, which also led to mass unemployment, due to gaps in productivity and investment compared with West German companies, as well as an inability to comply with West German pollution and safety standards in a way that was deemed cost-effective. Because of this, a massive amount of West German economic aid was poured into East Germany to revitalize it. This stimulus was part-funded through a 7.5% tax on income for individuals and companies (in addition to normal income tax or company tax), levied under the Solidaritätszuschlaggesetz (SolZG) and known as the "solidarity surcharge", which, though only in effect for 1991–1992 (it was reintroduced in 1995 at 7.5%, lowered to 5.5% in 1998, and continues to be levied to this day), led to a great deal of resentment toward the East Germans. Despite the large sums of economic aid poured into East Berlin, obvious differences still remain between the former East and West Berlin.
Natural Resources and Environment of the South Pacific Region, Nouméa, 1986 Convention on Assistance in the Case of a Nuclear Accident or Radiological Emergency (Assistance Convention), Vienna, 1986 Convention on the Ban of the Import into Africa and the Control of Transboundary Movements and Management of Hazardous Wastes within Africa, Bamako, 1991 Convention on Biological Diversity (CBD), Nairobi, 1992 Convention on Certain Conventional Weapons Convention on Civil Liability for Damage Caused during Carriage of Dangerous Goods by Road, Rail, and Inland Navigation Vessels (CRTD), Geneva, 1989 Convention on Cluster Munitions Convention on the Conservation of European Wildlife and Natural Habitats Convention on the Conservation of Migratory Species of Wild Animals (CMS), Bonn, 1979 Convention on Early Notification of a Nuclear Accident (Notification Convention), Vienna, 1986 Convention on Fishing and Conservation of Living Resources of the High Seas Convention on the International Trade in Endangered Species of Wild Flora and Fauna (CITES), Washington, DC, 1973 Convention on Long-Range Transboundary Air Pollution Convention on Nature Protection and Wild Life Preservation in the Western Hemisphere, Washington, DC, 1940 Convention on Nuclear Safety, Vienna, 1994 EMEP Protocol Heavy Metals Protocol Multi-effect Protocol (Gothenburg protocol) Nitrogen Oxide Protocol POP Air Pollution Protocol Sulphur Emissions Reduction Protocols 1985 and 1994 Volatile Organic Compounds Protocol Convention on the Prevention of Marine Pollution by Dumping Wastes and Other Matter Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques Convention on the Protection and Use of Transboundary Watercourses and International Lakes (ECE Water Convention), Helsinki, 1992 Convention on the Protection of the Black Sea against Pollution, Bucharest, 1992 Convention on the Protection of the Marine Environment of the Baltic Sea Area 1992 Helsinki Convention, Helsinki, 1992 Convention on the Transboundary Effects of Industrial Accidents, Helsinki, 1992 Convention on Wetlands of International Importance Especially As Waterfowl Habitat (notably not a Multilateral Environmental Agreement) Convention to Ban the Importation into Forum Island Countries of Hazardous and Radioactive Wastes and to Control the Transboundary Movement and Management of Hazardous Wastes within the South Pacific Region, Waigani, 1995 Convention to Combat Desertification (CCD), Paris, 1994 Conventions within the UNEP Regional Seas Programme Directive on the legal protection of biotechnological inventions Energy Community (Energy Community South East Europe Treaty) (ECSEE) Espoo Convention Convention on Environmental Impact Assessment in a Transboundary Context, Espoo, 1991 European Agreement Concerning the International Carriage of Dangerous Goods by Inland Waterways (AND), Geneva, 2000 European Agreement Concerning the International Carriage of Dangerous Goods by Road (ADR), Geneva, 1957 FAO International Code of Conduct on the Distribution and Use of Pesticides, Rome, 1985 FAO International Undertaking on Plant Genetic Resources, Rome, 1983 Framework Convention for the Protection of the Marine Environment of the Caspian Sea Framework Convention on Climate Change (UNFCCC), New York, 1992 Geneva Protocol (Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous or other Gases, and of Bacteriological Methods of Warfare) International Convention for the Conservation of Atlantic Tunas (ICCAT), 
Rio de Janeiro, 1966 International Convention for the Prevention of Pollution from Ships International Convention for the Prevention of Pollution of the Sea by Oil, London, 1954, 1962, 1969 International Convention for the Regulation of Whaling (ICRW), Washington, 1946 International Treaty on Plant Genetic Resources for Food and Agriculture International Tropical Timber Agreement (expired), 1983 International Tropical Timber Agreement (ITTA), Geneva, 1994 Kuwait Regional Convention for Co-operation on the Protection of the Marine Environment from Pollution, Kuwait, 1978 Kyoto Protocol - greenhouse gas emission reductions Migratory Bird Treaty Act of 1918 Minamata Convention on Mercury, 2013 Montreal Protocol on Substances that Deplete the Ozone Layer, Montreal, 1989 Nagoya Protocol on Access and benefit sharing 2010, Japan North American Agreement on Environmental Cooperation Protocol on Environmental Protection to the Antarctic Treaty Putrajaya Declaration of Regional Cooperation for the Sustainable Development of the Seas of East Asia, Malaysia, 2003 Ramsar Convention Convention on Wetlands of International Importance, especially as Waterfowl Habitat, Ramsar, 1971 Regional Convention for the Conservation of the Red Sea and the Gulf of Aden Environment, Jeddah, 1982 Rotterdam Convention on the Prior Informed Consent Procedure for Certain Hazardous Chemicals and Pesticides in International Trade, Rotterdam, 1998 Stockholm Convention Stockholm Convention on Persistent Organic Pollutants Stockholm, 2001 Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space, and Under Water United Nations Convention on the Law of the Sea United Nations Convention to Combat Desertification United Nations Framework Convention on Climate Change Vienna Convention for the Protection of the Ozone Layer, Vienna, 1985, including the Montreal Protocol on Substances that Deplete the Ozone Layer, Montreal, 1987 Vienna Convention on Civil Liability for Nuclear Damage, Vienna, 1963 Western Regional Climate Action Initiative Working Environment (Air Pollution, Noise and Vibration) Convention, 1977 Topic order General Aarhus Convention Convention on Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters, Aarhus, 1998 Espoo Convention Convention on Environmental Impact Assessment in a Transboundary Context, Espoo, 1991 Atmosphere Convention on Long-Range Transboundary Air Pollution (LRTAP), Geneva, 1979 Environmental Protection: Aircraft Engine Emissions, Annex 16, vol. 
2 to the Chicago Convention on International Civil Aviation, Montreal, 1981 Framework Convention on Climate Change (UNFCCC), New York, 1992, including the Kyoto Protocol, 1997, and the Paris Agreement, 2015 Georgia Basin-Puget Sound International Airshed Strategy, Vancouver, Statement of Intent, 2002 U.S.-Canada Air Quality Agreement (bilateral U.S.-Canadian agreement on acid rain), 1986 Vienna Convention for the Protection of the Ozone Layer, Vienna, 1985, including the Montreal Protocol on Substances that Deplete the Ozone Layer, Montreal, 1987 Freshwater resources Convention on the Protection and Use of Transboundary Watercourses and International Lakes (ECE Water Convention), Helsinki, 1992 Hazardous substances Convention on Civil Liability for Damage Caused during Carriage of Dangerous Goods by Road, Rail, and Inland Navigation Vessels (CRTD), Geneva, 1989 Convention on the Control of Transboundary Movements of Hazardous Wastes and their Disposal, Basel, 1989 Convention on the Ban of the Import into Africa and the Control of Transboundary Movements and Management of Hazardous Wastes Within Africa, Bamako, 1991 Convention on the Prior Informed Consent Procedure for Certain Hazardous Chemicals and Pesticides in International Trade, Rotterdam, 1998 Convention on the Transboundary Effects of Industrial Accidents, Helsinki, 1992 European Agreement Concerning the International Carriage of Dangerous Goods by Inland Waterways (AND), Geneva, 2000 European Agreement Concerning the International Carriage of Dangerous Goods by Road (ADR), Geneva, 1957 FAO International Code of Conduct on the Distribution and Use of Pesticides, Rome, 1985 Minamata Convention on Mercury, Minamata, 2013 Stockholm Convention Stockholm Convention on Persistent Organic Pollutants Stockholm, 2001 Convention to Ban the Importation into Forum Island Countries of Hazardous and Radioactive Wastes and to Control the Transboundary Movement and Management of Hazardous Wastes within the South Pacific Region, Waigani, 1995 Marine environment – global conventions Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter (London Convention), London, 1972 International Convention for the Prevention of Pollution from Ships, 1973, as modified by the Protocol of 1978 relating thereto (MARPOL 73/78), London, 1973 and 1978 International Convention for the Prevention of Pollution
of Bacteriological [Biological] and Toxin Weapons and on their Destruction) (BWC) Bonn Agreement Carpathian Convention Framework Convention on the Protection and Sustainable Development of the Carpathians Cartagena Protocol on Biosafety 2000 Chemical Weapons Convention China Australia Migratory Bird Agreement Comprehensive Nuclear-Test-Ban Treaty (CTBT), 1996 Convention for the Conservation of Antarctic Marine Living Resources (CCAMLR), Canberra, 1980 Agreed Measures for the Conservation of Antarctic Fauna and Flora Convention for the Conservation of Antarctic Marine Living Resources Convention for the Conservation of Antarctic Seals Protocol on Environmental Protection to the Antarctic Treaty Convention for Co-operation in the Protection and Development of the Marine and Coastal Environment of the West and Central African Region, Abidjan, 198 Coastal Marine and Island Biodiversity Conservation Project Convention for the Protection and Development of the Marine Environment of the Wider Caribbean Region, Cartagena de Indias, 1983 Convention of the Protection, Management and Development of the Marine and Coastal Environment of the Eastern African Region, Nairobi, 1985 Convention for the Protection of the Marine Environment and Coastal Area of the South-east Pacific, Lima, 1981 Convention for the Protection of the Marine Environment of the North-east Atlantic (OSPAR Convention), Paris, 1992 Convention for the Protection of the Natural Resources and Environment of the South Pacific Region, Nouméa, 1986 Convention on Assistance in the Case of a Nuclear Accident or Radiological Emergency (Assistance Convention), Vienna, 1986 Convention on the Ban of the Import into Africa and the Control of Transboundary Movements and Management of Hazardous Wastes within Africa, Bamako, 1991 Convention on Biological Diversity (CBD), Nairobi, 1992 Convention on Certain Conventional Weapons Convention on Civil Liability for Damage Caused during Carriage of Dangerous Goods by Road, Rail, and Inland Navigation Vessels (CRTD), Geneva, 1989 Convention on Cluster Munitions Convention on the Conservation of European Wildlife and Natural Habitats Convention on the Conservation of Migratory Species of Wild Animals (CMS), Bonn, 1979 Convention on Early Notification of a Nuclear Accident (Notification Convention), Vienna, 1986 Convention on Fishing and Conservation of Living Resources of the High Seas Convention on the International Trade in Endangered Species of Wild Flora and Fauna (CITES), Washington, DC, 1973 Convention on Long-Range Transboundary Air Pollution Convention on Nature Protection and Wild Life Preservation in the Western Hemisphere, Washington, DC, 1940 Convention on Nuclear Safety, Vienna, 1994 EMEP Protocol Heavy Metals Protocol Multi-effect Protocol (Gothenburg protocol) Nitrogen Oxide Protocol POP Air Pollution Protocol Sulphur Emissions Reduction Protocols 1985 and 1994 Volatile Organic Compounds Protocol Convention on the Prevention of Marine Pollution by Dumping Wastes and Other Matter Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques Convention on the Protection and Use of Transboundary Watercourses and International Lakes (ECE Water Convention), Helsinki, 1992 Convention on the Protection of the Black Sea against Pollution, Bucharest, 1992 Convention on the Protection of the Marine Environment of the Baltic Sea Area 1992 Helsinki Convention, Helsinki, 1992 Convention on the Transboundary Effects of Industrial Accidents, Helsinki, 1992 
Convention on Wetlands of International Importance Especially As Waterfowl Habitat (notably not a Multilateral Environmental Agreement)
Convention to Ban the Importation into Forum Island Countries of Hazardous and Radioactive Wastes and to Control the Transboundary Movement and Management of Hazardous Wastes within the South Pacific Region, Waigani, 1995
Convention to Combat Desertification (CCD), Paris, 1994
Conventions within the UNEP Regional Seas Programme
Directive on the legal protection of biotechnological inventions
Energy Community (Energy Community South East Europe Treaty) (ECSEE)
Espoo Convention (Convention on Environmental Impact Assessment in a Transboundary Context), Espoo, 1991
European Agreement Concerning the International Carriage of Dangerous Goods by Inland Waterways (ADN), Geneva, 2000
European Agreement Concerning the International Carriage of Dangerous Goods by Road (ADR), Geneva, 1957
FAO International Code of Conduct on the Distribution and Use of Pesticides, Rome, 1985
FAO International Undertaking on Plant Genetic Resources, Rome, 1983
Framework Convention for the Protection of the Marine Environment of the Caspian Sea
Framework Convention on Climate Change (UNFCCC), New York, 1992
Geneva Protocol (Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous or other Gases, and of Bacteriological Methods of Warfare)
International Convention for the Conservation of Atlantic Tunas (ICCAT), Rio de Janeiro, 1966
International Convention for the Prevention of Pollution from Ships
International Convention for the Prevention of Pollution of the Sea by Oil, London, 1954, 1962, 1969
International Convention for the Regulation of Whaling (ICRW), Washington, 1946
International Treaty on Plant Genetic Resources for Food and Agriculture
International Tropical Timber Agreement (expired), 1983
International Tropical Timber Agreement (ITTA), Geneva, 1994
Kuwait Regional Convention for Co-operation on the Protection of the Marine Environment from Pollution, Kuwait, 1978
Kyoto Protocol - greenhouse gas emission reductions
Migratory Bird Treaty Act of 1918
Minamata Convention on Mercury, 2013
Montreal Protocol on Substances that Deplete the Ozone Layer, Montreal, 1987
Nagoya Protocol on Access and Benefit Sharing, Japan, 2010
North American Agreement on Environmental Cooperation
Protocol on Environmental Protection to the Antarctic Treaty
Putrajaya Declaration of Regional Cooperation for the Sustainable Development of the Seas of East Asia, Malaysia, 2003
Ramsar Convention (Convention on Wetlands of International Importance, especially as Waterfowl Habitat), Ramsar, 1971
Regional Convention for the Conservation of the Red Sea and the Gulf of Aden Environment, Jeddah, 1982
Rotterdam Convention on the Prior Informed Consent Procedure for Certain Hazardous Chemicals and Pesticides in International Trade, Rotterdam, 1998
Stockholm Convention on Persistent Organic Pollutants, Stockholm, 2001
Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space, and Under Water
United Nations Convention on the Law of the Sea
United Nations Convention to Combat Desertification
United Nations Framework Convention on Climate Change
Vienna Convention for the Protection of the Ozone Layer, Vienna, 1985, including the Montreal Protocol on Substances that Deplete the Ozone Layer
Phoenician letter He was [h], the earliest Greek sound value of Ε was determined by the vowel occurring in the Phoenician letter name, which made it a natural choice for being reinterpreted from a consonant symbol to a vowel symbol denoting an [e] sound. Besides its classical Greek sound value, the short /e/ phoneme, it could initially also be used for other [e]-like sounds. For instance, in early Attic before c. 500 BC, it was used both for the long open /ɛː/ and for the long close /eː/. In the former role, it was later replaced in the classic Greek alphabet by Eta (Η), which was taken over from eastern Ionic alphabets, while in the latter role it was replaced by the digraph spelling ΕΙ. Epichoric alphabets Some dialects used yet other ways of distinguishing between various e-like sounds. In Corinth, the normal function of Ε, to denote /e/ and /ɛː/, was taken by a glyph resembling a pointed B, while Ε was used only for the long close /eː/. The letter Beta, in turn, took a deviant shape. In Sicyon, a variant glyph resembling an X was used in the same function as the Corinthian letter. In Thespiai (Boeotia), a special letter form consisting of a vertical stem with a single rightward-pointing horizontal bar was used for what was probably a raised variant of /e/ in pre-vocalic environments. This tack glyph was used elsewhere also as a form of "Heta", i.e. for the sound /h/. Glyph variants After the establishment of the canonical classical Ionian (Eucleidean) Greek alphabet, new glyph variants for Ε were introduced through handwriting. In the uncial script (used for literary papyrus manuscripts in late antiquity and then in early medieval vellum codices), the "lunate" shape (ϵ) became predominant. In cursive handwriting, a large number of shorthand glyphs came to be used, where the cross-bar and the curved stroke were linked in various ways. Some of them resembled a modern lowercase Latin "e", some a "6" with a connecting stroke to the next letter starting from the middle, and some a combination of two small "c"-like curves. Several of these shapes were later taken over into minuscule book hand. Of the various minuscule letter shapes, the inverted-3 form became the basis for lower-case Epsilon
in Greek typography during the modern era. Uses International Phonetic Alphabet Despite its pronunciation as mid, in the International Phonetic Alphabet, the Latin epsilon represents the open-mid front unrounded vowel, as in the English word pet. Symbol The uppercase Epsilon is not commonly used outside of the Greek language because of its similarity to the Latin letter E. However, it is commonly used in structural mechanics with Young's Modulus equations for calculating tensile, compressive and areal strain. The Greek lowercase epsilon (ε), the lunate epsilon symbol (ϵ), or the Latin lowercase epsilon (ɛ, see above) is used in a variety of places:
In engineering mechanics, strain calculations: ϵ = increase in length / original length. Usually this relates to extensometer testing of metallic materials.
In mathematics (particularly calculus), an arbitrarily small positive quantity is commonly denoted ε; see the (ε, δ)-definition of limit.
Hilbert introduced epsilon terms as an extension to first-order logic; see epsilon calculus.
It is used to represent the Levi-Civita symbol.
It is used to represent dual numbers: a + bε, with ε^2 = 0 and ε ≠ 0.
It is sometimes used to denote the Heaviside step function.
In set theory, the epsilon numbers are ordinal numbers that satisfy the fixed point ε = ω^ε. The first epsilon number, ε0, is the limit ordinal of the set {ω, ω^ω, ω^(ω^ω), ...}.
In numerical analysis and statistics it is used as the error term.
In group theory it is sometimes used for the identity element when e is in use as a variable name.
In computer science, it often represents the empty string, though different writers use a variety of other symbols for the empty string as well, usually the lower-case Greek letter lambda (λ).
In computer science, the machine epsilon indicates the upper bound on the relative error due to rounding in floating point arithmetic.
In physics, it indicates the permittivity of a medium; with the subscript 0 (ε0) it is the permittivity of free space. It can also indicate the strain of a material (a ratio of extensions).
In automata theory, it shows a transition that involves no shifting of an input symbol.
In astronomy, it stands for the fifth-brightest star in a constellation (see Bayer designation). Epsilon is also the name of the most distant and most visible ring of Uranus.
In planetary science, ε denotes the axial tilt.
In chemistry, it represents the molar extinction coefficient of a chromophore.
In economics, ε refers to elasticity.
In statistics, it is used to refer to error terms; it can also refer to the degree of sphericity in repeated-measures ANOVAs.
In agronomy, it is used to represent the "photosynthetic efficiency" of a particular plant or crop.
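Two of the uses listed above, machine epsilon and dual numbers, lend themselves to a short illustration. The following Python sketch is not part of the original article; the names machine_epsilon and Dual are ad hoc. It computes the double-precision machine epsilon by repeated halving and uses a minimal dual-number type, in which ε^2 = 0, to differentiate f(x) = x^2 + x at x = 3.

```python
# Illustrative sketch only: machine epsilon and dual numbers a + b*eps with eps**2 = 0.
import sys

def machine_epsilon() -> float:
    """Smallest power of two eps such that 1.0 + eps != 1.0 in float arithmetic."""
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    return eps

class Dual:
    """Dual number a + b*eps; the eps part carries a derivative (forward-mode AD)."""
    def __init__(self, a: float, b: float = 0.0):
        self.a, self.b = a, b
    def __add__(self, other: "Dual") -> "Dual":
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other: "Dual") -> "Dual":
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + a2*b1)*eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

if __name__ == "__main__":
    print(machine_epsilon(), sys.float_info.epsilon)  # both about 2.22e-16 for 64-bit floats
    x = Dual(3.0, 1.0)    # seed the derivative dx/dx = 1
    y = x * x + x         # f(x) = x**2 + x
    print(y.a, y.b)       # value 12.0, derivative f'(3) = 7.0
```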
borrowed in the 8th century BC by the Etruscan and other Old Italic alphabets, which were based on the Euboean form of the Greek alphabet. This also gave rise to the Latin alphabet with its letter H. Other regional variants of the Greek alphabet (epichoric alphabets), in dialects that still preserved the sound /h/, employed various glyph shapes for consonantal heta side by side with the new vocalic eta for some time. In the southern Italian colonies of Heracleia and Tarentum, the letter shape was reduced to a "half-heta" lacking the right vertical stem (Ͱ). From this sign later developed the sign for rough breathing or spiritus asper, which brought back the marking of the /h/ sound into the standardized post-classical (polytonic) orthography. Dionysius Thrax in the second century BC records that the letter name was still pronounced heta (ἥτα), correctly explaining this irregularity by stating "in the old days the letter Η served to stand for the rough breathing, as it still does with the Romans." Long e In the East Ionic dialect, however, the /h/ sound disappeared by the sixth century BC, and the letter was re-used initially to represent a development of the long vowel /aː/, which later merged in East Ionic with /ɛː/ instead. In 403 BC, Athens took over the Ionian spelling system and with it the vocalic use of H (even though it still also had the /h/ sound itself at that time). This later became the standard orthography in all of Greece. Itacism During the time of post-classical Koiné Greek, the sound represented by eta was raised and merged with several other formerly distinct vowels, a phenomenon called iotacism or itacism, after the new pronunciation of the letter name as ita instead of eta. Itacism has continued into Modern Greek, where the letter name is pronounced ita and represents the sound /i/ (a close front unrounded vowel). It shares this function with several other letters (ι, υ) and digraphs (ει, οι), which are all pronounced alike. This phenomenon at large is called iotacism. Cyrillic script Eta was also borrowed with the sound value of /i/ into the Cyrillic script, where it gave rise to the Cyrillic letter И. Uses Letter In Modern Greek, due to iotacism, the letter (pronounced ita) represents a close front unrounded vowel, /i/. In Classical Greek, it represented a long open-mid front unrounded vowel, /ɛː/. Symbol Upper case The uppercase letter Η is used as a
and Siberian peoples. In the 21st century, usage in North America has declined. Linguistic, ethnic, and cultural differences exist between Yupik and Inuit. In Canada and Greenland, and to a certain extent in Alaska, the term Eskimo is predominantly seen as offensive and has been widely replaced by the term Inuit or terms specific to a particular group or community. This has resulted in a trend whereby some Canadians and Americans believe that they should use Inuit even for Yupik, who are a non-Inuit people. The Inuit of Greenland generally refer to themselves as Greenlanders ("Kalaallit" or "Grønlændere") and speak the Greenlandic language and Danish. The Inuit of Greenland belong to three groups: the Kalaallit of west Greenland, who speak Kalaallisut; the Tunumiit of Tunu (east Greenland), who speak Tunumiit oraasiat ("East Greenlandic"); and the Inughuit of north Greenland, who speak Inuktun. The word "Eskimo" is a racially charged term in Canada. In Canada's Central Arctic, Inuinnaq is the preferred term, and in the eastern Canadian Arctic, Inuit. The language is often called Inuktitut, though other local designations are also used. Section 25 of the Canadian Charter of Rights and Freedoms and section 35 of the Canadian Constitution Act of 1982 recognized the Inuit as a distinctive group of Aboriginal peoples in Canada. Although Inuit can be applied to all of the Eskimo peoples in Canada and Greenland, that is not true in Alaska and Siberia. In Alaska, the term Eskimo is still used, though its prevalence is decreasing, because it includes both Iñupiat (singular: Iñupiaq), who are Inuit, and Yupik, who are not. Alaskans also use the term Alaska Native, which is inclusive of (and under U.S. and Alaskan law, as well as the linguistic and cultural legacy of Alaska, refers to) all Indigenous peoples of Alaska, including not only the Iñupiat (Alaskan Inuit) and the Yupik, but also groups such as the Aleut, who share a recent common ancestor, as well as the largely unrelated indigenous peoples of the Pacific Northwest Coast and the Alaskan Athabaskans, such as the Eyak people. The term Alaska Native has important legal usage in Alaska and the rest of the United States as a result of the Alaska Native Claims Settlement Act of 1971. It does not apply to Inuit or Yupik originating outside the state. As a result, the term Eskimo is still in use in Alaska. Alternative terms, such as Inuit-Yupik, have been proposed, but none has gained widespread acceptance. Recent (early 21st century) population estimates registered more than 135,000 individuals of Eskimo descent, with approximately 85,000 living in North America, 50,000 in Greenland, and the rest residing in Siberia. Inuit Circumpolar Council In 1977, the Inuit Circumpolar Conference (ICC) meeting in Utqiaġvik, Alaska, officially adopted Inuit as a designation for all circumpolar Native peoples, regardless of their local view on an appropriate term. They voted to replace the word Eskimo with Inuit. Even at that time, such a designation was not accepted by all. As a result, Canadian government usage has replaced the term Eskimo with Inuit (Inuk in the singular). The ICC charter defines Inuit as including "the Inupiat, Yupik (Alaska), Inuit, Inuvialuit (Canada), Kalaallit (Greenland) and Yupik (Russia)". Despite the ICC's 1977 decision to adopt the term Inuit, this was never accepted by the Yupik and others, who take pride in the term Eskimo.
In 2010, the ICC passed a resolution in which they implored scientists to use Inuit and Paleo-Inuit instead of Eskimo or Paleo-Eskimo. Academic response In a 2015 commentary in the journal Arctic, Canadian archaeologist Max Friesen argued fellow Arctic archaeologists should follow the ICC and use Paleo-Inuit instead of Paleo-Eskimo. In 2016, Lisa Hodgetts and Arctic editor Patricia Wells wrote: "In the Canadian context, continued use of any term that incorporates Eskimo is potentially harmful to the relationships between archaeologists and the Inuit and Inuvialuit communities who are our hosts and increasingly our research partners." Hodgetts and Wells suggested using more specific terms when possible (e.g., Dorset and Groswater) and agreed with Friesen in using the Inuit tradition to replace Neo-Eskimo, although they noted that a replacement for Palaeoeskimo was still an open question and discussed Paleo-Inuit, Arctic Small Tool Tradition, and pre-Inuit, as well as Inuktitut loanwords like Tuniit and Sivullirmiut, as possibilities. In 2020, Katelyn Braymer-Hayes and colleagues argued in the Journal of Anthropological Archaeology that there is a "clear need" to replace the terms Neo-Eskimo and Paleo-Eskimo, citing the ICC resolution, but that finding a consensus within the Alaskan context in particular is difficult, since Alaska Natives do not use the word Inuit to describe themselves, nor is the term legally applicable only to Iñupiat and Yupik in Alaska; as such, terms used in Canada like Paleo Inuit and Ancestral Inuit would not be acceptable. American linguist Lenore Grenoble has also explicitly deferred to the ICC resolution and used Inuit–Yupik instead of Eskimo with regards to the language branch. History Genetic evidence suggests that the Americas were populated from northeastern Asia in multiple waves. While the great majority of indigenous American peoples can be traced to a single early migration of Paleo-Indians, the Na-Dené, Inuit and Indigenous Alaskan populations exhibit admixture from distinct populations that migrated into America at a later date and are closely linked to the peoples of far northeastern Asia (e.g. Chukchi), and only more remotely to the majority indigenous American type. For modern Eskimo–Aleut speakers, this later ancestral component makes up almost half of their genomes. The ancient Paleo-Eskimo population was genetically distinct from the modern circumpolar populations, but eventually derives from the same far northeastern Asian cluster. It is understood that some or all of these ancient people migrated across the Chukchi Sea to North America during the pre-Neolithic era, somewhere around 5,000 to 10,000 years ago. It is believed that ancestors of the Aleut people inhabited the Aleutian Chain 10,000 years ago. The earliest positively identified Paleo-Eskimo cultures (Early Paleo-Eskimo) date to 5,000 years ago. Several earlier indigenous peoples existed in the northern circumpolar regions of eastern Siberia, Alaska, and Canada (although probably not in Greenland). The Paleo-Eskimo peoples appear to have developed in Alaska from people related to the Arctic small tool tradition in eastern Asia, whose ancestors had probably migrated to Alaska at least 3,000 to 5,000 years earlier. The Yupik languages and cultures in Alaska evolved in place, beginning with the original pre-Dorset Indigenous culture developed in Alaska. At least 4,000 years ago, the Unangan culture of the Aleut became distinct. It is not generally considered an Eskimo culture.
However, there is some possibility of an Aleutian origin of the Dorset people, who in turn are a likely ancestor of Inuit and Yupik people today. Approximately 1,500 to 2,000 years ago, apparently in northwestern Alaska, two other distinct variations appeared. Inuit language became distinct and, over a period of several centuries, its speakers migrated across northern Alaska, through Canada, and into Greenland. The distinct culture of the Thule people (drawing strongly from the Birnirk culture) developed in northwestern Alaska. It very quickly spread over the entire area occupied by Eskimo peoples, though it was not necessarily adopted by all of them. Languages Language family The Eskimo–Aleut family of languages includes two cognate branches: the Aleut (Unangan) branch and the Eskimo branch. The number of cases varies, with Aleut languages having a greatly reduced case system compared to those of the Eskimo subfamily. Eskimo–Aleut languages possess voiceless plosives at the bilabial, coronal, velar and uvular positions in all languages except Aleut, which has lost the bilabial stops but retained the nasal. In the Eskimo subfamily a voiceless alveolar lateral fricative is also present. The Eskimo sub-family consists of the Inuit language and Yupik language sub-groups. The Sirenikski language, which is virtually extinct, is sometimes regarded as a third branch of the Eskimo language family. Other sources regard it as a group belonging to the Yupik branch. Inuit languages comprise a dialect continuum, or dialect chain, that stretches from Unalakleet and Norton Sound in Alaska, across northern Alaska and Canada, and east to Greenland. Changes from western (Iñupiaq) to eastern dialects are marked by the dropping of vestigial Yupik-related features, increasing consonant assimilation (e.g., kumlu, meaning "thumb", changes to kuvlu, changes to kublu, changes to kulluk, changes to kulluq,) and increased consonant lengthening, and lexical change. Thus, speakers of two adjacent Inuit dialects would usually be able to understand one another, but speakers from dialects distant from each other on the dialect continuum would have difficulty understanding one another. Seward Peninsula dialects in western Alaska, where much of the Iñupiat culture has been in place for perhaps less than 500 years, are greatly affected by phonological influence from the Yupik languages. Eastern Greenlandic, at the opposite end of the Inuit range, has had significant word replacement due to a unique form of ritual name avoidance. Ethnographically, Inuit of Greenland belong to three groups: the Kalaallit of west Greenland, who speak Kalaallisut; the Tunumiit of Tunu (east Greenland), who speak Tunumiit oraasiat ("East Greenlandic"), and the Inughuit of north Greenland, who speak Inuktun. The four Yupik languages, by contrast, including Alutiiq (Sugpiaq), Central Alaskan Yup'ik, Naukan (Naukanski), and Siberian Yupik, are distinct languages with phonological, morphological, and lexical differences. They demonstrate limited mutual intelligibility. Additionally, both Alutiiq and Central Yup'ik have considerable dialect diversity. The northernmost Yupik languages – Siberian Yupik and Naukan Yupik – are linguistically only slightly closer to Inuit than is Alutiiq, which is the southernmost of the Yupik languages. Although the grammatical structures of Yupik and Inuit languages are similar, they have pronounced differences phonologically. 
Differences of vocabulary between Inuit and any one of the Yupik languages are greater than between any two Yupik languages. Even the dialectal differences within Alutiiq and Central Alaskan Yup'ik sometimes are relatively great for locations that are relatively close geographically. Despite the relatively small population of Naukan speakers, documentation of the language dates back to 1732. While Naukan is only spoken in Siberia, the language acts as an intermediate between two Alaskan languages: Siberian Yupik Eskimo and Central Yup'ik Eskimo. The Sirenikski language is sometimes regarded as a third branch of the Eskimo language family, but other sources regard it as a group belonging to the Yupik branch. An overview of the Eskimo–Aleut language family is given below:
Aleut
  Aleut language
    Western-Central dialects: Atkan, Attuan, Unangan, Bering (60–80 speakers)
    Eastern dialect: Unalaskan, Pribilof (400 speakers)
Eskimo (Yup'ik, Yuit, and Inuit)
  Yupik
    Central Alaskan Yup'ik (10,000 speakers)
    Alutiiq or Pacific Gulf Yup'ik (400 speakers)
    Central Siberian Yupik or Yuit (Chaplino and St Lawrence Island, 1,400 speakers)
    Naukan (700 speakers)
  Inuit or Inupik (75,000 speakers)
    Iñupiaq (northern Alaska, 3,500 speakers)
    Inuvialuktun (western Canada; together with Siglitun, Natsilingmiutut, Inuinnaqtun and Uummarmiutun, 765 speakers)
    Inuktitut (eastern Canada; together with Inuktun and Inuinnaqtun, 30,000 speakers)
    Kalaallisut (Greenlandic) (Greenland, 47,000 speakers)
    Inuktun (Avanersuarmiutut, Thule dialect or Polar Eskimo, approximately 1,000 speakers)
    Tunumiit oraasiat (East Greenlandic, also known as Tunumiisut, 3,500 speakers)
  Sirenik Eskimo language (Sirenikskiy) (extinct)
American linguist Lenore Grenoble has explicitly deferred to this resolution and used Inuit–Yupik instead of Eskimo with regards to the language branch. Words for snow There has been a long-running linguistic debate about whether or not the speakers of the Eskimo-Aleut language group have an unusually large number of words for
snow. The general modern consensus is that multiple Eskimo languages do have, or have had in simultaneous usage, fifty or more words for snow. Diet Inuit The Inuit inhabit the Arctic and northern Bering Sea coasts of Alaska in the United States, the Arctic coasts of the Northwest Territories, Nunavut, Quebec, and Labrador in Canada, and Greenland (associated with Denmark). Until fairly recently, there was remarkable homogeneity in the culture throughout this area, which traditionally relied on fish, marine mammals, and land animals for food, heat, light, clothing, and tools. For food they relied primarily on seals, whales, whale blubber, walrus, and fish, all of which they hunted using harpoons on the ice. Clothing consisted of robes made of wolfskin and reindeer skin to cope with the low temperatures. They maintain a unique Inuit culture. Greenland's Inuit Greenlandic Inuit make up 90% of Greenland's population. They belong to three major groups: the Kalaallit of west Greenland, who speak Kalaallisut; the Tunumiit of east Greenland, who speak Tunumiisut; and the Inughuit of north Greenland, who speak Inuktun or Polar Eskimo. Canadian Inuit Canadian Inuit live primarily in Inuit Nunangat (lit. "lands, waters and ices of the [Inuit] people"), their traditional homeland, although some live in southern parts of Canada. Inuit Nunangat ranges from the Yukon–Alaska border in the west across the Arctic to northern Labrador. The Inuvialuit live in the Inuvialuit Settlement Region, the northern part of Yukon and the Northwest Territories, which stretches to the Amundsen Gulf and the Nunavut border and includes the western Canadian Arctic Islands. The land was demarcated in 1984 by the Inuvialuit Final Agreement. The majority of Inuit live in Nunavut (a territory of Canada), Nunavik (the northern part of Quebec) and in Nunatsiavut (the Inuit settlement region in Labrador). Alaska's Iñupiat The Iñupiat are the Inuit of Alaska's Northwest Arctic and North Slope boroughs and the Bering Straits region, including the Seward Peninsula. Utqiaġvik, the northernmost city in the United States, is above the Arctic Circle and in the Iñupiat region. Their language is known as Iñupiaq.
Their current communities include 34 villages across Iñupiat Nunaŋat (Iñupiaq lands) including seven Alaskan villages in the North Slope Borough, affiliated with the Arctic Slope Regional Corporation; eleven villages in Northwest Arctic Borough; and sixteen villages affiliated with the Bering Straits Regional Corporation. Yupik The Yupik are indigenous or aboriginal peoples who live along the coast of western Alaska, especially on the Yukon-Kuskokwim delta and along the Kuskokwim River (Central Alaskan Yup'ik); in southern Alaska (the Alutiiq); and along the eastern coast of Chukotka in the Russian Far East and St. Lawrence Island in western Alaska (the Siberian Yupik). The Yupik economy has traditionally been strongly dominated by the harvest of marine mammals, especially seals, walrus, and whales. Alutiiq The Alutiiq language is relatively close to that spoken by the Yupik in the Bethel, Alaska area. But, it is considered a distinct language with two major dialects: the Koniag dialect, spoken on the Alaska Peninsula and on Kodiak Island, and the Chugach dialect, spoken on the southern Kenai Peninsula and in Prince William Sound. Residents of Nanwalek, located on southern part of the Kenai Peninsula near Seldovia, speak what they call Sugpiaq. They are able to understand those who speak Yupik in Bethel. With a population of approximately 3,000, and the number of speakers in the hundreds, Alutiiq communities are working to revitalize their language. Central Alaskan Yup'ik Yup'ik, with an apostrophe, denotes the speakers of the Central Alaskan Yup'ik language, who live in western Alaska and southwestern Alaska from southern Norton Sound to the north side of Bristol Bay, on the Yukon–Kuskokwim Delta, and on Nelson Island. The use of the apostrophe in the name Yup'ik is a written convention to denote the long pronunciation of the p sound; but it is spoken the same in other Yupik languages. Of all the Alaska Native languages, Central Alaskan Yup'ik has the most speakers, with about 10,000 of a total Yup'ik population of 21,000 still speaking the language. The five dialects of Central Alaskan Yup'ik include General Central Yup'ik, and the Egegik, Norton Sound, Hooper Bay-Chevak, and Nunivak dialects. In the latter two dialects, both the
in which the epiphenomenon has no causal impact at all, and Huxley's "steam whistle" epiphenomenalism, in which effects exist but are not functionally relevant. Arguments for A large body of neurophysiological data seems to support epiphenomenalism . Some of the oldest such data is the Bereitschaftspotential or "readiness potential" in which electrical activity related to voluntary actions can be recorded up to two seconds before the subject is aware of making a decision to perform the action. More recently Benjamin Libet et al. (1979) have shown that it can take 0.5 seconds before a stimulus becomes part of conscious experience even though subjects can respond to the stimulus in reaction time tests within 200 milliseconds. The methods and conclusions of this experiment have received much criticism (e.g., see the many critical commentaries in Libet's (1985) target article), including recently by neuroscientists such as Peter Tse, who claim to show that the readiness potential has nothing to do with consciousness at all. Recent research on the Event Related Potential also shows that conscious experience does not occur until the late phase of the potential (P3 or later) that occurs 300 milliseconds or more after the event. In Bregman's auditory continuity illusion, where a pure tone is followed by broadband noise and the noise is followed by the same pure tone it seems as if the tone occurs throughout the period of noise. This also suggests a delay for processing data before conscious experience occurs. Popular science author Tor Nørretranders has called the delay the "user illusion", implying that we only have the illusion of conscious control, most actions being controlled automatically by non-conscious parts of the brain with the conscious mind relegated to the role of spectator. The scientific data seem to support the idea that conscious experience is created by non-conscious processes in the brain (i.e., there is subliminal processing that becomes conscious experience). These results have been interpreted to suggest that people are capable of action before conscious experience of the decision to act occurs. Some argue that this supports epiphenomenalism, since it shows that the feeling of making a decision to act is actually an epiphenomenon; the action happens before the decision, so the decision did not cause the action to occur. Arguments against The most powerful argument against epiphenomenalism is that it is self-contradictory: if we have knowledge about epiphenomenalism, then our brains know about the existence of the mind, but if epiphenomenalism were correct, then our brains should not have any knowledge about the mind, because the mind does not affect anything physical. However, some philosophers do not accept this as a rigorous refutation. For example, Victor Argonov states that epiphenomenalism is a questionable, but experimentally falsifiable theory. He argues that the personal mind is not the only source of knowledge about the existence of mind in the world. A creature (even a zombie) could have knowledge about mind and the mind-body problem by virtue of some innate knowledge. The information about mind (and its problematic properties such as qualia) could have been, in principle, implicitly "written" in the material world since its creation. Epiphenomenalists can say that God created immaterial mind and a detailed "program" of material human behavior that makes it possible to speak about the mind–body problem. 
That version of epiphenomenalism seems highly exotic, but it cannot be excluded from consideration by pure theory. However, Argonov suggests that experiments could refute epiphenomenalism. In particular, epiphenomenalism could be refuted if neural correlates of consciousness can be found in the human brain, and it is proven that human speech about consciousness is caused by them. Some philosophers, such as Dennett, reject both epiphenomenalism and the existence of qualia with the same charge that Gilbert Ryle leveled against a Cartesian "ghost in the machine", that they too are category mistakes. A quale or conscious experience would not belong to the category of objects of reference on this account, but rather to the category of ways of doing things. Functionalists assert that mental states are well described by their overall role, their activity in relation to the organism as a whole. "This doctrine is rooted in Aristotle's conception of the soul, and has antecedents in Hobbes's conception of the mind as a 'calculating machine', but it has become fully articulated (and popularly endorsed) only in the last third of the 20th century." In so far as it mediates stimulus and response, a mental function is analogous to a program that processes input/output in automata theory. In principle, multiple realisability would guarantee platform dependencies can be avoided, whether in terms of hardware and operating system or, ex hypothesi, biology and philosophy. Because a high-level language is a practical requirement for developing the most complex programs, functionalism implies that a non-reductive physicalism would offer a similar advantage over a strictly eliminative materialism. Eliminative materialists believe "folk psychology" is so unscientific that, ultimately, it will be better to eliminate primitive concepts such as mind, desire and belief, in favor of a future neuro-scientific account. A more moderate position such as J. L. Mackie's error theory suggests that false beliefs should be stripped away from a mental concept without eliminating the concept itself, the legitimate core meaning being left intact. Benjamin Libet's results are quoted in favor of epiphenomenalism, but he believes subjects still have a "conscious veto", since the readiness potential does not invariably lead to an action. In Freedom Evolves, Daniel Dennett argues that a no-free-will conclusion is based on dubious assumptions about the location of consciousness, as well as questioning the accuracy and interpretation of Libet's results. Similar criticism of Libet-style research has been made by neuroscientist Adina Roskies and cognitive theorists Tim Bayne and Alfred Mele. Others have argued that data such as the Bereitschaftspotential undermine epiphenomenalism for the same reason, that such experiments rely on a subject reporting the
other magazines in Esperanto throughout many countries in the world. Some of them are information media of Esperanto associations (Esperanto, Sennaciulo and Kontakto). Online Esperanto magazines like Libera Folio, launched in 2003, offer an independent view of the Esperanto movement, aiming to shed light soberly and critically on current developments. Most of the magazines deal with current events; one such magazine is Monato, which is read in more than 60 countries. Its articles are written by correspondents from 40 countries who know the local situation very well. Other popular Esperanto periodicals include La Ondo de Esperanto, Beletra Almanako, Literatura Foiro, and Heroldo de Esperanto. National associations often publish magazines of their own to report on the movement in their country, such as Le Monde de l'espéranto of Espéranto-France. There are also scientific journals, such as Scienca Revuo of the Internacia Scienca Asocio Esperantista (ISAE). Muzaiko is a radio station that has broadcast an all-day international program of songs, interviews and current events in Esperanto since 2011. Besides Muzaiko, these other stations offer an hour of Esperanto-language broadcasting on various topics: Radio Libertaire, Polskie Radio, Vatican Radio, Varsovia Vento, Radio Verda and Kern.punkto; the latter two can be downloaded as podcasts. Internet The spread of the Internet has enabled more efficient communication among Esperanto speakers and has partly replaced slower media such as postal mail. Many widely used websites such as Facebook or Google offer an Esperanto interface. On 15 December 2009, on the occasion of the 150th anniversary of the birth of L. L. Zamenhof, Google additionally displayed the Esperanto flag as part of its Google Doodles. Platforms such as Twitter, Telegram, Reddit or Ipernity also host a significant number of people from this community. In addition, content providers such as WordPress and YouTube enable bloggers to write in Esperanto. Esperanto versions of programs such as the office suite LibreOffice and the Mozilla Firefox browser, or the educational programming environment Scratch, are also available. Additionally, online games like Minecraft offer a complete Esperanto interface. Monero, an anonymous cryptocurrency, was named after the Esperanto word for "coin" and its official wallet is available in Esperanto. The same applies to Monerujo ("Monero container"), the only open-source Monero wallet for Android. Sport Although Esperanto is not a country, there is an Esperanto football team, which has existed since 2014 and participates in matches during World Esperanto Congresses. The team is part of the N.F.-Board and not of FIFA, and has played against the team of the Armenian-Argentine community in 2014 and the team of Western Sahara in 2015. Esperanto speakers and Esperantists Initially, Esperanto speakers learned the language as it was described by L. L. Zamenhof. In 1905, the Fundamento de Esperanto brought together the first Esperanto textbook, an exercise book and a universal dictionary. The "Declaration about the essence of Esperantism" (1905) defines an "Esperantist" as anyone who speaks and uses Esperanto. "Esperantism" was defined as a movement to promote the widespread use of Esperanto as a supplement to mother tongues in international and inter-ethnic contexts.
As the word "esperantist" is linked with this "esperantism" (the Esperanto movement), and as -ists and -isms are linked with ideologies, today many people who speak Esperanto prefer to be called "Esperanto speakers". Every year since 1998, the monthly magazine La Ondo de Esperanto has proclaimed an "Esperantist of the year" who has contributed notably to the spread of the language during that year. Economy Businesses Publishing and selling books, the so-called book services, is the main market and is often the first expenditure of many Esperanto associations. Some companies are already well known: for example Vinilkosmo, which has published and popularized Esperanto music since 1990. There are also initiatives such as the job-seeking website Eklaboru, created by Chuck Smith, for job offers and candidates within Esperanto associations or Esperanto meetings. Currency In 1907, René de Saussure proposed the spesmilo ⟨₷⟩ as an international currency. It had some use before the First World War. In 1942 a currency called the stelo ("star"; plural, steloj) was created. It was used at meetings of the Universala Ligo and in Esperanto environments such as the annual Universal Congress. Over the years it slowly fell out of use, and at the official closing of the Universala Ligo in the 1990s the remaining steloj coins were handed over to the UEA. They can be bought at the UEA's book service as souvenirs. The current steloj are made of plastic and are used in a number of meetings, especially among young people. The currency is maintained by Stelaro, which calculates the rates, keeps the stock, and has opened branches at various e-meetings. Currently, there are stelo coins of 1 ★, 3 ★ and 10 ★. The exchange rate on 31 December 2014 was 1 EUR = 4.189 ★.
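As a small worked example of the quoted rate, conversion between euros and steloj is a single multiplication or division. The Python sketch below is illustrative only: it uses the 31 December 2014 rate from the text, and the function names are ad hoc.

```python
# Illustrative sketch only: convert between euros and steloj (stars) using the
# 31 December 2014 rate quoted above (1 EUR = 4.189 stars).
RATE_STARS_PER_EUR = 4.189

def eur_to_stars(eur: float) -> float:
    return eur * RATE_STARS_PER_EUR

def stars_to_eur(stars: float) -> float:
    return stars / RATE_STARS_PER_EUR

if __name__ == "__main__":
    print(round(eur_to_stars(10.0), 3))   # 41.89 stars for 10 EUR
    print(round(stars_to_eur(10.0), 3))   # about 2.387 EUR for a 10-star coin
```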
was formed from franco (a Frenchman). The term analogous to Francujo would be Esperantistujo (Esperantist-nation). However, that would convey the idea of the physical body of people, whereas using the name of the language as the basis of the word gives it the more abstract connotation of a cultural sphere. Currently, names of nation states are often formed with the suffix -io (traditionally reserved for deriving country names from geographic features), and recently the form Esperantio has been used, among others, in the Pasporta Servo and the Esperanto Citizens' Community. History In 1908, Dr. Wilhelm Molly attempted to create an Esperanto state in the Prussian-Belgian condominium of Neutral Moresnet, known as "Amikejo" (place of friendship). What became of it is unclear, and Neutral Moresnet was annexed to Belgium in the Treaty of Versailles, 1919. During the 1960s came a new attempt to create an Esperanto state, this time called the Republic of Rose Island; it stood in the Adriatic Sea near Italy. In Europe, on 2 June 2001, a number of organizations (they prefer to call themselves establishments) founded the Esperanta Civito, which "aims to be a subject of international law" and "aims to consolidate the relations between the Esperantists who feel themselves belonging to the diaspora language group which does not belong to any country". The Esperanta Civito always uses the name Esperantujo (introduced by Hector Hodler in 1908), which it defines according to its interpretation of raumism; the meaning may therefore differ from the traditional Esperanto understanding of the word Esperantujo. A language learning partner application called Amikumu was launched in 2017, allowing Esperanto speakers to find each other. Geography Esperantujo includes any physical place where Esperanto speakers meet, such as Esperanto gatherings or virtual networks. Sometimes it is said to be everywhere that Esperanto speakers are connected. Although Esperantujo does not have its own official territory, a number of places around the world are owned by Esperanto organizations or are otherwise permanently connected to the Esperanto language and its community: Białystok, the birthplace of L. L. Zamenhof (the creator of Esperanto), and very much the place which inspired him to create an international auxiliary language and facilitate communication across language barriers. The German city of Herzberg am Harz is home to the Interkultura Centro Herzberg and, since 12 July 2006, has advertised itself as an "Esperanto city". There are bilingual signs and pointers, in both German and Esperanto. The Château de Grésillon in France is owned by the non-profit organization "Cultural House of Esperanto", which hosts various Esperanto events in the summer and during French school holidays. The Esperanto Museum and Collection of Planned Languages, a department of the Austrian National Library, is a museum for Esperanto and other constructed languages, located in Vienna. Zamenhof-Esperanto objects can be found all over the world. These are places and objects, such as streets, memorials, public spaces, buildings, vehicles, or even geographic features, that are named after, or otherwise linked to, the language, its creator L. L. Zamenhof, or its community of speakers. Judging by the members of the World Esperanto Association, the countries with the most Esperanto speakers are (in descending order): Brazil, Germany, Japan, France, the United States, China, Italy.
Politics Associations There is no governmental system in Esperantujo because it is not a true state. However, there is a social hierarchy of associations: the Universal Esperanto Association (UEA) is the principal association, created in 1908, with its central office in Rotterdam. The aim of the UEA is to promote the use of Esperanto, to strive for the solution of the language problem in international relations, to encourage all types of spiritual and material relations among people, to nurture among its members a strong sense of solidarity, and to develop in them understanding and respect for other peoples. Associations also exist by continent, for example the European Esperanto Union. On the same level there are UEA commissions dedicated to promoting the spread of Esperanto in Africa, America (North and South), Asia, the Middle East and North Africa, and Oceania. National associations exist in at least 120 countries: the Brazilian Esperanto League, the German Esperanto Association, the Japanese Esperanto Association, Esperanto-USA and the Australian Esperanto Association are examples from all continents across the world. Their goals are usually to help teach the language and promote the use of Esperanto in the country. Finally, there are local associations or Esperanto clubs where volunteers or activists offer courses to learn the language or to get to know more about the culture of Esperanto. Sometimes they teach Esperanto in universities or schools. There are also thematic associations worldwide, concerned with spirituality, hobbies or science, which bring together Esperantists who share common interests. There are also a number of global organizations, such as Sennacieca Asocio Tutmonda (SAT) and the World Esperanto Youth Organization (TEJO), which has 46 national sections. Foreign relations The Universal Esperanto Association is not a governmental system; however, the association represents Esperanto worldwide. In addition to the United Nations and UNESCO, the UEA has consultative relationships with UNICEF and the Council of Europe, and general cooperative relations with the Organization of American States. The UEA officially collaborates with the International Organization for Standardization (ISO) by means of an active connection to the ISO Committee on terminology (ISO/TC 37). The association is also active in providing information on the European Union and other interstate and international organizations and conferences. The UEA is a member of the European Language Council, a joint forum of universities and linguistic associations to promote the knowledge of languages and cultures within and outside the European Union. Moreover, on 10 May 2011, the UEA and the International Information Center for Terminology (Infoterm) signed an Agreement on Cooperation, whose objectives are to exchange information, support each other, and cooperate on projects, meetings and publications in the field of terminology, and through which the UEA became an Associate Member of Infoterm. Political movement In 2003 a European political movement called Europe–Democracy–Esperanto was created. Within it is a European federation that brings together local associations whose statutes depend on their countries. The working language of the movement is Esperanto. Its goal is "to provide the European Union with the necessary tools to set up member rights democracy". The international language is a tool to enable cross-border political and social dialogue and to contribute actively to peace and understanding between peoples.
The original idea in the first ballot was mainly to make the existence and use of Esperanto known to the general public. However, in France its vote totals have grown steadily: 25,067 (2004), 28,944 (2009) and 33,115 (2014). In that country a number of movements support the cause: France Équité, Europe-Liberté, and Politicat. Symbols The flag of Esperanto is called Verda Flago (Green Flag). It consists of a rectangular green field, officially with a 2:3 ratio, where the green color symbolizes hope. There is no indication that any "official" color was ever chosen. The color green used varies in different
market needs and with 10BASE2, shift to inexpensive thin coaxial cable and from 1990, to the now-ubiquitous twisted pair with 10BASE-T. By the end of the 1980s, Ethernet was clearly the dominant network technology. In the process, 3Com became a major company. 3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, and that year started selling adapters for PDP-11s and VAXes, as well as Multibus-based Intel and Sun Microsystems computers. This was followed quickly by DEC's Unibus to Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it one of the largest computer networks in the world at that time. An Ethernet adapter card for the IBM PC was released in 1982, and, by 1985, 3Com had sold 100,000. In the 1980s, IBM's own PC Network product competed with Ethernet for the PC, and through the 1980s, LAN hardware, in general, was not common on PCs. However, in the mid to late 1980s, PC networking did become popular in offices and schools for printer and fileserver sharing, and among the many diverse competing LAN technologies of that decade, Ethernet was one of the most popular. Parallel port based Ethernet adapters were produced for a time, with drivers for DOS and Windows. By the early 1990s, Ethernet became so prevalent that Ethernet ports began to appear on some PCs and most workstations. This process was greatly sped up with the introduction of 10BASE-T and its relatively small modular connector, at which point Ethernet ports appeared even on low-end motherboards. Since then, Ethernet technology has evolved to meet new bandwidth and market requirements. In addition to computers, Ethernet is now used to interconnect appliances and other personal devices. As Industrial Ethernet it is used in industrial applications and is quickly replacing legacy data transmission systems in the world's telecommunications networks. By 2010, the market for Ethernet equipment amounted to over $16 billion per year. Standardization In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks (LAN). The "DIX-group" with Gary Robinson (DEC), Phil Arst (Intel), and Bob Printis (Xerox) submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN specification. In addition to CSMA/CD, Token Ring (supported by IBM) and Token Bus (selected and henceforward supported by General Motors) were also considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal. Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products. With such business implications in mind, David Liddle (General Manager, Xerox Office Systems) and Metcalfe (3Com) strongly supported a proposal of Fritz Röscheisen (Siemens Private Networks) for an alliance in the emerging office communication market, including Siemens' support for the international standardization of Ethernet (April 10, 1981). Ingrid Fromm, Siemens' representative to IEEE 802, quickly achieved broader support for Ethernet beyond IEEE by the establishment of a competing Task Group "Local Networks" within the European standards body ECMA TC24. 
In March 1982, ECMA TC24 with its corporate members reached an agreement on a standard for CSMA/CD based on the IEEE 802 draft. Because the DIX proposal was most technically complete and because of the speedy action taken by ECMA which decisively contributed to the conciliation of opinions within IEEE, the IEEE 802.3 CSMA/CD standard was approved in December 1982. IEEE published the 802.3 standard as a draft in 1983 and as a standard in 1985. Approval of Ethernet on the international level was achieved by a similar, cross-partisan action with Fromm as the liaison officer working to integrate with International Electrotechnical Commission (IEC) Technical Committee 83 and International Organization for Standardization (ISO) Technical Committee 97 Sub Committee 6. The ISO 8802-3 standard was published in 1989. Evolution Ethernet has evolved to include higher bandwidth, improved medium access control methods, and different physical media. The coaxial cable was replaced with point-to-point links connected by Ethernet repeaters or switches. Ethernet stations communicate by sending each other data packets: blocks of data individually sent and delivered. As with other IEEE 802 LANs, adapters come programmed with globally unique 48-bit MAC address so that each Ethernet station has a unique address. The MAC addresses are used to specify both the destination and the source of each data packet. Ethernet establishes link-level connections, which can be defined using both the destination and source addresses. On reception of a transmission, the receiver uses the destination address to determine whether the transmission is relevant to the station or should be ignored. A network interface normally does not accept packets addressed to other Ethernet stations. An EtherType field in each frame is used by the operating system on the receiving station to select the appropriate protocol module (e.g., an Internet Protocol version such as IPv4). Ethernet frames are said to be self-identifying, because of the EtherType field. Self-identifying frames make it possible to intermix multiple protocols on the same physical network and allow a single computer to use multiple protocols together. Despite the evolution of Ethernet technology, all generations of Ethernet (excluding early experimental versions) use the same frame formats. Mixed-speed networks can be built using Ethernet switches and repeaters supporting the desired Ethernet variants. Due to the ubiquity of Ethernet, and the ever-decreasing cost of the hardware needed to support it, most manufacturers now build Ethernet interfaces directly into PC motherboards, eliminating the need for a separate network card. Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The method used was similar to those used in radio systems, with the common cable providing the communication channel likened to the Luminiferous aether in 19th-century physics, and it was from this reference that the name "Ethernet" was derived. Original Ethernet's shared coaxial cable (the shared medium) traversed a building or campus to every attached machine. A scheme known as carrier sense multiple access with collision detection (CSMA/CD) governed the way the computers shared the channel. This scheme was simpler than competing Token Ring or Token Bus technologies. 
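To make the frame self-identification described above concrete, the following minimal Python sketch (not part of the original article; the function name, sample MAC addresses and frame bytes are invented for illustration) unpacks the 14-byte Ethernet II header and uses the EtherType value to name the payload protocol, which is the same demultiplexing step an operating system performs when it hands the payload to a protocol module such as IPv4.

import struct

# A few well-known EtherType assignments (IPv4, ARP, IPv6).
ETHERTYPES = {0x0800: "IPv4", 0x0806: "ARP", 0x86DD: "IPv6"}

def parse_ethernet_header(frame: bytes):
    """Return (destination MAC, source MAC, protocol name) for an Ethernet II frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    as_text = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return as_text(dst), as_text(src), ETHERTYPES.get(ethertype, f"0x{ethertype:04x}")

# Illustrative frame: broadcast destination, a made-up source address, IPv4 payload.
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload..."
print(parse_ethernet_header(frame))   # ('ff:ff:ff:ff:ff:ff', '00:11:22:33:44:55', 'IPv4')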
Computers are connected to an Attachment Unit Interface (AUI) transceiver, which is in turn connected to the cable (with thin Ethernet the transceiver is usually integrated into the network adapter). While a simple passive wire is highly reliable for small networks, it is not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector, can make the whole Ethernet segment unusable. Through the first half of the 1980s, Ethernet's 10BASE5 implementation used a coaxial cable 0.375 inches (9.5 mm) in diameter, later called "thick Ethernet" or "thicknet". Its successor, 10BASE2, called "thin Ethernet" or "thinnet", used the RG-58 coaxial cable. The emphasis was on making installation of the cable easier and less costly. Since all communication happens on the same wire, any information sent by one computer is received by all, even if that information is intended for just one destination. The network interface card interrupts the CPU only when applicable packets are received: the card ignores information not addressed to it. Use of a single cable also means that the data bandwidth is shared, such that, for example, available data bandwidth to each device is halved when two stations are simultaneously active. A collision happens when two stations attempt to transmit at the same time. They corrupt transmitted data and require stations to re-transmit. The lost data and re-transmission reduce throughput. In the worst case, where multiple active hosts connected with maximum allowed cable length attempt to transmit many short frames, excessive collisions can reduce throughput dramatically. However, a Xerox report in 1980 studied performance of an existing Ethernet installation under both normal and artificially generated heavy load. The report claimed that 98% throughput on the LAN was observed. This is in contrast with token passing LANs (Token Ring, Token Bus), all of which suffer throughput degradation as each new node comes into the LAN, due to token waits. This report was controversial, as modeling showed that collision-based networks theoretically became unstable under loads as low as 37% of nominal capacity. Many early researchers failed to understand these results. Performance on real networks is significantly better. In a modern Ethernet, the stations do not all share one channel through a shared cable or a simple repeater hub; instead, each station communicates with a switch, which in turn forwards that traffic to the destination station. In this topology, collisions are only possible if station and switch attempt to communicate with each other at the same time, and collisions are limited to this link. Furthermore, the 10BASE-T standard introduced a full duplex mode of operation which became common with Fast Ethernet and the de facto standard with Gigabit Ethernet. In full duplex, switch and station can send and receive simultaneously, and therefore modern Ethernets are completely collision-free. Repeaters and hubs For signal degradation and timing reasons, coaxial Ethernet segments have a restricted size. Somewhat larger networks can be built by using an Ethernet repeater. Early repeaters had only two ports, allowing, at most, a doubling of network size. Once repeaters with more than two ports became available, it was possible to wire the network in a star topology. Early experiments with star topologies (called "Fibernet") using optical fiber were published by 1978. 
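The collision recovery that CSMA/CD prescribes after the transmission failures described above can be illustrated with a short sketch. This is an assumption-labelled example, not text from the article: it implements the standard truncated binary exponential backoff with the commonly cited 10 Mbit/s parameters (a 51.2 microsecond slot time, an exponent cap of 10 and an attempt limit of 16); the function name is made up for illustration.

import random

SLOT_TIME_US = 51.2    # slot time for 10 Mbit/s Ethernet (512 bit times)
BACKOFF_LIMIT = 10     # the backoff exponent stops growing after 10 collisions
ATTEMPT_LIMIT = 16     # the frame is dropped after 16 failed attempts

def backoff_delay_us(collision_count: int) -> float:
    """Random delay, in microseconds, before retransmitting after a collision."""
    if collision_count >= ATTEMPT_LIMIT:
        raise RuntimeError("excessive collisions: frame dropped")
    slots = random.randrange(2 ** min(collision_count, BACKOFF_LIMIT))
    return slots * SLOT_TIME_US

# Example: delays drawn after the 1st, 2nd and 3rd collision of the same frame.
for n in (1, 2, 3):
    print(n, backoff_delay_us(n))

Because each station draws its delay independently from a range that doubles with every collision, the chance that two contending stations pick the same slot again falls off quickly, which is what lets a shared-cable Ethernet recover from bursts of contention.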
Shared cable Ethernet is always hard to install in offices because its bus topology is in conflict with the star topology cable plans designed into buildings for telephony. Modifying Ethernet to conform to twisted pair telephone wiring already installed in commercial buildings provided another opportunity to lower costs, expand the installed base, and leverage building design, and, thus, twisted-pair Ethernet was the next logical development in the mid-1980s. Ethernet on unshielded twisted-pair cables (UTP) began with StarLAN at 1 Mbit/s in the mid-1980s. In 1987 SynOptics introduced the first twisted-pair Ethernet at 10 Mbit/s in a star-wired cabling topology with a central hub, later called LattisNet. These evolved into 10BASE-T, which was designed for point-to-point links only, and all termination was built into the device. This changed repeaters from a specialist device used at the center of large networks to a device that every twisted pair-based network with more than two machines had to use. The tree structure that resulted from this made Ethernet networks easier to maintain by preventing most faults with one peer or its associated cable from affecting other devices on the network. Despite the physical star topology and the presence of separate transmit and receive channels in the twisted pair and fiber media, repeater-based Ethernet networks still use half-duplex and CSMA/CD, with only minimal activity by the repeater, primarily generation of the jam signal in dealing with packet collisions. Every packet is sent to every other port on the repeater, so bandwidth and security problems are not addressed. The total throughput of the repeater is limited to that of a single link, and all links must operate at the same speed. While repeaters can isolate some aspects of Ethernet segments, such as cable breakages, they still forward all traffic to all Ethernet devices. The entire network is one collision domain, and all hosts have to be able to detect collisions anywhere on the network. This limits the number of repeaters between the farthest nodes and creates practical limits on how many machines can communicate on an Ethernet network. Segments joined by repeaters have to all operate at the same speed, making phased-in upgrades impossible. To alleviate these problems, bridging was created to communicate at the
the most important explorations of State Societies, in chronological order:

See also
Age of Discovery
Exploration of Australia
Exploration of the High Alps
Portugal in the period of discoveries
Chronology of European exploration
Anschluss with Germany, the Canettis moved to London. He became closely involved with the painter Marie-Louise von Motesiczky, who was to remain a close companion for many years. His name has also been linked with the author Iris Murdoch (see John Bayley's Iris, A Memoir of Iris Murdoch, which has several references to an author, referred to as "the Dichter", who was a Nobel Laureate and whose works included Die Blendung [English title Auto-da-Fé]). After Veza died in 1963, Canetti married Hera Buschor (1933–1988), with whom he had a daughter, Johanna, in 1972. Canetti's brother Jacques Canetti settled in Paris, where he championed a revival of French chanson. Despite being a German-language writer, Canetti settled in Britain until the 1970s, receiving British citizenship in 1952. For his last 20 years, Canetti lived mostly in Zürich. Career A writer in German, Canetti won the Nobel Prize in Literature in 1981, "for writings marked by a broad outlook, a wealth of ideas and artistic power". He is known chiefly for his celebrated trilogy of autobiographical memoirs of his childhood and of pre-Anschluss Vienna: Die Gerettete Zunge (The Tongue Set Free); Die Fackel im Ohr (The Torch in My Ear), and Das Augenspiel (The Play of the Eyes); for his modernist novel Auto-da-Fé (Die Blendung); and for Crowds and Power, a psychological study of crowd behaviour as it manifests itself in human activities ranging from mob violence to religious congregations. In the 1970s, Canetti began to travel more frequently to Zurich, where he settled and lived for his last 20 years. He died in Zürich in 1994. Honours and awards Prix International (France, 1949) Grand Austrian State Prize for Literature (1967) Literature Award of the Bavarian Academy of the Fine Arts (1969) Austrian Decoration for Science and Art (1972) Georg Büchner Prize (German Academy for Language and Literature, 1972) German recording prize, for reading "Ohrenzeuge" (Deutscher Schallplattenpreis) (1975) Nelly Sachs Prize (1975) Gottfried-Keller-Preis (1977) Pour le Mérite (1979) Johann-Peter-Hebel-Preis (Baden-Württemberg, 1980) Nobel Prize in Literature (1981) Franz Kafka Prize (1981) Grand Merit Cross of the Order of Merit of the Federal Republic of Germany (1983) In 1975, Canetti was awarded an honorary doctorate from the University of Manchester and another from the Ludwig Maximilian University of Munich, in 1976. Canetti Peak on Livingston Island in the South Shetland Islands, Antarctica, is named after him. Works Komödie der Eitelkeit 1934 (The Comedy of Vanity) Die Blendung 1935 (Auto-da-Fé, novel, tr. by Cicely Wedgwood (Jonathan Cape, Ltd., 1946). The first American edition of Wedgewood's translation was titled The Tower of Babel (Alfred A. Knopf, 1947). Die Befristeten 1956 (1956 premiere of the play in Oxford) (Their Days are Numbered) Masse und Macht 1960 (Crowds and Power, study, tr. 1962, published in Hamburg) Aufzeichnungen 1942 – 1948 (1965) (Sketches) Die Stimmen von Marrakesch 1968 published by Hanser in Munich (The Voices of Marrakesh, travelogue, tr. 1978) Der andere Prozess 1969 Kafkas Briefe an Felice (Kafka's Other Trial, tr. 1974). Hitler nach Speer (Essay) Die Provinz des Menschen Aufzeichnungen 1942 – 1972 (The Human Province, tr. 1978) Der Ohrenzeuge. Fünfzig Charaktere 1974 ("Ear Witness: Fifty Characters", tr. 1979). Das Gewissen der Worte 1975. Essays (The Conscience of Words) Die Gerettete Zunge 1977 (The Tongue Set Free, memoir, tr. 
1979 by Joachim Neugroschel) Die Fackel im Ohr 1980 Lebensgeschichte 1921 – 1931 (The Torch in My Ear, memoir, tr. 1982) Das Augenspiel 1985 Lebensgeschichte 1931 – 1937 (The Play of the
named after Cañete, Cuenca, a village in Spain. In Ruse, Canetti's father and grandfather were successful merchants who operated out of a commercial building, which they had built in 1898. Canetti's mother descended from the Arditti family, one of the oldest Sephardi families in Bulgaria, who were among the founders of the Ruse Jewish colony in the late 18th century. The Ardittis can be traced to the 14th century, when they were court physicians and astronomers to the Aragonese royal court of Alfonso IV and Pedro IV. Before settling in Ruse, they had migrated into Italy and lived in Livorno in the 17th century. Canetti spent his childhood years, from 1905 to 1911, in Ruse until the family moved to Manchester, England, where Canetti's father joined a business established by his wife's brothers. In 1912, his father died suddenly, and his mother moved with their children first to Lausanne, then Vienna in the same year. They lived in Vienna from the time Canetti was aged seven onwards. His mother insisted that he speak German, and taught it to him. By this time Canetti already spoke Ladino (his native language), Bulgarian, English, and some French; the latter two he studied in the one year they were in Britain. Subsequently, the family moved first (from 1916 to 1921) to Zürich and then (until 1924) to Frankfurt, where Canetti graduated from high school. Canetti went back to Vienna in 1924 in order to study chemistry. However, his primary interests during his years in Vienna became philosophy and literature. Introduced into the literary circles of First-Republic-Vienna, he started writing. Politically leaning towards the left, he was present at the July Revolt of 1927 – he came near to the action accidentally, was most impressed by the burning of books (recalled frequently in his writings), and left the place quickly with his bicycle. He gained a degree in chemistry from the University of Vienna in 1929, but never worked as a chemist. He published two works in Vienna, Komödie der Eitelkeit 1934 (The Comedy of Vanity) and Die Blendung 1935 (Auto-da-Fé, 1935), before escaping to Great Britain. He reflected the experiences of Nazi Germany and political chaos in his works, especially exploring mob action and group thinking in the novel Die Blendung and in the non-fiction Crowds and Power (1960). He wrote several volumes of memoirs, contemplating the influence of his multi-lingual background and childhood. Personal life In 1934 in Vienna he married Veza (Venetiana) Taubner-Calderon (1897–1963), who acted as his muse and devoted literary assistant. Canetti remained open to relationships with other women. He had a short affair with Anna Mahler. In 1938, after the Anschluss with Germany, the Canettis moved to London. He became closely involved with the painter Marie-Louise von Motesiczky, who was to remain a close companion for many years. His name has also been linked with the author Iris Murdoch (see John Bayley's Iris, A Memoir of Iris Murdoch, which has several references to an author, referred to as "the Dichter", who was a Nobel Laureate and whose works included Die Blendung [English title Auto-da-Fé]). After Veza died in 1963, Canetti married Hera Buschor (1933–1988), with whom he had a daughter, Johanna, in 1972. Canetti's brother Jacques Canetti settled in Paris, where he championed a revival of French chanson. Despite being a German-language writer, Canetti settled in Britain until the 1970s, receiving British citizenship in 1952. For his last 20 years, Canetti lived mostly in Zürich. 
tested in humans a cowpox vaccine against smallpox. For example, Dorset farmer Benjamin Jesty successfully vaccinated and presumably induced immunity with cowpox in his wife and two children during a smallpox epidemic in 1774, but it was not until Jenner's work that the procedure became widely understood. Jenner may have been aware of Jesty's procedures and success. A similar observation was later made in France by Jacques Antoine Rabaut-Pommier in 1780. Noting the common observation that milkmaids were generally immune to smallpox, Jenner postulated that the pus in the blisters that milkmaids received from cowpox (a disease similar to smallpox, but much less virulent) protected them from smallpox. On 14 May 1796, Jenner tested his hypothesis by inoculating James Phipps, an eight-year-old boy who was the son of Jenner's gardener. He scraped pus from cowpox blisters on the hands of Sarah Nelmes, a milkmaid who had caught cowpox from a cow called Blossom, whose hide now hangs on the wall of the St. George's Medical School library (now in Tooting). Phipps was the 17th case described in Jenner's first paper on vaccination. Jenner inoculated Phipps in both arms that day, subsequently producing in Phipps a fever and some uneasiness, but no full-blown infection. Later, he injected Phipps with variolous material, the routine method of immunization at that time. No disease followed. The boy was later challenged with variolous material and again showed no sign of infection. No unexpected side effects occurred, and neither Phipps nor any other recipients underwent any future 'breakthrough' cases. Donald Hopkins has written, "Jenner's unique contribution was not that he inoculated a few persons with cowpox, but that he then proved [by subsequent challenges] that they were immune to smallpox. Moreover, he demonstrated that the protective cowpox pus could be effectively inoculated from person to person, not just directly from cattle." Jenner successfully tested his hypothesis on 23 additional subjects. Jenner continued his research and reported it to the Royal Society, which did not publish the initial paper. After revisions and further investigations, he published his findings on the 23 cases, including his 11-month-old son Robert. Some of his conclusions were correct, some erroneous; modern microbiological and microscopic methods would make his studies easier to reproduce. The medical establishment deliberated at length over his findings before accepting them. Eventually, vaccination was accepted, and in 1840, the British government banned variolation (the use of smallpox to induce immunity) and provided vaccination using cowpox free of charge (see Vaccination Act). The success of his discovery soon spread around Europe and was used en masse in the Spanish Balmis Expedition (1803–1806), a three-year-long mission to the Americas, the Philippines, Macao, China, led by Francisco Javier de Balmis with the aim of giving thousands the smallpox vaccine. The expedition was successful, and Jenner wrote: "I don’t imagine the annals of history furnish an example of philanthropy so noble, so extensive as this". Napoleon, who at the time was at war with Britain, had all his French troops vaccinated, awarded Jenner a medal, and at the request of Jenner, he released two English prisoners of war and permitted their return home. Napoleon remarked he could not "refuse anything to one of the greatest benefactors of mankind". Jenner's continuing work on vaccination prevented him from continuing his ordinary medical practice. 
He was supported by his colleagues and the King in petitioning Parliament, and was granted £10,000 in 1802 for his work on vaccination. In 1807, he was granted another £20,000 after the Royal College of Physicians confirmed the widespread efficacy of vaccination. Later life Jenner was later elected a foreign honorary member of the American Academy of Arts and Sciences in 1802, a member of the American Philosophical Society in 1804, and a foreign member of the Royal Swedish Academy of Sciences in 1806. In 1803 in London, he became president of the Jennerian Society, concerned with promoting vaccination to eradicate smallpox. The Jennerian ceased operations in 1809. Jenner became a member of the Medical and Chirurgical Society on its founding in 1805 (now the Royal Society of Medicine) and presented several papers there. In 1808, with government aid, the National Vaccine Establishment was founded, but Jenner felt dishonoured by the men selected to run it and resigned his directorship. Returning to London in 1811, Jenner observed a significant number of cases of smallpox after vaccination. He found that in these cases the severity of the illness was notably diminished by previous vaccination. In 1821, he was appointed physician extraordinary to King George IV, and was also made mayor of Berkeley and justice of the peace. He continued to investigate natural history, and in 1823, the last year of his life, he presented his "Observations on the Migration of Birds" to the Royal Society. Death Jenner was found in a state of apoplexy on 25 January 1823, with his right side paralysed. He did not recover and died the next day of an apparent stroke, his second, on 26 January 1823, aged 73. He was buried in the family vault at the Church of St Mary, Berkeley. Religious views Neither fanatic nor lax, Jenner was a Christian who in his personal correspondence showed himself quite spiritual; he treasured the Bible. Some days before his death, he stated to a friend: "I am not surprised that men are not grateful to me; but I wonder that they are not grateful to God for the good which He has made me the instrument of conveying to my fellow creatures". His contemporary Rabbi Israel Lipschitz in his classic commentary on the Mishnah, the Tiferes Yisrael, wrote that Jenner was one of the "righteous of the nations", deserving a lofty place in the World to Come, for having saved millions of people from smallpox. Legacy In 1980, the World Health Organization declared smallpox an eradicated disease. This was the result of coordinated public health efforts, but vaccination was an essential component. Although the disease was declared eradicated, some pus samples still remain in laboratories in Centers for Disease Control and Prevention in Atlanta
term devised by Jenner to denote cowpox. He used it in 1798 in the long title of his Inquiry into the Variolae vaccinae known as the Cow Pox, in which he described the protective effect of cowpox against smallpox. In the West, Jenner is often called "the father of immunology", and his work is said to have "saved more lives than the work of any other human". In Jenner's time, smallpox killed around 10% of the population, with the number as high as 20% in towns and cities where infection spread more easily. In 1821, he was appointed physician to King George IV, and was also made mayor of Berkeley and justice of the peace. A member of the Royal Society, in the field of zoology he was among the first to describe the brood parasitism of the cuckoo (Aristotle also noted this behavior in History of Animals). In 2002, Jenner was named in the BBC's list of the 100 Greatest Britons. Early life Edward Jenner was born on 6 May 1749 (17 May New Style) in Berkeley, Gloucestershire, England as the eighth of nine children. His father, the Reverend Stephen Jenner, was the vicar of Berkeley, so Jenner received a strong basic education. Education and training When he was young, he went to school in Wotton-under-Edge at Katherine Lady Berkeley's School and in Cirencester. During this time, he was inoculated (by variolation) for smallpox, which had a lifelong effect upon his general health. At the age of 14, he was apprenticed for seven years to Daniel Ludlow, a surgeon of Chipping Sodbury, South Gloucestershire, where he gained most of the experience needed to become a surgeon himself. In 1770, aged 21, Jenner became apprenticed in surgery and anatomy under surgeon John Hunter and others at St George's Hospital, London. William Osler records that Hunter gave Jenner William Harvey's advice, well known in medical circles (and characteristic of the Age of Enlightenment), "Don't think; try." Hunter remained in correspondence with Jenner over natural history and proposed him for the Royal Society. Returning to his native countryside by 1773, Jenner became a successful family doctor and surgeon, practising on dedicated premises at Berkeley. In 1792, "with twenty years' experience of general practice and surgery, Jenner obtained the degree of MD from the University of St Andrews". Later life Jenner and others formed the Fleece Medical Society or Gloucestershire Medical Society, so called because it met in the parlour of the Fleece Inn, Rodborough, Gloucestershire. Members dined together and read papers on medical subjects. Jenner contributed papers on angina pectoris, ophthalmia, and cardiac valvular disease and commented on cowpox. He also belonged to a similar society which met in Alveston, near Bristol. He became a master mason on 30 December 1802, in Lodge of Faith and Friendship #449. From 1812 to 1813, he served as worshipful master of Royal Berkeley Lodge of Faith and Friendship. Zoology Edward Jenner was elected fellow of the Royal Society in 1788, following his publication of a careful study of the previously misunderstood life of the nested cuckoo, a study that combined observation, experiment, and dissection. Edward Jenner described how the newly hatched cuckoo pushed its host's eggs and fledgling chicks out of the nest (contrary to existing belief that the adult cuckoo did it). Having observed this behaviour, Jenner demonstrated an anatomical adaptation for it: the baby cuckoo has a depression in its back, not present after 12 days of life, that enables it to cup eggs and other chicks. 
The adult does not remain long enough in the area to perform this task. Jenner's findings were published in Philosophical Transactions of the Royal Society in 1788. "The singularity of its shape is well adapted to these purposes; for, different from other newly hatched birds, its back from the scapula downwards is very broad, with a considerable depression in the middle. This depression seems formed by nature for the design of giving a more secure lodgement to the egg of the Hedge-sparrow, or its young one, when the young Cuckoo is employed in removing either of them from the nest. When it is about twelve days old, this cavity is quite filled up, and then the back assumes the shape of nestling birds in general." Jenner's nephew assisted in the study. He was born on 30 June 1737. Jenner's understanding of the cuckoo's behaviour was not entirely believed until the artist Jemima Blackburn, a keen observer of birdlife, saw a blind nestling pushing out a host's egg. Her description and illustration of this were enough to convince Charles Darwin to revise a later edition of On the Origin of Species. Jenner's interest in zoology played a large role in his first experiment with inoculation. Not only did he have a profound understanding of human anatomy due to his medical training, but he also understood animal biology and its role in human-animal trans-species boundaries in disease transmission. At the time, there was no way of knowing how important this connection would be to the history and discovery of vaccinations. We see this connection now; many present-day vaccinations include animal parts from cows, rabbits, and chicken eggs, which can be attributed
McHenry (1992–1997). Anita Wolff was listed as the Deputy Editor and Theodore Pappas as Executive Editor. Prior Executive Editors include John V. Dodge (1950–1964) and Philip W. Goetz. Paul T. Armstrong remains the longest working employee of Encyclopædia Britannica. He began his career there in 1934, eventually earning the positions of treasurer, vice president, and chief financial officer in his 58 years with the company, before retiring in 1992. The 2007 editorial staff of the Britannica included five Senior Editors and nine Associate Editors, supervised by Dale Hoiberg and four others. The editorial staff helped to write the articles of the and some sections of the . Editorial advisors The Britannica has an editorial board of advisors, which includes 12 distinguished scholars: non-fiction author Nicholas Carr, religion scholar Wendy Doniger, political economist Benjamin M. Friedman, Council on Foreign Relations President Emeritus Leslie H. Gelb, computer scientist David Gelernter, Physics Nobel laureate Murray Gell-Mann, Carnegie Corporation of New York President Vartan Gregorian, philosopher Thomas Nagel, cognitive scientist Donald Norman, musicologist Don Michael Randel, Stewart Sutherland, Baron Sutherland of Houndwood, President of the Royal Society of Edinburgh, and cultural anthropologist Michael Wesch. The Propædia and its Outline of Knowledge were produced by dozens of editorial advisors under the direction of Mortimer J. Adler. Roughly half of these advisors have since died, including some of the Outline's chief architects – Rene Dubos (d. 1982), Loren Eiseley (d. 1977), Harold D. Lasswell (d. 1978), Mark Van Doren (d. 1972), Peter Ritchie Calder (d. 1982) and Mortimer J. Adler (d. 2001). The also lists just under 4,000 advisors who were consulted for the unsigned articles. Corporate structure In January 1996, the Britannica was purchased from the Benton Foundation by billionaire Swiss financier Jacqui Safra, who serves as its current chair of the board. In 1997, Don Yannias, a long-time associate and investment advisor of Safra, became CEO of Encyclopædia Britannica, Inc. In 1999, a new company, Britannica.com Inc., was created to develop digital versions of the Britannica; Yannias assumed the role of CEO in the new company, while his former position at the parent company remained vacant for two years. Yannias' tenure at Britannica.com Inc. was marked by missteps, considerable lay-offs, and financial losses. In 2001, Yannias was replaced by Ilan Yeshua, who reunited the leadership of the two companies. Yannias later returned to investment management, but remains on the Britannica Board of Directors. In 2003, former management consultant Jorge Aguilar-Cauz was appointed President of Encyclopædia Britannica, Inc. Cauz is the senior executive and reports directly to the Britannica's Board of Directors. Cauz has been pursuing alliances with other companies and extending the Britannica brand to new educational and reference products, continuing the strategy pioneered by former CEO Elkan Harrison Powell in the mid-1930s. Under Safra's ownership, the company has experienced financial difficulties and has responded by reducing the price of its products and implementing drastic cost cuts. According to a 2003 report in the New York Post, the Britannica management has eliminated employee 401(k) accounts and encouraged the use of free images. 
These changes have had negative impacts, as freelance contributors have waited up to six months for checks and the Britannica staff have gone years without pay rises. In the fall of 2017, Karthik Krishnan was appointed global chief executive officer of the Encyclopædia Britannica Group. Krishnan brought a varied perspective to the role based on several high-level positions in digital media, including RELX (formerly known as Reed Elsevier, and one of the constituents of the FTSE 100 Index) and Rodale, in which he was responsible for "driving business and cultural transformation and accelerating growth". Taking the reins of the company as it was preparing to mark its 250th anniversary and define the next phase of its digital strategy for consumers and K-12 schools, Krishnan launched a series of new initiatives in his first year. First was Britannica Insights, a free, downloadable software extension to the Google Chrome browser that served up edited, fact-checked Britannica information with queries on search engines such as Google, Yahoo, and Bing. Its purpose, the company said, was to "provide trusted, verified information" in conjunction with search results that were thought to be increasingly unreliable in the era of misinformation and "fake news." The product was quickly followed by Britannica School Insights, which provided similar content for subscribers to Britannica's online classroom solutions, and a partnership with YouTube in which verified Britannica content appeared on the site as an antidote to user-generated video content that could be false or misleading. Krishnan, himself an educator at New York University's Stern School of Business, believes in the "transformative power of education" and set about steering the company toward solidifying its place among leaders in educational technology and supplemental curriculum. Krishnan aimed at providing more useful and relevant solutions to customer needs, extending and renewing Britannica's historical emphasis on "Utility", which had been the watchword of its first edition in 1768. Krishnan is also active in civic affairs, with organizations such as the Urban Enterprise Initiative and Urban Upbound, whose board he serves on. Competition As the Britannica is a general encyclopaedia, it does not seek to compete with specialized encyclopaedias such as the Encyclopaedia of Mathematics or the Dictionary of the Middle Ages, which can devote much more space to their chosen topics. In its first years, the Britannica's main competitor was the general encyclopaedia of Ephraim Chambers and, soon thereafter, Rees's Cyclopædia and Coleridge's Encyclopædia Metropolitana. In the 20th century, successful competitors included Collier's Encyclopedia, the Encyclopedia Americana, and the World Book Encyclopedia. Nevertheless, from the 9th edition onwards, the Britannica was widely considered to have the greatest authority of any general English-language encyclopaedia, especially because of its broad coverage and eminent authors. The print version of the Britannica was significantly more expensive than its competitors. Since the early 1990s, the Britannica has faced new challenges from digital information sources. 
The Internet, facilitated by the development of search engines, has grown into a common source of information for many people, and provides easy access to reliable original sources and expert opinions, thanks in part to initiatives such as Google Books, MIT's release of its educational materials and the open PubMed Central library of the National Library of Medicine. In general, the Internet tends to provide more current coverage than print media, due to the ease with which material on the Internet can be updated. In rapidly changing fields such as science, technology, politics, culture and modern history, the Britannica has struggled to stay up to date, a problem first analysed systematically by its former editor Walter Yust. Eventually, the Britannica turned to focus more on its online edition. Print encyclopaedias The has been compared with other print encyclopaedias, both qualitatively and quantitatively. A well-known comparison is that of Kenneth Kister, who gave a qualitative and quantitative comparison of the Britannica with two comparable encyclopaedias, Collier's Encyclopedia and the Encyclopedia Americana. For the quantitative analysis, ten articles were selected at random—circumcision, Charles Drew, Galileo, Philip Glass, heart disease, IQ, panda bear, sexual harassment, Shroud of Turin and Uzbekistan—and letter grades of A–D or F were awarded in four categories: coverage, accuracy, clarity, and recency. In all four categories and for all three encyclopaedias, the four average grades fell between B− and B+, chiefly because none of the encyclopaedias had an article on sexual harassment in 1994. In the accuracy category, the Britannica received one "D" and seven "A"s, Encyclopedia Americana received eight "A"s, and Collier's received one "D" and seven "A"s; thus, Britannica received an average score of 92% for accuracy to Americana's 95% and Collier's 92%. In the timeliness category, Britannica averaged an 86% to Americana's 90% and Collier's 85%. In 2013, the President of Encyclopædia Britannica announced that after 244 years, the encyclopedia would cease print production and all future editions would be entirely digital. Digital encyclopaedias on optical media The most notable competitor of the Britannica among CD/DVD-ROM digital encyclopaedias was Encarta, now discontinued, a modern, multimedia encyclopaedia that incorporated three print encyclopaedias: Funk & Wagnalls, Collier's and the New Merit Scholar's Encyclopedia. Encarta was the top-selling multimedia encyclopaedia, based on total US retail sales from January 2000 to February 2006. Both occupied the same price range, with the 2007 Encyclopædia Britannica Ultimate CD or DVD costing US$40–50 and the Microsoft Encarta Premium 2007 DVD costing US$45. The Britannica contains 100,000 articles and Merriam-Webster's Dictionary and Thesaurus (US only), and offers Primary and Secondary School editions. Encarta contained 66,000 articles, a user-friendly Visual Browser, interactive maps, math, language and homework tools, a US and UK dictionary, and a youth edition. Like Encarta, the Britannica has been criticized for being biased towards United States audiences; the United Kingdom-related articles are updated less often, maps of the United States are more detailed than those of other countries, and it lacks a UK dictionary. Like the Britannica, Encarta was available online by subscription, although some content could be accessed free. 
Internet encyclopaedias The dominant internet encyclopaedia and main alternative to Britannica is Wikipedia. The key differences between the two lie in accessibility; the model of participation they bring to an encyclopedic project; their respective style sheets and editorial policies; relative ages; the number of subjects treated; the number of languages in which articles are written and made available; and their underlying economic models: unlike Britannica, Wikipedia is a not-for-profit and is not connected with traditional profit- and contract-based publishing distribution networks. The 699 printed articles are generally written by identified contributors, and the roughly 65,000 printed articles are the work of the editorial staff and identified outside consultants. Thus, a Britannica article either has known authorship or a set of possible authors (the editorial staff). With the exception of the editorial staff, most of the Britannica contributors are experts in their field—some are Nobel laureates. By contrast, the articles of Wikipedia are written by people of unknown degrees of expertise: most do not claim any particular expertise, and of those who do, many are anonymous and have no verifiable credentials. It is for this lack of institutional vetting, or certification, that former Britannica editor-in-chief Robert McHenry notes his belief that Wikipedia cannot hope to rival the Britannica in accuracy. In 2005, the journal Nature chose articles from both websites in a wide range of science topics and sent them to what it called "relevant" field experts for peer review. The experts then compared the competing articles—one from each site on a given topic—side by side, but were not told which article came from which site. Nature got back 42 usable reviews. In the end, the journal found just eight serious errors, such as general misunderstandings of vital concepts: four from each site. It also discovered many factual errors, omissions or misleading statements: 162 in Wikipedia and 123 in Britannica, an average of 3.86 mistakes per article for Wikipedia and 2.92 for Britannica. Although Britannica was revealed as the more accurate encyclopedia, with fewer errors, Encyclopædia Britannica, Inc. in its detailed 20-page rebuttal called Nature's study flawed and misleading and called for a "prompt" retraction. It noted that two of the articles in the study were taken from a Britannica yearbook and not the encyclopaedia, and another two were from Compton's Encyclopedia (called the Britannica Student Encyclopedia on the company's website). The rebuttal went on to mention that some of the articles presented to reviewers were combinations of several articles, and that other articles were merely excerpts but were penalized for factual omissions. The company also noted that several of what Nature called errors were minor spelling variations, and that others were matters of interpretation. Nature defended its story and declined to retract, stating that, as it was comparing Wikipedia with the web version of Britannica, it used whatever relevant material was available on Britannicas website. Interviewed in February 2009, the managing director of Britannica UK said: In a January 2016 press release, Britannica called Wikipedia "an impressive achievement." Critical and popular assessments Reputation Since the 3rd edition, the Britannica has enjoyed a popular and critical reputation for general excellence. 
The 3rd and the 9th editions were pirated for sale in the United States, beginning with Dobson's Encyclopaedia. On the release of the 14th edition, Time magazine dubbed the Britannica the "Patriarch of the Library". In a related advertisement, naturalist William Beebe was quoted as saying that the Britannica was "beyond comparison because there is no competitor." References to the Britannica can be found throughout English literature, most notably in one of Sir Arthur Conan Doyle's favourite Sherlock Holmes stories, "The Red-Headed League". The tale was highlighted by the Lord Mayor of London, Gilbert Inglefield, at the bicentennial of the Britannica. The Britannica has a reputation for summarising knowledge. To further their education, some people have devoted themselves to reading the entire Britannica, taking anywhere from three to 22 years to do so. When Fat'h Ali became the Shah of Persia in 1797, he was given a set of the Britannica's 3rd edition, which he read completely; after this feat, he extended his royal title to include "Most Formidable Lord and Master of the ". Writer George Bernard Shaw claimed to have read the complete 9th edition—except for the science articles—and Richard Evelyn Byrd took the Britannica as reading material for his five-month stay at the South Pole in 1934, while Philip Beaver read it during a sailing expedition. More recently, A.J. Jacobs, an editor at Esquire magazine, read the entire 2002 version of the 15th edition, describing his experiences in the well-received 2004 book, The Know-It-All: One Man's Humble Quest to Become the Smartest Person in the World. Only two people are known to have read two independent editions: the author C. S. Forester and Amos Urban Shirk, an American businessman who read the 11th and 14th editions, devoting roughly three hours per night for four and a half years to read the 11th. Elon Musk read the Britannica twice. Several editors-in-chief of the Britannica are likely to have read their editions completely, such as William Smellie (1st edition), William Robertson Smith (9th edition), and Walter Yust (14th edition). Awards The CD/DVD-ROM version of the Britannica, Encyclopædia Britannica Ultimate Reference Suite, received the 2004 Distinguished Achievement Award from the Association of Educational Publishers. On 15 July 2009, was awarded a spot as one of "Top Ten Superbrands in the UK" by a panel of more than 2,000 independent reviewers, as reported by the BBC. Coverage of topics Topics are chosen in part by reference to the "Outline of Knowledge". The bulk of the Britannica is devoted to geography (26% of the ), biography (14%), biology and medicine (11%), literature (7%), physics and astronomy (6%), religion (5%), art (4%), Western philosophy (4%), and law (3%). A complementary study of the found that geography accounted for 25% of articles, science 18%, social sciences 17%, biography 17%, and all other humanities 25%. Writing in 1992, one reviewer judged that the "range, depth, and catholicity of coverage [of the Britannica] are unsurpassed by any other general Encyclopaedia." The Britannica does not cover topics in equivalent detail; for example, the whole of Buddhism and most other religions is covered in a single article, whereas 14 articles are devoted to Christianity, comprising nearly half of all religion articles. However, the Britannica has been lauded as the least biased of general Encyclopaedias marketed to Western readers and praised for its biographies of important women of all eras. 
Criticism of editorial decisions On rare occasions, the Britannica has been criticized for its editorial choices. Given its roughly constant size, the encyclopaedia has needed to reduce or eliminate some topics to accommodate others, resulting in controversial decisions. The initial 15th edition (1974–1985) was faulted for having reduced or eliminated coverage of children's literature, military decorations, and the French poet Joachim du Bellay; editorial mistakes were also alleged, such as inconsistent sorting of Japanese biographies. Its elimination of the index was condemned, as was the apparently arbitrary division of articles into the Micropædia and Macropædia. Summing up, one critic called the initial 15th edition a "qualified failure...[that] cares more for juggling its format than for preserving." More recently, reviewers from the American Library Association were surprised to find that most educational articles had been eliminated from the 1992 Macropædia, along with the article on psychology. A very few Britannica-appointed contributors have been mistaken. A notorious instance from the Britannica's early years is the rejection of Newtonian gravity by George Gleig, the chief editor of the 3rd edition (1788–1797), who wrote that gravity was caused by the classical element of fire. The Britannica has also staunchly defended a scientific approach to cultural topics, as it did with William Robertson Smith's articles on religion in the 9th edition, particularly his article stating that the Bible was not historically accurate (1875). Other criticisms The Britannica has received criticism, especially as editions become outdated. It is expensive to produce a completely new edition of the Britannica, and its editors delay for as long as fiscally sensible (usually about 25 years). For example, despite continuous revision, the 14th edition became outdated after 35 years (1929–1964). When American physicist Harvey Einbinder detailed its failings in his 1964 book, The Myth of the Britannica, the encyclopaedia was provoked to produce the 15th edition, which required 10 years of work. It is still difficult to keep the Britannica current; one recent critic writes, "it is not difficult to find articles that are out-of-date or in need of revision", noting that the longer articles are more likely to be outdated than the shorter articles. Information in the Micropædia is sometimes inconsistent with the corresponding article(s), mainly because of the failure to update one or the other. The bibliographies of the articles have been criticized for being more out-of-date than the articles themselves. In 2005, 12-year-old schoolboy Lucian George found several inaccuracies in the Britannica's entries on Poland and wildlife in Eastern Europe. In 2010, an inaccurate entry about the Irish Civil War was discussed in the Irish press following a decision of the Department of Education and Science to pay for online access. Sheehy, Clodagh (4 February 2010). "Are they taking the Mick? It's the encyclopedia that thinks the Civil War was between the north and south". Evening Herald (Dublin). Writing about the 3rd edition (1788–1797), Britannica's chief editor George Gleig observed that "perfection seems to be incompatible with the nature of works constructed on such a plan, and embracing such a variety of subjects." In March 2006, the Britannica wrote, "we in no way mean to imply that Britannica is error-free; we have never made such a claim" (although in 1962 Britannica's sales department famously said of the 14th edition "It is truth.
It is unquestionable fact.") The sentiment is expressed by its original editor, William Smellie: However, Jorge Cauz (president of Encyclopædia Britannica Inc.) asserted in 2012 that "Britannica [...] will always be factually correct." History Past owners have included, in chronological order, the Edinburgh, Scotland printers Colin Macfarquhar and Andrew Bell, Scottish bookseller Archibald Constable, Scottish publisher A & C Black, Horace Everett Hooper, Sears Roebuck and William Benton. The present owner of Encyclopædia Britannica Inc. is Jacqui Safra, a Brazilian billionaire and actor. Recent advances in information technology and the rise of electronic encyclopaedias such as Encyclopædia Britannica Ultimate Reference Suite, Encarta and Wikipedia have reduced the demand for print encyclopaedias. To remain competitive, Encyclopædia Britannica, Inc. has stressed the reputation of the Britannica, reduced its price and production costs, and developed electronic versions on CD-ROM, DVD, and the World Wide Web. Since the early 1930s, the company has promoted spin-off reference works. Editions The Britannica has been issued in 15 editions, with multi-volume supplements to the 3rd and 4th editions (see the Table below). The 5th and 6th editions were reprints of the 4th, the 10th edition was only a supplement to the 9th, just as the 12th and 13th editions were supplements to the 11th. The 15th underwent massive reorganization in 1985, but the updated, current version is still known as the 15th. The 14th and 15th editions were edited every year throughout their runs, so that later printings of each were entirely different from early ones. Throughout history, the Britannica has had two aims: to be an excellent reference book, and to provide educational material. In 1974, the 15th edition adopted a third goal: to systematize all human knowledge. The history of the Britannica can be divided into five eras, punctuated by changes in management, or reorganization of the dictionary. 1768–1826 In the first era (1st–6th editions, 1768–1826), the Britannica was managed and published by its founders, Colin Macfarquhar and Andrew Bell, by Archibald Constable, and by others. The Britannica was first published between December 1768 and 1771 in Edinburgh as the Encyclopædia Britannica, or, A Dictionary of Arts and Sciences, compiled upon a New Plan. In part, it was conceived in reaction to the French Encyclopédie of Denis Diderot and Jean le Rond d'Alembert (published 1751–72), which had been inspired by Chambers's Cyclopaedia (first edition 1728). It went on sale 10 December. The Britannica of this period was primarily a Scottish enterprise, and it is one of the most enduring legacies of the Scottish Enlightenment. In this era, the Britannica moved from being a three-volume set (1st edition) compiled by one young editor—William Smellie—to a 20-volume set written by numerous authorities. Several other encyclopaedias competed throughout this period, among them editions of Abraham Rees's Cyclopædia and Coleridge's Encyclopædia Metropolitana and David Brewster's Edinburgh Encyclopædia. 1827–1901 During the second era (7th–9th editions, 1827–1901), the Britannica was managed by the Edinburgh publishing firm A & C Black. Although some contributors were again recruited through friendships of the chief editors, notably Macvey Napier, others were attracted by the Britannica's reputation. The contributors often came from other countries and included the world's most respected authorities in their fields. 
A general index of all articles was included for the first time in the 7th edition, a practice maintained until 1974. Production of the 9th edition was overseen by Thomas Spencer Baynes, the first English-born editor-in-chief. Dubbed the "Scholar's Edition", the 9th edition is the most scholarly of all Britannicas. After 1880, Baynes was assisted by William Robertson Smith. No biographies of living persons were included. James Clerk Maxwell and Thomas Huxley were special advisors on science. However, by the close of the 19th century, the 9th edition was outdated, and the Britannica faced financial difficulties. 1901–1973 In the third era (10th–14th editions, 1901–1973), the Britannica was managed by American businessmen who introduced direct marketing and door-to-door sales. The American owners gradually simplified articles, making them less scholarly for a mass market. The 10th edition was an eleven-volume supplement (including one volume of maps and an index volume) to the 9th, numbered as volumes 25–35, but the 11th edition was a completely new work, and is still praised for excellence; its owner, Horace Hooper, lavished enormous effort on its perfection. When Hooper fell into financial difficulties, the Britannica was managed by Sears Roebuck for 18 years (1920–1923, 1928–1943). In 1932, the vice-president of Sears, Elkan Harrison Powell, assumed presidency of the Britannica; in 1936, he began the policy of continuous revision. This was a departure from earlier practice, in which the articles were not changed until a new edition was produced, at roughly 25-year intervals, with some articles left unchanged from earlier editions. Powell developed new educational products that built upon the Britannica's reputation. In 1943, Sears donated the Britannica to the University of Chicago. William Benton, then a vice president of the university, provided the working capital for its operation. The stock was divided between Benton and the university, with the university holding an option on the stock. Benton became chairman of the board and managed the Britannica until his death in 1973. Benton set up the Benton Foundation, which managed the Britannica until 1996, and whose sole beneficiary was the University of Chicago. In 1968, near the end of this era, the Britannica celebrated its bicentennial. 1974–1994 In the fourth era (1974–94), the Britannica introduced its 15th edition, which was reorganized into three parts: the Micropædia, the Macropædia, and the Propædia. Under Mortimer J. Adler (member of the Board of Editors of Encyclopædia Britannica since its inception in 1949, and its chair from 1974; director of editorial planning for the 15th edition of Britannica from 1965), the Britannica sought not only to be a good reference work and educational tool, but to systematize all human knowledge. The absence of a separate index and the grouping of articles into parallel encyclopaedias (the Micropædia and Macropædia) provoked a "firestorm of criticism" of the initial 15th edition. In response, the 15th edition was completely reorganized and indexed for a re-release in 1985. This second version of the 15th edition continued to be published and revised until the 2010 print version. The official title of the 15th edition is the New Encyclopædia Britannica, although it has also been promoted as Britannica 3. On 9 March 1976, the US Federal Trade Commission entered an opinion and order enjoining Encyclopædia Britannica, Inc.
from using: a) deceptive advertising practices in recruiting sales agents and obtaining sales leads, and b) deceptive sales practices in the door-to-door presentations of its sales agents. 1994–present In the fifth era (1994–present), digital versions have been developed and released on optical media and online. In 1996, the Britannica was bought by Jacqui Safra at well below its estimated value, owing to the company's financial difficulties. Encyclopædia Britannica, Inc. split in 1999. One part retained the company name and developed the print version, and the other, Britannica.com Inc., developed digital versions. Since 2001, the two companies have shared a CEO, Ilan Yeshua, who has continued Powell's strategy of introducing new products with the Britannica name. In March 2012, Britannica's president, Jorge Cauz, announced that the company would not produce any new print editions of the encyclopaedia, with the 2010 15th edition being the last. The company will focus only on the online edition and other educational tools. Britannica's final print edition was in 2010, a 32-volume set. Britannica Global Edition was also printed in 2010, containing 30 volumes and 18,251 pages, with 8,500 photographs, maps, flags, and illustrations in smaller "compact" volumes, as well as over 40,000 articles written by scholars from across the world, including Nobel Prize winners. Unlike the 15th edition, it did not contain Macropædia and Micropædia sections, but ran A through Z as all editions up through the 14th had. The following is Britannica's description of the work: In 2020, Encyclopædia Britannica, Inc. released the Britannica All New Children's Encyclopedia: What We Know and What We Don't, an encyclopedia aimed primarily at younger readers, covering major topics. The encyclopedia was widely praised for bringing back the print format. It was Britannica's first encyclopedia for children since 1984. Dedications The Britannica was dedicated to the reigning British monarch from 1788 to 1901 and then, upon its sale to an American partnership, to the British monarch and the President of the United States. Thus, the 11th edition is "dedicated by Permission to His Majesty George the Fifth, King of Great Britain and Ireland and of the British Dominions beyond the Seas, Emperor of India, and to William Howard Taft, President of the United States of America." The order of the dedications has changed with the relative power of the United States and Britain, and with relative sales; the 1954 version of the 14th edition is "Dedicated by Permission to the Heads of the Two English-Speaking Peoples, Dwight David Eisenhower, President of the United States of America, and Her Majesty, Queen Elizabeth the Second." Consistent with this tradition, the 2007 version of the current 15th edition was "dedicated by permission to the current President of the United States of America, George W. Bush, and Her Majesty, Queen Elizabeth II", while the 2010 version of the current 15th edition is "dedicated by permission to Barack Obama, President of the United States of America, and Her Majesty Queen Elizabeth II." Edition summary See also Encyclopædia Britannica Films Great Books of the Western World List of encyclopedias by branch of knowledge List of encyclopedias by date List of online encyclopedias Notes References Further reading Boyles, Denis (2016). Everything Explained That Is Explainable: On the Creation of the Encyclopædia Britannica's Celebrated Eleventh Edition, 1910–1911. Online review. Greenstein, Shane, and Michelle Devereux (2006).
"The Crisis at Encyclopædia Britannica" case history, Kellogg School of Management, Northwestern University. Lee, Timothy. Techdirt Interviews Britannica President Jorge Cauz'', Techdirt.com, 2 June 2008 External links Encyclopaedia Britannica at the National Library of Scotland, first ten editions (and supplements) in PDF format. Encyclopaedia Britannica at the Online Books Page, currently including the 1st, 3rd, 4th, 6th and 11th editions in multiple formats. 3rd edition, (1797, first volume, use search facility for others) at Bavarian State Library MDZ-Reader | Band | Encyclopaedia Britannica; or, a dictionary of arts, sciences, and miscellaneous literature | Encyclopaedia Britannica; or,
many of these proteins vary depending on the menstrual cycle; for example, the progesterone receptor and thyrotropin-releasing hormone are both expressed in the proliferative phase, and PAEP is expressed in the secretory phase. Other proteins, such as the HOX11 protein that is required for female fertility, are expressed in endometrial stromal cells throughout the menstrual cycle. Certain specific proteins, such as the estrogen receptor, are also expressed in other types of female tissue, such as the cervix, fallopian tubes, ovaries and breast. Microbiome speculation The uterus and endometrium were for a long time thought to be sterile. The cervical plug of mucosa was seen to prevent the entry of any microorganisms ascending from the vagina. In the 1980s this view was challenged when it was shown that uterine infections could arise from weaknesses in the barrier of the cervical plug. Organisms from the vaginal microbiota could enter the uterus during uterine contractions in the menstrual cycle. Further studies sought to identify microbiota specific to the uterus, which would be of help in identifying cases of unsuccessful IVF and miscarriages. Their findings were seen to be unreliable due to the possibility of cross-contamination in the sampling procedures used. The well-documented presence of Lactobacillus species, for example, was easily explained by an increase in the vaginal population being able to seep into the cervical mucus. Another study highlighted the flaws of the earlier studies, including cross-contamination. It was also argued that the evidence from studies using the germ-free offspring of axenic (germ-free) animals clearly showed the sterility of the uterus. The authors concluded that, in light of these findings, there was no uterine microbiome. The normal dominance of Lactobacilli in the vagina is seen as a marker for vaginal health. However, in the uterus this much lower population is seen as invasive in a closed environment that is highly regulated by female sex hormones, and that could have unwanted consequences. In studies of endometriosis, Lactobacillus is not the dominant type, and there are higher levels of Streptococcus and Staphylococcus species. Half of the cases of bacterial vaginosis showed a polymicrobial biofilm attached to the endometrium. Function The endometrium is the innermost lining layer of the uterus, and functions to prevent adhesions between the opposed walls of the myometrium, thereby maintaining the patency of the uterine cavity. During the menstrual cycle or estrous cycle, the endometrium grows to a thick, blood vessel-rich, glandular tissue layer. This represents an optimal environment for the implantation of a blastocyst upon its arrival in the uterus. The endometrium is central, echogenic (detectable using ultrasound scanners), and has an average thickness of 6.7 mm. During pregnancy, the glands and blood vessels in the endometrium further increase in size and number. Vascular spaces fuse and become interconnected, forming the placenta, which supplies oxygen and nutrition to the embryo and fetus. Cycle The endometrial lining undergoes cyclic regeneration. Humans, apes, and some other species display the menstrual cycle, whereas most other mammals are subject to an estrous cycle. In both cases, the endometrium initially proliferates under the influence of estrogen. However, once ovulation occurs, the ovary (specifically the corpus luteum) will produce much larger amounts of progesterone.
This changes the proliferative pattern of the endometrium to a secretory lining. Eventually, the secretory lining provides a hospitable environment for one or more blastocysts. Upon fertilization, the egg may implant into the uterine wall and provide feedback to the body with human chorionic gonadotropin (HCG). HCG provides continued feedback throughout pregnancy by maintaining the corpus luteum, which will continue its role of releasing progesterone and estrogen. The endometrial lining is either reabsorbed (estrous cycle) or shed (menstrual cycle). In the latter case, the process of shedding involves the breaking down of the lining, the tearing of small connective blood vessels, and the loss of the tissue and blood that had constituted it through the vagina. The entire process occurs over a period of several days. Menstruation may be accompanied by a series of uterine contractions; these help expel the menstrual endometrium. In case of implantation, however, the endometrial lining is neither absorbed nor shed. Instead, it remains as decidua. The decidua becomes part of the placenta; it provides support and protection for the gestation. If there is inadequate stimulation of the lining, due to lack of hormones, the endometrium remains thin and inactive. In humans, this will result in amenorrhea, or the absence of a menstrual period. After menopause, the lining is often described as being atrophic. In contrast, endometrium that is chronically exposed to estrogens, but not to progesterone, may become hyperplastic. Long-term use of oral contraceptives with highly potent progestins can also induce endometrial atrophy. In humans, the cycle of building and shedding the endometrial lining lasts an average of 28 days. The endometrium develops at different rates in different mammals. Various factors including
the seasons, climate, and stress can affect its development. The endometrium itself produces certain hormones at different stages of the cycle, and this affects other parts of the reproductive system. Diseases related to the endometrium Chorionic tissue can result in marked endometrial changes, known as an Arias-Stella reaction, that have
from America and England (early and recently) that computers may have played music earlier, but thorough research has debunked these stories, as there is no evidence to support the newspaper reports (some of which were obviously speculative). Research has shown that people speculated about computers playing music, possibly because computers would make noises, but there is no evidence that they actually did so. The world's first computer to play music was CSIRAC, which was designed and built by Trevor Pearcey and Maston Beard in the late 1940s. Mathematician Geoff Hill programmed the CSIRAC to play popular musical melodies from the very early 1950s. In 1951 it publicly played the "Colonel Bogey March", of which no known recordings exist. However, CSIRAC played standard repertoire and was not used to extend musical thinking or composition practice, which is current computer-music practice. The first music to be performed by a computer in England was a performance of the British National Anthem that was programmed by Christopher Strachey on the Ferranti Mark I, late in 1951. Later that year, short extracts of three pieces were recorded there by a BBC outside broadcasting unit: the National Anthem, "Baa, Baa, Black Sheep", and "In the Mood"; this is recognised as the earliest recording of a computer playing music. This recording can be heard at this Manchester University site. Researchers at the University of Canterbury, Christchurch, declicked and restored this recording in 2016, and the results may be heard on SoundCloud. The late 1950s, 1960s, and 1970s also saw the development of large mainframe computer synthesis. Starting in 1957, Max Mathews of Bell Labs developed the MUSIC programs, culminating in MUSIC V, a direct digital synthesis language. Laurie Spiegel developed the algorithmic musical composition software "Music Mouse" (1986) for Macintosh, Amiga, and Atari computers. Stochastic music An important new development was the advent of computers to compose music, as opposed to manipulating or creating sounds. Iannis Xenakis began what is called musique stochastique, or stochastic music, a composing method that uses mathematical probability systems. Different probability algorithms were used to create a piece under a set of parameters. Xenakis used computers to compose pieces like ST/4 for string quartet and ST/48 for orchestra (both 1962), Morsima-Amorsima, ST/10, and Atrées. He developed the computer system UPIC for translating graphical images into musical results and composed Mycènes Alpha (1978) with it. Live electronics In Europe in 1964, Karlheinz Stockhausen composed Mikrophonie I for tam-tam, hand-held microphones, filters, and potentiometers, and Mixtur for orchestra, four sine-wave generators, and four ring modulators. In 1965 he composed Mikrophonie II for choir, Hammond organ, and ring modulators. In 1966–67, Reed Ghazala discovered and began to teach "circuit bending"—the application of the creative short circuit, a process of chance short-circuiting, creating experimental electronic instruments, exploring sonic elements mainly of timbre and with less regard to pitch or rhythm, and influenced by John Cage's aleatoric music concept. Cosey Fanni Tutti's performance art and musical career explored the concept of 'acceptable' music and she went on to explore the use of sound as a means of desire or discomfort. Wendy Carlos performed selections from her album Switched-On Bach on stage with a synthesizer with the St.
Louis Symphony Orchestra; another live performance was with the Kurzweil Baroque Ensemble for "Bach at the Beacon" in 1997. In June 2018, Suzanne Ciani released LIVE Quadraphonic, a live album documenting her first solo performance on a Buchla synthesizer in 40 years. It was one of the first quadraphonic vinyl releases in over 30 years. Japanese instruments In the 1960s, Japanese electronic musical instruments began influencing the international music industry. Ikutaro Kakehashi, who founded Ace Tone in 1960, developed his own version of electronic percussion that had already been popular on overseas electronic organs. At NAMM 1964, he revealed it as the R-1 Rhythm Ace, a hand-operated percussion device that played electronic drum sounds manually as the user pushed buttons, in a similar fashion to modern electronic drum pads. In 1963, Korg released the Donca-Matic DA-20, an electro-mechanical drum machine. In 1965, Nippon Columbia patented a fully electronic drum machine. Korg released the Donca-Matic DC-11 electronic drum machine in 1966; they followed it with the Korg Mini Pops, which was developed as an option for the Yamaha Electone electric organ. Korg's Stageman and Mini Pops series were notable for "natural metallic percussion" sounds and for incorporating controls for drum "breaks and fill-ins." In 1967, Ace Tone founder Ikutaro Kakehashi patented a preset rhythm-pattern generator using a diode matrix circuit, similar to Seeburg's earlier patent filed in 1964 (see Drum machine#History), which he released as the FR-1 Rhythm Ace drum machine the same year. It offered 16 preset patterns, and four buttons to manually play each instrument sound (cymbal, claves, cowbell and bass drum). The rhythm patterns could also be cascaded together by pushing multiple rhythm buttons simultaneously, and the possible combinations of rhythm patterns numbered more than a hundred. Ace Tone's Rhythm Ace drum machines found their way into popular music from the late 1960s, followed by Korg drum machines in the 1970s. Kakehashi later left Ace Tone and founded Roland Corporation in 1972, with Roland synthesizers and drum machines becoming highly influential for the next several decades. The company would go on to have a big impact on popular music, and do more to shape popular electronic music than any other company. Turntablism has origins in the invention of direct-drive turntables. Early belt-drive turntables were unsuitable for turntablism, since they had a slow start-up time, and they were prone to wear-and-tear and breakage, as the belt would break from backspin or scratching. The first direct-drive turntable was invented by Shuichi Obata, an engineer at Matsushita (now Panasonic), based in Osaka, Japan. It eliminated belts, and instead employed a motor to directly drive a platter on which a vinyl record rests. In 1969, Matsushita released it as the SP-10, the first direct-drive turntable on the market, and the first in their influential Technics series of turntables. It was succeeded by the Technics SL-1100 and SL-1200 in the early 1970s, and they were widely adopted by hip hop musicians, with the SL-1200 remaining the most widely used turntable in DJ culture for several decades. Jamaican dub music In Jamaica, a form of popular electronic music emerged in the 1960s, dub music, rooted in sound system culture.
Dub music was pioneered by studio engineers, such as Sylvan Morris, King Tubby, Errol Thompson, Lee "Scratch" Perry, and Scientist, producing reggae-influenced experimental music with electronic sound technology, in recording studios and at sound system parties. Their experiments included forms of tape-based composition comparable to aspects of musique concrète, an emphasis on repetitive rhythmic structures (often stripped of their harmonic elements) comparable to minimalism, the electronic manipulation of spatiality, the sonic electronic manipulation of pre-recorded musical materials from mass media, deejays toasting over pre-recorded music comparable to live electronic music, remixing music, turntablism, and the mixing and scratching of vinyl. Despite the limited electronic equipment available to dub pioneers such as King Tubby and Lee "Scratch" Perry, their experiments in remix culture were musically cutting-edge. King Tubby, for example, was a sound system proprietor and electronics technician, whose small front-room studio in the Waterhouse ghetto of western Kingston was a key site of dub music creation. Late 1960s to early 1980s Rise of popular electronic music In the late 1960s, pop and rock musicians, including the Beach Boys and the Beatles, began to use electronic instruments, like the theremin and Mellotron, to supplement and define their sound. In his book Electronic and Experimental Music, Thom Holmes recognises the Beatles' 1966 recording "Tomorrow Never Knows" as the song that "ushered in a new era in the use of electronic music in rock and pop music" due to the band's incorporation of tape loops and reversed and speed-manipulated tape sounds. Also in the late 1960s, the music duo Silver Apples and experimental rock bands like White Noise and the United States of America, are regarded as pioneers to the electronic rock and electronica genres for their work in melding psychedelic rock with oscillators and synthesizers. The 1969 instrumental titled "Popcorn" written by Gershon Kingsley, a German-American composer who released two albums with the French electronic musician Jean-Jacques Perrey, became a worldwide success due to the 1972 version made by Hot Butter. By the end of the 1960s, the Moog synthesizer took a leading place in the sound of emerging progressive rock with bands including Pink Floyd, Yes, Emerson, Lake & Palmer, and Genesis making them part of their sound. Instrumental prog rock was particularly significant in continental Europe, allowing bands like Kraftwerk, Tangerine Dream, Can, Neu!, and Faust to circumvent the language barrier. Their synthesiser-heavy "krautrock", along with the work of Brian Eno (for a time the keyboard player with Roxy Music), would be a major influence on subsequent electronic rock. Ambient dub was pioneered by King Tubby and other Jamaican sound artists, using DJ-inspired ambient electronics, complete with drop-outs, echo, equalization and psychedelic electronic effects. It featured layering techniques and incorporated elements of world music, deep basslines and harmonic sounds. Techniques such as a long echo delay were also used. Other notable artists within the genre include Dreadzone, Higher Intelligence Agency, The Orb, Ott, Loop Guru, Woob and Transglobal Underground. Dub music influenced electronic musical techniques later adopted by hip hop music when Jamaican immigrant DJ Kool Herc in the early 1970s introduced Jamaica's sound system culture and dub music techniques to America. 
One such technique that became popular in hip hop culture was playing two copies of the same record on two turntables in alternation, extending the b-dancers' favorite section. The turntable eventually went on to become the most visible electronic musical instrument, and occasionally the most virtuosic, in the 1980s and 1990s. Electronic rock was also produced by several Japanese musicians, including Isao Tomita's Electric Samurai: Switched on Rock (1972), which featured Moog synthesizer renditions of contemporary pop and rock songs, and Osamu Kitajima's progressive rock album Benzaiten (1974). The mid-1970s saw the rise of electronic art music musicians such as Jean Michel Jarre, Vangelis, Tomita and Klaus Schulze, who were significant influences on the development of new-age music. The hi-tech appeal of these works created, for some years, a trend of listing the electronic musical equipment employed on the album sleeves as a distinctive feature. Electronic music began to appear regularly in radio programming and best-seller charts, as with the French band Space and their 1977 single Magic Fly. In this era, the sound of rock musicians like Mike Oldfield and The Alan Parsons Project (credited with the first rock song to feature a digital vocoder, 1975's The Raven) was also arranged and blended with electronic effects and music, a practice that became much more prominent in the mid-1980s. Jeff Wayne achieved long-lasting success with his 1978 electronic rock musical version of The War of the Worlds. Film soundtracks also benefited from the electronic sound. In 1977, Gene Page recorded a disco version of the hit theme by John Williams from Steven Spielberg's film Close Encounters of the Third Kind. Page's version peaked on the R&B chart at #30 in 1978. The score of the 1978 film Midnight Express, composed by Italian synth pioneer Giorgio Moroder, won the Academy Award for Best Original Score in 1979, as did Vangelis's score for the 1981 film Chariots of Fire. After the arrival of punk rock, a form of basic electronic rock emerged, increasingly using new digital technology to replace other instruments. The American duo Suicide, who arose from the punk scene in New York, utilized drum machines and synthesizers in a hybrid between electronics and punk on their eponymous 1977 album. Pioneering synth-pop bands that enjoyed success for years included Ultravox with their 1977 track "Hiroshima Mon Amour" on Ha!-Ha!-Ha!, Yellow Magic Orchestra with their self-titled album (1978), The Buggles with their prominent 1979 debut single Video Killed the Radio Star, Gary Numan with his solo debut album The Pleasure Principle and single Cars in 1979, Orchestral Manoeuvres in the Dark with their 1979 single Electricity featured on their eponymous debut album, Depeche Mode with their first single Dreaming of Me recorded in 1980 and released on their 1981 album Speak & Spell, A Flock of Seagulls with their 1981 single Talking, New Order with Ceremony in 1981, and The Human League with their 1981 hit Don't You Want Me from their third album Dare. The definition of MIDI and the development of digital audio made the development of purely electronic sounds much easier, with audio engineers, producers and composers frequently exploring the possibilities of virtually every new model of electronic sound equipment launched by manufacturers.
Synth-pop sometimes used synthesizers to replace all other instruments, but it was more common for bands to have one or more keyboardists in their line-ups along with guitarists, bassists, and/or drummers. These developments led to the growth of synth-pop, which, after it was adopted by the New Romantic movement, allowed synthesizers to dominate the pop and rock music of the early 1980s until the style began to fall from popularity in the mid-to-late 1980s. Along with the aforementioned successful pioneers, key acts included Yazoo, Duran Duran, Spandau Ballet, Culture Club, Talk Talk, Japan, and Eurythmics. Synth-pop was taken up across the world, with international hits for acts including Men Without Hats, Trans-X and Lime from Canada, Telex from Belgium, Peter Schilling, Sandra, Modern Talking, Propaganda and Alphaville from Germany, Yello from Switzerland and Azul y Negro from Spain. Also, the synth sound is a key feature of Italo-disco. Some synth-pop bands created futuristic visual styles for themselves to reinforce the idea that electronic sounds were linked primarily with technology, as did the American band Devo and the Spanish band Aviador Dro. Keyboard synthesizers became so common that even heavy metal rock bands, a genre often regarded as the opposite in aesthetics, sound and lifestyle from that of electronic pop artists by fans of both sides, achieved worldwide success with songs such as Van Halen's Jump (1983) and Europe's The Final Countdown (1986), which feature synths prominently. Proliferation of electronic music research institutions Elektronmusikstudion (EMS), formerly known as Electroacoustic Music in Sweden, is the Swedish national centre for electronic music and sound art. The research organisation started in 1964 and is based in Stockholm. STEIM is a center for research and development of new musical instruments in the electronic performing arts, located in Amsterdam, Netherlands. STEIM has existed since 1969. It was founded by Misha Mengelberg, Louis Andriessen, Peter Schat, Dick Raaymakers, Reinbert de Leeuw, and Konrad Boehmer. This group of Dutch composers had fought for the reformation of Amsterdam's feudal music structures; they insisted on Bruno Maderna's appointment as musical director of the Concertgebouw Orchestra and secured the first public funding for experimental and improvised electronic music in the Netherlands. IRCAM in Paris became a major center for computer music research and for the realization and development of the Sogitec 4X computer system, featuring then-revolutionary real-time digital signal processing. Pierre Boulez's Répons (1981) for 24 musicians and 6 soloists used the 4X to transform and route soloists to a loudspeaker system. Barry Vercoe describes one of his experiences with early computer sounds: Keyboard synthesizers Released in 1970 by Moog Music, the Mini-Moog was among the first widely available, portable, and relatively affordable synthesizers. It was for a time the most widely used synthesizer in both popular and electronic art music. Patrick Gleeson, playing live with Herbie Hancock at the beginning of the 1970s, pioneered the use of synthesizers in a touring context, where they were subject to stresses the early machines were not designed for. In 1974, the WDR studio in Cologne acquired an EMS Synthi 100 synthesizer, which many composers used to produce notable electronic works—including Rolf Gehlhaar's Fünf deutsche Tänze (1975), Karlheinz Stockhausen's Sirius (1975–76), and John McGuire's Pulse Music III (1978).
Thanks to the miniaturization of electronics in the 1970s, by the start of the 1980s keyboard synthesizers became lighter and more affordable, integrating into a single slim unit all the necessary audio synthesis electronics and the piano-style keyboard itself, in sharp contrast with the bulky machinery and "cable spaghetti" employed during the 1960s and 1970s. The trend began with analog synthesizers and continued with digital synthesizers and samplers as well (see below). Digital synthesizers In 1975, the Japanese company Yamaha licensed the algorithms for frequency modulation synthesis (FM synthesis) from John Chowning, who had experimented with it at Stanford University since 1971. Yamaha's engineers began adapting Chowning's algorithm for use in a digital synthesizer, adding improvements such as the "key scaling" method to avoid the introduction of distortion that normally occurred in analog systems during frequency modulation. In 1980, Yamaha eventually released the first FM digital synthesizer, the Yamaha GS-1, but at a high price. In 1983, Yamaha introduced the first stand-alone digital synthesizer, the DX7, which also used FM synthesis and would become one of the best-selling synthesizers of all time. The DX7 was known for its recognizable bright tonalities, which were partly due to its unusually high sampling rate of 57 kHz. The Korg Poly-800 is a synthesizer released by Korg in 1983. Its initial list price of $795 made it the first fully programmable synthesizer that sold for less than $1000. It had 8-voice polyphony with one digitally controlled oscillator (DCO) per voice. The Casio CZ-101 was the first and best-selling phase distortion synthesizer in the Casio CZ line. Released in November 1984, it was one of the first (if not the first) fully programmable polyphonic synthesizers available for under $500. The Roland D-50 is a digital synthesizer produced by Roland and released in April 1987. Its features include subtractive synthesis, on-board effects, a joystick for data manipulation, and an analogue synthesis-styled layout design. The external Roland PG-1000 (1987–1990) programmer could also be attached to the D-50 for more complex manipulation of its sounds. Samplers A sampler is an electronic or digital musical instrument which uses sound recordings (or "samples") of real instrument sounds (e.g., a piano, violin or trumpet), excerpts from recorded songs (e.g., a five-second bass guitar riff from a funk song) or found sounds (e.g., sirens and ocean waves). The samples are loaded or recorded by the user or by a manufacturer. These sounds are then played back using the sampler program itself, a MIDI keyboard, sequencer or another triggering device (e.g., electronic drums) to perform or compose music. Because these samples are usually stored in digital memory, the information can be quickly accessed. A single sample may often be pitch-shifted to different pitches to produce musical scales and chords. Before computer memory-based samplers, musicians used tape replay keyboards, which store recordings on analog tape. When a key is pressed the tape head contacts the moving tape and plays a sound. The Mellotron was the most notable model, used by many groups in the late 1960s and the 1970s, but such systems were expensive and heavy due to the multiple tape mechanisms involved, and the range of the instrument was limited to three octaves at the most. To change sounds a new set of tapes had to be installed in the instrument.
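The FM technique described above can be sketched in a few lines of code: a sine-wave modulator varies the instantaneous phase of a sine-wave carrier. The following Python fragment is only an illustrative two-operator example (the file name, frequencies, and modulation index are arbitrary choices, not DX7 parameters; real DX7 voices use six operators with envelope-controlled modulation):

    import math
    import struct
    import wave

    RATE = 44100  # samples per second

    def fm_tone(carrier_hz=440.0, mod_hz=220.0, mod_index=2.0, seconds=1.0):
        # Phase-modulation form of FM: sin(2*pi*fc*t + I*sin(2*pi*fm*t))
        samples = []
        for n in range(int(RATE * seconds)):
            t = n / RATE
            phase = 2 * math.pi * carrier_hz * t + mod_index * math.sin(2 * math.pi * mod_hz * t)
            samples.append(math.sin(phase))
        return samples

    with wave.open("fm_tone.wav", "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)      # 16-bit samples
        f.setframerate(RATE)
        f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in fm_tone()))

Varying the ratio of mod_hz to carrier_hz and the modulation index is what produces FM's characteristic bell-like and metallic spectra.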
The emergence of the digital sampler made sampling far more practical. The earliest digital sampling was done on the EMS Musys system, developed by Peter Grogono (software), David Cockerell (hardware and interfacing), and Peter Zinovieff (system design and operation) at their London (Putney) Studio c. 1969. The first commercially available sampling synthesizer was the Computer Music Melodian by Harry Mendell (1976). First released in 1977–78, the Synclavier I, which used FM synthesis re-licensed from Yamaha and was sold mostly to universities, proved to be highly influential among both electronic music composers and music producers, including Mike Thorne, an early adopter from the commercial world, due to its versatility, its cutting-edge technology, and distinctive sounds. The first polyphonic digital sampling synthesizer was the Australian-produced Fairlight CMI, first available in 1979. These early sampling synthesizers used wavetable sample-based synthesis. Birth of MIDI In 1980, a group of musicians and music merchants met to standardize an interface that new instruments could use to communicate control instructions with other instruments and computers. This standard was dubbed Musical Instrument Digital Interface (MIDI) and resulted from a collaboration between leading manufacturers, initially Sequential Circuits, Oberheim, Roland—and later, other participants that included Yamaha, Korg, and Kawai. A paper was authored by Dave Smith of Sequential Circuits and proposed to the Audio Engineering Society in 1981. Then, in August 1983, the MIDI Specification 1.0 was finalized. MIDI technology allows a single keystroke, control wheel motion, pedal movement, or command from a microcomputer to activate every device in the studio remotely and in synchrony, with each device responding according to conditions predetermined by the composer. MIDI instruments and software made powerful control of sophisticated instruments easily affordable by many studios and individuals. Acoustic sounds became reintegrated into studios via sampling and sampled-ROM-based instruments. Miller Puckette developed graphic signal-processing software for the 4X called Max (after Max Mathews) and later ported it to Macintosh (with Dave Zicarelli extending it for Opcode) for real-time MIDI control, bringing algorithmic composition within reach of most composers with a modest computer programming background. Sequencers and drum machines The early 1980s saw the rise of bass synthesizers, the most influential being the Roland TB-303, a bass synthesizer and sequencer released in late 1981 that later became a fixture in electronic dance music, particularly acid house. One of the first to use it was Charanjit Singh in 1982, though it wouldn't be popularized until Phuture's "Acid Tracks" in 1987. Music sequencers began being used around the mid-20th century, with Tomita's albums of the mid-1970s being later examples. In 1978, Yellow Magic Orchestra were using computer-based technology in conjunction with a synthesiser to produce popular music, making early use of the microprocessor-based Roland MC-8 Microcomposer sequencer. Drum machines, also known as rhythm machines, also began being used around the late 1950s, with a later example being Osamu Kitajima's progressive rock album Benzaiten (1974), which used a rhythm machine along with electronic drums and a synthesizer. In 1977, Ultravox's "Hiroshima Mon Amour" was one of the first singles to use the metronome-like percussion of a Roland TR-77 drum machine.
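The channel-voice messages defined by that 1983 specification are compact; a note-on event, for example, occupies three bytes. The helper below is a minimal illustration of the byte layout only; it builds the bytes but does not talk to any MIDI hardware or library:

    def note_on(channel, key, velocity):
        # Status byte 0x90 plus channel (0-15), then 7-bit key and velocity.
        assert 0 <= channel <= 15 and 0 <= key <= 127 and 0 <= velocity <= 127
        return bytes([0x90 | channel, key, velocity])

    def note_off(channel, key):
        # 0x80 plus channel; a velocity of 0 is typical for a plain note-off.
        return bytes([0x80 | channel, key, 0])

    print(note_on(0, 60, 100).hex())   # '903c64': middle C on channel 1, velocity 100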
In 1980, Roland Corporation released the TR-808, one of the first and most popular programmable drum machines. The first band to use it was Yellow Magic Orchestra in 1980, and it would later gain widespread popularity with the release of Marvin Gaye's "Sexual Healing" and Afrika Bambaataa's "Planet Rock" in 1982. The TR-808 was a fundamental tool in the later Detroit techno scene of the late 1980s, and was the drum machine of choice for Derrick May and Juan Atkins. Chiptunes The characteristic lo-fi sound of chip music was initially the result of the technical limitations of early computers' sound chips and sound cards; however, the sound has since become sought after in its own right. Common, cheap, popular sound chips of the first home computers of the 1980s include the SID of the Commodore 64 and the General Instrument AY series and its clones (such as the Yamaha YM2149), used in the ZX Spectrum, Amstrad CPC, MSX compatibles and Atari ST models, among others. Late 1980s to 1990s Rise of dance music Synth-pop continued into the late 1980s, with a format that moved closer to dance music, including the work of acts such as British duos Pet Shop Boys, Erasure and The Communards, achieving success through much of the 1990s. The trend has continued to the present day with modern nightclubs worldwide regularly playing electronic dance music (EDM). Today, electronic dance music has radio stations, websites, and publications like Mixmag dedicated solely to the genre. Moreover, the genre has found commercial and cultural significance in the United States and North America, thanks to the wildly popular big room house/EDM sound that has been incorporated into U.S. pop music and the rise of large-scale commercial raves such as Electric Daisy Carnival, Tomorrowland and Ultra Music Festival. Electronica On the other hand, a broad group of electronic-based music styles intended for listening rather than strictly for dancing became known under the "electronica" umbrella, which was also a music scene in the early 1990s in the United Kingdom. According to a 1997 Billboard article, "the union of the club community and independent labels" provided the experimental and trend-setting environment in which electronica acts developed and eventually reached the mainstream, citing American labels such as Astralwerks (The Chemical Brothers, Fatboy Slim, The Future Sound of London, Fluke), Moonshine (DJ Keoki), Sims, and City of Angels (The Crystal Method) for popularizing the latest version of electronic music. 2000s and 2010s As computer technology has become more accessible and music software has advanced, interacting with music production technology is now possible using means that bear no relationship to traditional musical performance practices: for instance, laptop performance (laptronica), live coding and Algorave. In general, the term Live PA refers to any live performance of electronic music, whether with laptops, synthesizers, or other devices. Beginning around the year 2000, some software-based virtual studio environments emerged, with products such as Propellerhead's Reason and Ableton Live finding popular appeal. Such tools provide viable and cost-effective alternatives to typical hardware-based production studios, and thanks to advances in microprocessor technology, it is now possible to create high-quality music using little more than a single laptop computer.
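The pattern programming that the TR-808 popularised, and that the software studios mentioned above now reproduce trivially, amounts to a grid of on/off steps per instrument. A toy sketch follows; the instrument names and the 16-step grid are illustrative only, not the 808's actual data format or sounds:

    # One bar of 16 steps; 'x' marks a hit for that instrument on that step.
    PATTERN = {
        "bass drum": "x...x...x...x...",
        "snare":     "....x.......x...",
        "hi-hat":    "x.x.x.x.x.x.x.x.",
    }

    def playback_order(pattern):
        # Yield, for each step, the instruments that should sound on it.
        for step in range(16):
            yield step, [name for name, row in pattern.items() if row[step] == "x"]

    for step, hits in playback_order(PATTERN):
        print(step, hits)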
Such advances have democratized music creation, leading to a massive increase in the amount of home-produced electronic music available to the general public via the internet. Software-based instruments and effect units (so-called "plugins") can be incorporated in a computer-based studio using the VST platform. Some of these instruments are more or less exact replicas of existing hardware (such as the Roland D-50, ARP Odyssey, Yamaha DX7, or Korg M1). In many cases, these software-based instruments are sonically indistinguishable from their physical counterparts. Circuit bending Circuit bending is the modification of battery-powered toys and synthesizers to create new unintended sound effects. It was pioneered by Reed Ghazala in the 1960s, and Ghazala coined the name "circuit bending" in 1992. Modular synth revival Following the circuit bending culture, musicians also began to build their own modular synthesizers, causing a renewed interest in early 1960s designs. Eurorack became a popular system. See also Clavioline Electronic sackbut List of electronic music genres New Interfaces for Musical Expression Ondioline Rave culture Spectral music Tracker music Timeline of electronic music genres Live electronic music List of electronic music festivals
of electronic musical instruments. By the late 1940s, Japanese composers began experimenting with electronic music and institutional sponsorship enabled them to experiment with advanced equipment. Their infusion of Asian music into the emerging genre would eventually support Japan's popularity in the development of music technology several decades later. Following the foundation of electronics company Sony in 1946, composers Toru Takemitsu and Minao Shibata independently explored possible uses for electronic technology to produce music. Takemitsu had ideas similar to musique concrète, which he was unaware of, while Shibata foresaw the development of synthesizers and predicted a drastic change in music. Sony began producing popular magnetic tape recorders for government and public use. The avant-garde collective Jikken Kōbō (Experimental Workshop), founded in 1950, was offered access to emerging audio technology by Sony. The company hired Toru Takemitsu to demonstrate their tape recorders with compositions and performances of electronic tape music. The first electronic tape pieces by the group were "Toraware no Onna" ("Imprisoned Woman") and "Piece B", composed in 1951 by Kuniharu Akiyama. Many of the electroacoustic tape pieces they produced were used as incidental music for radio, film, and theatre. They also held concerts employing a slide show synchronized with a recorded soundtrack. Composers outside of the Jikken Kōbō, such as Yasushi Akutagawa, Saburo Tominaga, and Shirō Fukai, were also experimenting with radiophonic tape music between 1952 and 1953. Musique concrète was introduced to Japan by Toshiro Mayuzumi, who was influenced by a Pierre Schaeffer concert. From 1952, he composed tape music pieces for a comedy film, a radio broadcast, and a radio drama. However, Schaeffer's concept of sound object was not influential among Japanese composers, who were mainly interested in overcoming the restrictions of human performance. This led to several Japanese electroacoustic musicians making use of serialism and twelve-tone techniques, evident in Yoshirō Irino's 1951 dodecaphonic piece "Concerto da Camera", in the organization of electronic sounds in Mayuzumi's "X, Y, Z for Musique Concrète", and later in Shibata's electronic music by 1956. Modelling the NWDR studio in Cologne, NHK established an electronic music studio in Tokyo in 1955, which became one of the world's leading electronic music facilities. The NHK Studio was equipped with technologies such as tone-generating and audio processing equipment, recording and radiophonic equipment, ondes Martenot, Monochord and Melochord, sine-wave oscillators, tape recorders, ring modulators, band-pass filters, and four- and eight-channel mixers. Musicians associated with the studio included Toshiro Mayuzumi, Minao Shibata, Joji Yuasa, Toshi Ichiyanagi, and Toru Takemitsu. The studio's first electronic compositions were completed in 1955, including Mayuzumi's five-minute pieces "Studie I: Music for Sine Wave by Proportion of Prime Number", "Music for Modulated Wave by Proportion of Prime Number" and "Invention for Square Wave and Sawtooth Wave" produced using the studio's various tone-generating capabilities, and Shibata's 20-minute stereo piece "Musique Concrète for Stereophonic Broadcast". American electronic music In the United States, electronic music was being created as early as 1939, when John Cage published Imaginary Landscape, No. 
1, using two variable-speed turntables, frequency recordings, muted piano, and cymbal, but no electronic means of production. Cage composed five more "Imaginary Landscapes" between 1942 and 1952 (one withdrawn), mostly for percussion ensemble, though No. 4 is for twelve radios and No. 5, written in 1952, uses 42 recordings and is to be realized as a magnetic tape. According to Otto Luening, Cage also performed Williams Mix at Donaueschingen in 1954, using eight loudspeakers, three years after his alleged collaboration. Williams Mix was a success at the Donaueschingen Festival, where it made a "strong impression". The Music for Magnetic Tape Project was formed by members of the New York School (John Cage, Earle Brown, Christian Wolff, David Tudor, and Morton Feldman), and lasted three years until 1954. Cage wrote of this collaboration: "In this social darkness, therefore, the work of Earle Brown, Morton Feldman, and Christian Wolff continues to present a brilliant light, for the reason that at the several points of notation, performance, and audition, action is provocative." Cage completed Williams Mix in 1953 while working with the Music for Magnetic Tape Project. The group had no permanent facility, and had to rely on borrowed time in commercial sound studios, including the studio of Louis and Bebe Barron. Columbia-Princeton Center By this time, Columbia University had purchased its first tape recorder—a professional Ampex machine—to record concerts. Vladimir Ussachevsky, who was on the music faculty of Columbia University, was placed in charge of the device, and almost immediately began experimenting with it. Herbert Russcol writes: "Soon he was intrigued with the new sonorities he could achieve by recording musical instruments and then superimposing them on one another." Ussachevsky said later: "I suddenly realized that the tape recorder could be treated as an instrument of sound transformation." On Thursday, 8 May 1952, Ussachevsky presented several demonstrations of tape music/effects that he created at his Composers Forum, in the McMillin Theatre at Columbia University. These included Transposition, Reverberation, Experiment, Composition, and Underwater Valse. In an interview, he stated: "I presented a few examples of my discovery in a public concert in New York together with other compositions I had written for conventional instruments." Otto Luening, who had attended this concert, remarked: "The equipment at his disposal consisted of an Ampex tape recorder . . . and a simple box-like device designed by the brilliant young engineer, Peter Mauzey, to create feedback, a form of mechanical reverberation. Other equipment was borrowed or purchased with personal funds." Just three months later, in August 1952, Ussachevsky traveled to Bennington, Vermont, at Luening's invitation to present his experiments. There, the two collaborated on various pieces. Luening described the event: "Equipped with earphones and a flute, I began developing my first tape-recorder composition. Both of us were fluent improvisors and the medium fired our imaginations." They played some early pieces informally at a party, where "a number of composers almost solemnly congratulated us saying, 'This is it' ('it' meaning the music of the future)." Word quickly reached New York City.
Oliver Daniel telephoned and invited the pair to "produce a group of short compositions for the October concert sponsored by the American Composers Alliance and Broadcast Music, Inc., under the direction of Leopold Stokowski at the Museum of Modern Art in New York. After some hesitation, we agreed. . . . Henry Cowell placed his home and studio in Woodstock, New York, at our disposal. With the borrowed equipment in the back of Ussachevsky's car, we left Bennington for Woodstock and stayed two weeks. . . . In late September 1952, the travelling laboratory reached Ussachevsky's living room in New York, where we eventually completed the compositions." Two months later, on 28 October, Vladimir Ussachevsky and Otto Luening presented the first Tape Music concert in the United States. The concert included Luening's Fantasy in Space (1952)—"an impressionistic virtuoso piece" using manipulated recordings of flute—and Low Speed (1952), an "exotic composition that took the flute far below its natural range." Both pieces were created at the home of Henry Cowell in Woodstock, New York. After several concerts caused a sensation in New York City, Ussachevsky and Luening were invited onto a live broadcast of NBC's Today Show to do an interview demonstration—the first televised electroacoustic performance. Luening described the event: "I improvised some [flute] sequences for the tape recorder. Ussachevsky then and there put them through electronic transformations." The score for Forbidden Planet, by Louis and Bebe Barron, was entirely composed using custom-built electronic circuits and tape recorders in 1956 (but no synthesizers in the modern sense of the word). Australia The world's first computer to play music was CSIRAC, which was designed and built by Trevor Pearcey and Maston Beard. Mathematician Geoff Hill programmed the CSIRAC to play popular musical melodies from the very early 1950s. In 1951 it publicly played the Colonel Bogey March. No recordings of CSIRAC's performances are known to exist, but the music it played has been accurately reconstructed. CSIRAC played standard repertoire, however, and was not used to extend musical thinking or composition practice. The oldest known recordings of computer-generated music are of music played by the Ferranti Mark 1 computer, a commercial version of the Baby machine from the University of Manchester, in the autumn of 1951. The music program was written by Christopher Strachey. Mid-to-late 1950s The impact of computers continued in 1956. Lejaren Hiller and Leonard Isaacson composed Illiac Suite for string quartet, the first complete work of computer-assisted composition using algorithmic composition. "... Hiller postulated that a computer could be taught the rules of a particular style and then called on to compose accordingly." Later developments included the work of Max Mathews at Bell Laboratories, who developed the influential MUSIC I program in 1957, one of the first computer programs to play electronic music. Vocoder technology was also a major development in this early era. In 1956, Stockhausen composed Gesang der Jünglinge, the first major work of the Cologne studio, based on a text from the Book of Daniel. An important technological development of that year was the invention of the Clavivox synthesizer by Raymond Scott with subassembly by Robert Moog. In 1957, Kid Baltan (Dick Raaymakers) and Tom Dissevelt released their debut album, Song Of The Second Moon, recorded at the Philips studio in the Netherlands.
The public remained interested in the new sounds being created around the world, as can be deduced from the inclusion of Varèse's Poème électronique, which was played over four hundred loudspeakers at the Philips Pavilion of the 1958 Brussels World Fair. That same year, Mauricio Kagel, an Argentine composer, composed Transición II. The work was realized at the WDR studio in Cologne. Two musicians performed on the piano, one in the traditional manner, the other playing on the strings, frame, and case. Two other performers used tape to unite the live sounds with their future and their past: prerecorded material to be heard later in the piece, and recordings made earlier in the performance. In 1958, Columbia-Princeton developed the RCA Mark II Sound Synthesizer, the first programmable synthesizer. Prominent composers such as Vladimir Ussachevsky, Otto Luening, Milton Babbitt, Charles Wuorinen, Halim El-Dabh, Bülent Arel and Mario Davidovsky used the RCA Synthesizer extensively in various compositions. One of the most influential composers associated with the early years of the studio was Egypt's Halim El-Dabh, who, after having developed the earliest known electronic tape music in 1944, became more famous for Leiyla and the Poet, a 1959 series of electronic compositions that stood out for its immersion and seamless fusion of electronic and folk music, in contrast to the more mathematical approach used by serial composers of the time such as Babbitt. El-Dabh's Leiyla and the Poet, released as part of the album Columbia-Princeton Electronic Music Center in 1961, would be cited as a strong influence by a number of musicians, ranging from Neil Rolnick, Charles Amirkhanian and Alice Shields to rock musicians Frank Zappa and The West Coast Pop Art Experimental Band. Following the emergence of differences within the GRMC (Groupe de Recherche de Musique Concrète), Pierre Henry, Philippe Arthuys, and several of their colleagues resigned in April 1958. Schaeffer created a new collective, the Groupe de Recherches Musicales (GRM), and set about recruiting new members, including Luc Ferrari, Beatriz Ferreyra, François-Bernard Mâche, Iannis Xenakis, Bernard Parmegiani, and Mireille Chamass-Kyrou. Later arrivals included Ivo Malec, Philippe Carson, Romuald Vandelle, Edgardo Canton and François Bayle. Expansion: 1960s These were fertile years for electronic music—not just for academia, but for independent artists as synthesizer technology became more accessible. By this time, a strong community of composers and musicians working with new sounds and instruments was established and growing. 1960 witnessed the composition of Luening's Gargoyles for violin and tape as well as the premiere of Stockhausen's Kontakte for electronic sounds, piano, and percussion. This piece existed in two versions—one for 4-channel tape, and the other for tape with human performers. "In Kontakte, Stockhausen abandoned traditional musical form based on linear development and dramatic climax. This new approach, which he termed 'moment form', resembles the 'cinematic splice' techniques in early twentieth-century film." The theremin had been in use since the 1920s, but it attained a degree of popular recognition through its use in science-fiction film soundtrack music in the 1950s (e.g., Bernard Herrmann's classic score for The Day the Earth Stood Still). In the UK in this period, the BBC Radiophonic Workshop (established in 1958) came to prominence, thanks in large measure to their work on the BBC science-fiction series Doctor Who.
One of the most influential British electronic artists in this period was Workshop staffer Delia Derbyshire, who is now famous for her 1963 electronic realisation of the iconic Doctor Who theme, composed by Ron Grainer. In 1961 Josef Tal established the Centre for Electronic Music in Israel at The Hebrew University, and in 1962 Hugh Le Caine arrived in Jerusalem to install his Creative Tape Recorder in the centre. In the 1990s, Tal, together with Dr. Shlomo Markel and in cooperation with the Technion – Israel Institute of Technology and the VolkswagenStiftung, conducted a research project (Talmark) aimed at developing a novel musical notation system for electronic music. Milton Babbitt composed his first electronic work using the synthesizer—his Composition for Synthesizer (1961)—which he created using the RCA synthesizer at the Columbia-Princeton Electronic Music Center. Collaborations also occurred across oceans and continents. In 1961, Ussachevsky invited Varèse to the Columbia-Princeton Studio (CPEMC). Upon arrival, Varèse embarked upon a revision of Déserts. He was assisted by Mario Davidovsky and Bülent Arel. The intense activity occurring at CPEMC and elsewhere inspired the establishment of the San Francisco Tape Music Center in 1963 by Morton Subotnick, with additional members Pauline Oliveros, Ramon Sender, Anthony Martin, and Terry Riley. Later, the Center moved to Mills College, directed by Pauline Oliveros, where it is today known as the Center for Contemporary Music. Simultaneously in San Francisco, composer Stan Shaff and equipment designer Doug McEachern presented the first "Audium" concert at San Francisco State College (1962), followed by work at the San Francisco Museum of Modern Art (1963), conceived as the controlled movement of sound in space over time. Twelve speakers surrounded the audience, and four more were mounted on a rotating, mobile-like construction above. In an SFMOMA performance the following year (1964), San Francisco Chronicle music critic Alfred Frankenstein commented, "the possibilities of the space-sound continuum have seldom been so extensively explored". In 1967, the first Audium, a "sound-space continuum", opened, holding weekly performances through 1970. In 1975, enabled by seed money from the National Endowment
Edvard Hagerup. The family name, originally spelled Greig, is associated with the Scottish Clann Ghriogair (Clan Gregor). After the Battle of Culloden in Scotland in 1746, Grieg's great-grandfather, Alexander Greig (1739-1803), originally of Aberdeenshire, travelled widely before settling in Norway about 1770 and establishing business interests in Bergen. Grieg's paternal great-great-grandparents, John (1702-1774) and Anne (1704-1784), are buried in the churchyard of the abandoned Church of St Ethernan in Rathen, Aberdeenshire, Scotland. Grieg's first cousin, twice removed, was Canadian pianist Glenn Gould, whose mother was a Grieg. Edvard Grieg was raised in a musical family. His mother was his first piano teacher and taught him to play at the age of six. Grieg studied in several schools, including Tanks Upper Secondary School. During the summer of 1858, Grieg met the eminent Norwegian violinist Ole Bull, who was a family friend; Bull's brother was married to Grieg's aunt. Bull recognized the 15-year-old boy's talent and persuaded his parents to send him to the Leipzig Conservatory, the piano department of which was directed by Ignaz Moscheles. Grieg enrolled in the conservatory, concentrating on piano, and enjoyed the many concerts and recitals given in Leipzig. He disliked the discipline of the conservatory course of study. An exception was the organ, which was mandatory for piano students. About his study in the conservatory, he wrote to his biographer, Aimar Grønvold, in 1881: "I must admit, unlike Svendsen, that I left Leipzig Conservatory just as stupid as I entered it. Naturally, I did learn something there, but my individuality was still a closed book to me." During the spring of 1860, he survived two life-threatening lung diseases, pleurisy and tuberculosis. Throughout his life, Grieg's health was impaired by a destroyed left lung and considerable deformity of his thoracic spine. He suffered from numerous respiratory infections, and ultimately developed combined lung and heart failure. Grieg was admitted many times to spas and sanatoria both in Norway and abroad. Several of his doctors became his friends. Career During 1861, Grieg made his debut as a concert pianist in Karlshamn, Sweden. In 1862, he finished his studies in Leipzig and had his first concert in his home town, where his programme included Beethoven's Pathétique sonata. In 1863, Grieg went to Copenhagen, Denmark, and stayed there for three years. He met the Danish composers J. P. E. Hartmann and Niels Gade. He also met his fellow Norwegian composer Rikard Nordraak (composer of the Norwegian national anthem), who became a good friend and source of inspiration. Nordraak died in 1866, and Grieg composed a funeral march in his honor. On 11 June 1867, Grieg married his first cousin, Nina Hagerup (1845–1935), a lyric soprano. The next year, their only child, Alexandra, was born. Alexandra died in 1869 from meningitis. During the summer of 1868, Grieg wrote his Piano Concerto in A minor while on holiday in Denmark. Edmund Neupert gave the concerto its premiere performance on 3 April 1869 in the Casino Theatre in Copenhagen. Grieg himself was unable to be there due to conducting commitments in Christiania (now Oslo). During 1868, Franz Liszt, who had not yet met Grieg, wrote a testimonial for him to the Norwegian Ministry of Education, which resulted in Grieg's obtaining a travel grant. The two men met in Rome in 1870. During Grieg's first visit, they examined Grieg's Violin Sonata No. 1, which pleased Liszt greatly. 
On his second visit in April, Grieg brought with him the manuscript of his Piano Concerto, which Liszt proceeded to sightread (including the orchestral arrangement). Liszt's rendition greatly impressed his audience, although Grieg said gently to him that he played the first movement too quickly. Liszt also gave Grieg some advice on orchestration (for example, to give the melody of the second theme in the first movement to a solo trumpet, which Grieg himself chose not to accept). In the 1870s he became friends with the poet Bjørnstjerne Bjørnson, who shared his interest in Norwegian self-government. Grieg set several of his poems to music, including Landkjenning and Sigurd Jorsalfar. Eventually they decided on an opera based on King Olav Trygvason, but a dispute as to whether music or lyrics should be created first led to Grieg being diverted to working on incidental music for Henrik Ibsen's play Peer Gynt, which naturally offended Bjørnson. Eventually their friendship was resumed. The incidental music composed for Peer Gynt at the request of the author contributed to the play's success, and has separately become some of the composer's most familiar music, arranged as orchestral suites. Grieg had close ties with the Bergen Philharmonic Orchestra (Harmonien), and later served as the orchestra's Music Director from 1880 to 1882. In 1888, Grieg met Tchaikovsky in Leipzig. Grieg was impressed by Tchaikovsky, who in turn thought very highly of Grieg's music, praising its beauty, originality and warmth. On 6 December 1897, Grieg and his wife performed some of his music at a private concert at Windsor Castle for Queen Victoria and her court. Grieg was awarded two honorary doctorates, first by the University of Cambridge in 1894 and the second by the University of Oxford in 1906. Later years The Norwegian government provided Grieg with a pension as he reached retirement age. During the spring of 1903, Grieg made nine 78-rpm gramophone recordings of his piano music in Paris. All of these discs have been reissued on
Garibaldi wrote to Lincoln: "Posterity will call you the great emancipator, a more enviable title than any crown could be, and greater than any merely mundane treasure". Mayor Abel Haywood, a representative for workers from Manchester, England, wrote to Lincoln saying, "We joyfully honor you for many decisive steps toward practically exemplifying your belief in the words of your great founders: 'All men are created free and equal.'" The Emancipation Proclamation served to ease tensions with Europe over the North's conduct of the war and, combined with the recent failed Southern offensive at Antietam, to remove any practical chance for the Confederacy to receive foreign support in the war. Gettysburg Address Lincoln's Gettysburg Address in November 1863 made indirect reference to the Proclamation and the ending of slavery as a war goal with the phrase "new birth of freedom". The Proclamation solidified Lincoln's support among the rapidly growing abolitionist element of the Republican Party and ensured that they would not block his re-nomination in 1864. Proclamation of Amnesty and Reconstruction (1863) In December 1863, Lincoln issued his Proclamation of Amnesty and Reconstruction, which dealt with the ways the rebel states could reconcile with the Union. Key provisions required that the states accept the Emancipation Proclamation and thus the freedom of their slaves, and accept the Confiscation Acts, as well as the act banning slavery in United States territories. Postbellum Near the end of the war, abolitionists were concerned that the Emancipation Proclamation would be construed solely as a war measure, Lincoln's original intent, and would no longer apply once fighting ended. They also were increasingly anxious to secure the freedom of all slaves, not just those freed by the Emancipation Proclamation. Thus pressed, Lincoln staked a large part of his 1864 presidential campaign on a constitutional amendment to abolish slavery uniformly throughout the United States. Lincoln's campaign was bolstered by separate votes in both Maryland and Missouri to abolish slavery in those states. Maryland's new constitution abolishing slavery took effect in November 1864. Slavery in Missouri was ended by executive proclamation of its governor, Thomas C. Fletcher, on January 11, 1865. Winning re-election, Lincoln pressed the lame duck 38th Congress to pass the proposed amendment immediately rather than wait for the incoming 39th Congress to convene. In January 1865, Congress sent to the state legislatures for ratification what became the Thirteenth Amendment, banning slavery in all U.S. states and territories. The amendment was ratified by the legislatures of enough states by December 6, 1865, and proclaimed 12 days later. There were approximately 40,000 slaves in Kentucky and 1,000 in Delaware who were liberated then. Critiques In the context of the 19th century and because of its scope, Lincoln's proclamation is arguably "one of the most radical emancipations in the history of the modern world." Nonetheless, as the years went on and American life continued to be deeply unfair towards blacks, cynicism towards Lincoln and the Emancipation Proclamation increased. Perhaps the strongest attack was Lerone Bennett's Forced into Glory: Abraham Lincoln's White Dream (2000), which claimed that Lincoln was a white supremacist who issued the Emancipation Proclamation in lieu of the real racial reforms for which radical abolitionists pushed. In his Lincoln's Emancipation Proclamation, Allen C.
Guelzo noted the professional historians' lack of substantial respect for the document, since it has been the subject of few major scholarly studies. He argued that Lincoln was the US's "last Enlightenment politician" and as such was dedicated to removing slavery strictly within the bounds of law. Other historians have given more credit to Lincoln for what he accomplished within the tensions of his cabinet and a society at war, for his own growth in political and moral stature, and for the promise he held out to the slaves. More might have been accomplished if he had not been assassinated. As Eric Foner wrote: Lincoln was not an abolitionist or Radical Republican, a point Bennett reiterates innumerable times. He did not favor immediate abolition before the war, and held racist views typical of his time. But he was also a man of deep convictions when it came to slavery, and during the Civil War displayed a remarkable capacity for moral and political growth. Kal Ashraf wrote: Perhaps in rejecting the critical dualism–Lincoln as individual emancipator pitted against collective self-emancipators–there is an opportunity to recognise the greater persuasiveness of the combination. In a sense, yes: a racist, flawed Lincoln did something heroic, and not in lieu of collective participation, but next to, and enabled, by it. To venerate a singular –Great Emancipator' may be as reductive as dismissing the significance of Lincoln's actions. Who he was as a man, no one of us can ever really know. So it is that the version of Lincoln we keep is also the version we make. Legacy in the civil rights era Dr. Martin Luther King Jr. Dr. Martin Luther King Jr. made many references to the Emancipation Proclamation during the civil rights movement. These include a speech made at an observance of the hundredth anniversary of the issuing of the Proclamation made in New York City on September 12, 1962 where he placed it alongside the Declaration of Independence as an "imperishable" contribution to civilization, and "All tyrants, past, present and future, are powerless to bury the truths in these declarations". He lamented that despite a history where the United States "proudly professed the basic principles inherent in both documents", it "sadly practiced the antithesis of these principles". He concluded "There is but one way to commemorate the Emancipation Proclamation. That is to make its declarations of freedom real; to reach back to the origins of our nation when our message of equality electrified an unfree world, and reaffirm democracy by deeds as bold and daring as the issuance of the Emancipation Proclamation." King's most famous invocation of the Emancipation Proclamation was in a speech from the steps of the Lincoln Memorial at the 1963 March on Washington for Jobs and Freedom (often referred to as the "I Have a Dream" speech). King began the speech saying "Five score years ago, a great American, in whose symbolic shadow we stand, signed the Emancipation Proclamation. This momentous decree came as a great beacon light of hope to millions of Negro slaves who had been seared in the flames of withering injustice. It came as a joyous daybreak to end the long night of captivity. But one hundred years later, we must face the tragic fact that the Negro is still not free. One hundred years later, the life of the Negro is still sadly crippled by the manacles of segregation and the chains of discrimination." The "Second Emancipation Proclamation" In the early 1960s, Dr. Martin Luther King Jr. 
and his associates developed a strategy to call on President John F. Kennedy to bypass a Southern segregationist opposition in the Congress by issuing an executive order to put an end to segregation. This envisioned document was referred to as the "Second Emancipation Proclamation". President John F. Kennedy On June 11, 1963, President Kennedy appeared on national television to address the issue of civil rights. Kennedy, who had been routinely criticized as timid by some of the leaders of the civil rights movement, told Americans that two black students had been peacefully enrolled in the University of Alabama with the aid of the National Guard despite the opposition of Governor George Wallace. John Kennedy called it a "moral issue". Invoking the centennial of the Emancipation Proclamation he said, In the same speech, Kennedy announced he would introduce comprehensive civil rights legislation to the United States Congress which he did a week later (he continued to push for its passage until his assassination in November 1963). Historian Peniel E. Joseph holds Lyndon Johnson's ability to get that bill, the Civil Rights Act of 1964, passed on July 2, 1964 was aided by "the moral forcefulness of the June 11 speech" that turned "the narrative of civil rights from a regional issue into a national story promoting racial equality and democratic renewal". President Lyndon B. Johnson During the civil rights movement of the 1960s, Lyndon B. Johnson invoked the Emancipation Proclamation holding it up as a promise yet to be fully implemented. As vice president while speaking from Gettysburg on May 30, 1963 (Memorial Day), at the centennial of the Emancipation Proclamation, Johnson connected it directly with the ongoing civil rights struggles of the time saying "One hundred years ago, the slave was freed. One hundred years later, the Negro remains in bondage to the color of his skin... In this hour, it is not our respective races which are at stake—it is our nation. Let those who care for their country come forward, North and South, white and Negro, to lead the way through this moment of challenge and decision... Until justice is blind to color, until education is unaware of race, until opportunity is unconcerned with color of men's skins, emancipation will be a proclamation but not a fact. To the extent that the proclamation of emancipation is not fulfilled in fact, to that extent we shall have fallen short of assuring freedom to the free." As president, Johnson again invoked the proclamation in a speech presenting the Voting Rights Act at a joint session of Congress on Monday, March 15, 1965. This was one week after violence had been inflicted on peaceful civil rights marchers during the Selma to Montgomery marches. Johnson said "... it's not just Negroes, but really it's all of us, who must overcome the crippling legacy of bigotry and injustice. And we shall overcome. As a man whose roots go deeply into Southern soil, I know how agonizing racial feelings are. I know how difficult it is to reshape the attitudes and the structure of our society. But a century has passed—more than 100 years—since the Negro was freed. And he is not fully free tonight. It was more than 100 years ago that Abraham Lincoln—a great President of another party—signed the Emancipation Proclamation. But emancipation is a proclamation and not a fact. A century has passed—more than 100 years—since equality was promised, and yet the Negro is not equal. A century has passed since the day of promise, and the promise is unkept. 
The time of justice has now come, and I tell you that I believe sincerely that no force can hold it back. It is right in the eyes of man and God that it should come, and when it does, I think that day will brighten the lives of every American." In popular culture In the 1963 episode of The Andy Griffith Show, "Andy Discovers America", Andy asks Barney to explain the Emancipation Proclamation to Opie who is struggling with history at school. Barney brags about his history expertise, yet it is apparent he cannot answer Andy's question. He finally becomes frustrated and explains it is a proclamation for certain people who wanted emancipation. In addition, the Emancipation Proclamation was also a main item of discussion in the movie Lincoln (2012) directed by Steven Spielberg. The Emancipation Proclamation is celebrated around the world including on stamps of nations such as the Republic of Togo. The United States commemorative was issued on August 16, 1963, the opening day of the Century of Negro Progress Exposition in Chicago, Illinois. Designed by Georg Olden, an initial printing of 120 million stamps was authorized. See also 1866 Georgia State Freedmen's Conventions Abolition of slavery timeline Act Prohibiting the Return of Slaves – 1862 statute Confiscation Acts District of Columbia Compensated Emancipation Act Emancipation Memorial – a sculpture in Washington, D.C., completed in 1876 Emancipation reform of 1861 – Russia History of slavery in Kentucky History of slavery in Missouri Lieber Code Juneteenth Reconstruction Amendments – amendments added to the Bill of Rights after the Proclamation Slavery Abolition Act 1833 – an act passed by the British parliament abolishing slavery in British colonies with compensation to the owners Slavery in the colonial United States Slave Trade Acts Suez Canal Company Timeline of the civil rights movement United States labor law War Governors' Conference – gave Lincoln the much needed political support to issue the Proclamation Notes Primary sources C. Peter Ripley, Roy E. Finkenbine, Michael F. Hembree, Donald Yacovone, editors, Witness for Freedom: African American Voices on Race, Slavery, and Emancipation (1993)Questia . Further reading Belz, Herman. Emancipation and Equal Rights: Politics and Constitutionalism in the Civil War Era (1978) online Biddle, Daniel R., and Murray Dubin. "'God Is Settling the Account': African American Reaction to Lincoln's Emancipation Proclamation", Pennsylvania Magazine of History and Biography (Jan. 2013) 137#1 57–78. Blackiston, Harry S. "Lincoln's Emancipation Plan." Journal of Negro History 7, no. 3 (1922): 257–277. Blair, William A. and Younger, Karen Fisher, editors. Lincoln's Proclamation: Emancipation Reconsidered (2009) Carnahan, Burrus. Act of Justice: Lincoln's Emancipation Proclamation and the Law of War (2007) Crowther, Edward R. "Emancipation Proclamation". in Encyclopedia of the American Civil War. Heidler, David S. and Heidler, Jeanne T. (2000) Chambers Jr, Henry L. "Lincoln, the Emancipation Proclamation, and Executive Power." Maryland Law Review 73 (2013): 100+ online Ewan, Christopher. "The Emancipation Proclamation and British Public Opinion" The Historian, Vol. 67, 2005 Franklin, John Hope. The Emancipation Proclamation (1963) online Guelzo, Allen C. "How Abe Lincoln Lost the Black Vote: Lincoln and Emancipation in the African American Mind", Journal of the Abraham Lincoln Association (2004) 25#1 Harold Holzer, Edna Greene Medford, and Frank J. Williams. 
The Emancipation Proclamation: Three Views (2006) Harold Holzer. Emancipating Lincoln: The Proclamation in Text, Context, and Memory (2012) Jones, Howard. Abraham Lincoln and a New Birth of Freedom: The Union and Slavery in the Diplomacy of the Civil War (1999) online Mitch Kachun, Festivals of Freedom: Memory and Meaning in African American Emancipation Celebrations, 1808–1915 (2003) Kennon, Donald R. and Paul Finkelman, eds. Lincoln, Congress, and Emancipation (Ohio UP, 2016), 270 pp. Kolchin, Peter, "Reexamining Southern Emancipation in Comparative Perspective," Journal of Southern History, 81#1 (Feb. 2015), 7–40. Litwack, Leon F. Been in the Storm So Long: The Aftermath of Slavery (1979), social history of the end of slavery in the Confederacy McPherson, James M. Ordeal by Fire: the Civil War and Reconstruction (2001 [3rd ed.]), esp. pp. 316–321. Masur, Louis P. Lincoln's Hundred Days: The Emancipation Proclamation and the War for the Union (Harvard University Press; 2012) Nevins, Allan. Ordeal of the
end slavery in peacetime was limited by the Constitution, which, before 1865, committed the issue to individual states. During the American Civil War, however, Lincoln issued the Proclamation under his authority as "Commander-in-Chief of the Army and Navy" under Article II, section 2 of the United States Constitution. As such, he claimed to have the power to free persons held as slaves in those states that were in rebellion "as a fit and necessary war measure for suppressing said rebellion". He did not have Commander-in-Chief authority over the four border slave-holding states that were not in rebellion: Missouri, Kentucky, Maryland and Delaware, and so those states were not named in the Proclamation. The fifth border jurisdiction, West Virginia, where slavery remained legal but was in the process of being abolished, was, in January 1863, still part of the legally recognized, "reorganized" state of Virginia, based in Alexandria, which was in the Union (as opposed to the Confederate state of Virginia, based in Richmond). Coverage The Proclamation applied in the ten states that were still in rebellion in 1863, and thus did not cover the nearly 500,000 slaves in the slave-holding border states (Missouri, Kentucky, Maryland or Delaware) that had not seceded. Those slaves were freed by later separate state and federal actions. The state of Tennessee had already mostly returned to Union control, under a recognized Union government, so it was not named and was exempted. Virginia was named, but exemptions were specified for the 48 counties then in the process of forming the new state of West Virginia, and seven additional counties and two cities in the Union-controlled Tidewater region of Virginia. Also specifically exempted were New Orleans and 13 named parishes of Louisiana, which were mostly under federal control at the time of the Proclamation. These exemptions left unemancipated an additional 300,000 slaves. The Emancipation Proclamation has been ridiculed, notably in an influential passage by Richard Hofstadter, who wrote that it "had all the moral grandeur of a bill of lading" and "declared free all slaves ... precisely where its effect could not reach." These slaves were freed under Lincoln's war powers as "Commander in Chief of the Army and Navy" under Article II, section 2 of the Constitution of the United States. This act cleared up the issue of contraband slaves. It automatically clarified the status of over 100,000 now-former slaves. Some 20,000 to 50,000 slaves were freed the day it went into effect in parts of nine of the ten states to which it applied (Texas being the exception). In every Confederate state (except Tennessee and Texas), the Proclamation went into immediate effect in Union-occupied areas and at least 20,000 slaves were freed at once on January 1, 1863. The Proclamation provided the legal framework for the emancipation of nearly all four million slaves as the Union armies advanced, and committed the Union to end slavery, which was a controversial decision even in the North. Hearing of the Proclamation, more slaves quickly escaped to Union lines as the Army units moved South. As the Union armies advanced through the Confederacy, thousands of slaves were freed each day until nearly all (approximately 3.9 million, according to the 1860 Census) were freed by July 1865. Although the Proclamation had freed most slaves as a war measure, it had not made slavery illegal. 
Of the states that were exempted from the Proclamation, Maryland, Missouri, Tennessee, and West Virginia prohibited slavery before the war ended. In 1863, President Lincoln proposed a moderate plan for the Reconstruction of the captured Confederate State of Louisiana. Only 10% of the state's electorate had to take the loyalty oath. The state was also required to accept the Proclamation and abolish slavery in its new constitution. Identical Reconstruction plans would be adopted in Arkansas and Tennessee. By December 1864, the Lincoln plan abolishing slavery had been enacted in Louisiana, as well as in Arkansas and Tennessee. In Kentucky, Union Army commanders relied on the proclamations offer of freedom to slaves who enrolled in the Army and provided freedom for an enrollee's entire family; for this and other reasons the number of slaves in the state fell by over 70% during the war. However, in Delaware and Kentucky, slavery continued to be legal until December 18, 1865, when the Thirteenth Amendment went into effect. Background Military action prior to emancipation The Fugitive Slave Act of 1850 required individuals to return runaway slaves to their owners. During the war, in May 1861, Union general Benjamin Butler declared that slaves who escaped to Union lines were contraband of war, and accordingly he refused to return them. On May 30, after a cabinet meeting called by President Lincoln, "Simon Cameron, the secretary of war, telegraphed Butler to inform him that his contraband policy 'is approved.'" This decision was controversial because it could have been taken to imply recognition of the Confederacy as a separate, independent sovereign state under international law, a notion that Lincoln steadfastly denied. In addition, as contraband, these people were legally designated as "property" when they crossed Union lines and their ultimate status was uncertain. Governmental action toward emancipation In December 1861, Lincoln sent his first annual message to Congress (the State of the Union Address, but then typically given in writing and not referred to as such). In it he praised the free labor system, as respecting human rights over property rights; he endorsed legislation to address the status of contraband slaves and slaves in loyal states, possibly through buying their freedom with federal taxes, and also the funding of strictly voluntary colonization efforts. In January 1862, Thaddeus Stevens, the Republican leader in the House, called for total war against the rebellion to include emancipation of slaves, arguing that emancipation, by forcing the loss of enslaved labor, would ruin the rebel economy. On March 13, 1862, Congress approved an Act Prohibiting the Return of Slaves, which prohibited "All officers or persons in the military or naval service of the United States" from returning fugitive slaves to their owners. Pursuant to a law signed by Lincoln, slavery was abolished in the District of Columbia on April 16, 1862, and owners were compensated. On June 19, 1862, Congress prohibited slavery in all current and future United States territories (though not in the states), and President Lincoln quickly signed the legislation. This act effectively repudiated the 1857 opinion of the Supreme Court of the United States in the Dred Scott case that Congress was powerless to regulate slavery in U.S. territories. It also rejected the notion of popular sovereignty that had been advanced by Stephen A. 
Douglas as a solution to the slavery controversy, while completing the effort first legislatively proposed by Thomas Jefferson in 1784 to confine slavery within the borders of existing states. On August 6, 1861, the First Confiscation Act freed the slaves who were employed "against the Government and lawful authority of the United States." On July 17, 1862, the Second Confiscation Act freed the slaves "within any place occupied by rebel forces and afterwards occupied by forces of the United States." The Second Confiscation Act, unlike the First Confiscation Act, explicitly provided that all slaves covered by it would be permanently freed, stating in section 10 that "all slaves of persons who shall hereafter be engaged in rebellion against the government of the United States, or who shall in any way give aid or comfort thereto, escaping from such persons and taking refuge within the lines of the army; and all slaves captured from such persons or deserted by them and coming under the control of the government of the United States; and all slaves of such person found on [or] being within any place occupied by rebel forces and afterwards occupied by the forces of the United States, shall be deemed captives of war, and shall be forever free of their servitude, and not again held as slaves." However, Lincoln's position continued to be that, although Congress lacked the power to free the slaves in rebel-held states, he, as commander in chief, could do so if he deemed it a proper military measure. By this time, in the summer of 1862, Lincoln had drafted the preliminary Emancipation Proclamation, which, when he issued it on September 22, 1862, would declare that, on January 1, 1863, he would free the slaves in states still in rebellion. Public opinion of emancipation Abolitionists had long been urging Lincoln to free all slaves. In the summer of 1862, Republican editor Horace Greeley of the highly influential New York Tribune wrote a famous editorial entitled "The Prayer of Twenty Millions" demanding a more aggressive attack on the Confederacy and faster emancipation of the slaves: "On the face of this wide earth, Mr. President, there is not one ... intelligent champion of the Union cause who does not feel ... that the rebellion, if crushed tomorrow, would be renewed if slavery were left in full vigor and that every hour of deference to slavery is an hour of added and deepened peril to the Union." Lincoln responded in his Letter To Horace Greeley from August 22, 1862, in terms of the limits imposed by his duty as president to save the Union: Lincoln scholar Harold Holzer wrote in this context about Lincoln's letter: "Unknown to Greeley, Lincoln composed this after he had already drafted a preliminary Emancipation Proclamation, which he had determined to issue after the next Union military victory. Therefore, this letter, was in truth, an attempt to position the impending announcement in terms of saving the Union, not freeing slaves as a humanitarian gesture. It was one of Lincoln's most skillful public relations efforts, even if it has cast longstanding doubt on his sincerity as a liberator." Historian Richard Striner argues that "for years" Lincoln's letter has been misread as "Lincoln only wanted to save the Union." However, within the context of Lincoln's entire career and pronouncements on slavery this interpretation is wrong, according to Striner. Rather, Lincoln was softening the strong Northern white supremacist opposition to his imminent emancipation by tying it to the cause of the Union. 
This opposition would fight for the Union but not to end slavery, so Lincoln gave them the means and motivation to do both, at the same time. In his 2014 book, Lincoln's Gamble, journalist and historian Todd Brewster asserted that Lincoln's desire to reassert the saving of the Union as his sole war goal was, in fact, crucial to his claim of legal authority for emancipation. Since slavery was protected by the Constitution, the only way that he could free the slaves was as a tactic of war—not as the mission itself. But that carried the risk that when the war ended, so would the justification for freeing the slaves. Late in 1862, Lincoln asked his Attorney General, Edward Bates, for an opinion as to whether slaves freed through a war-related proclamation of emancipation could be re-enslaved once the war was over. Bates had to work through the language of the Dred Scott decision to arrive at an answer, but he finally concluded that they could indeed remain free. Still, a complete end to slavery would require a constitutional amendment. Conflicting advice, to free all slaves, or not free them at all, was presented to Lincoln in public and private. Thomas Nast, a cartoon artist during the Civil War and the late 1800s considered "Father of the American Cartoon", composed many works including a two-sided spread that showed the transition from slavery into civilization after President Lincoln signed the Proclamation. Nast believed in equal opportunity and equality for all people, including enslaved Africans or free blacks. A mass rally in Chicago on September 7, 1862, demanded immediate and universal emancipation of slaves. A delegation headed by William W. Patton met the president at the White House on September 13. Lincoln had declared in peacetime that he had no constitutional authority to free the slaves. Even used as a war power, emancipation was a risky political act. Public opinion as a whole was against it. There would be strong opposition among Copperhead Democrats and an uncertain reaction from loyal border states. Delaware and Maryland already had a high percentage of free blacks: 91.2% and 49.7%, respectively, in 1860. Drafting and issuance of the proclamation Lincoln first discussed the proclamation with his cabinet in July 1862. He drafted his "preliminary proclamation" and read it to Secretary of State William Seward, and Secretary of Navy Gideon Welles, on July 13. Seward and Welles were at first speechless, then Seward referred to possible anarchy throughout the South and resulting foreign intervention; Welles apparently said nothing. On July 22, Lincoln presented it to his entire cabinet as something he had determined to do and he asked their opinion on wording. Although Secretary of War Edwin Stanton supported it, Seward advised Lincoln to issue the proclamation after a major Union victory, or else it would appear as if the Union was giving "its last shriek of retreat". In September 1862, the Battle of Antietam gave Lincoln the victory he needed to issue the Preliminary Emancipation Proclamation. In the battle, though the Union suffered heavier losses than the Confederates and General McClellan allowed the escape of Robert E. Lee's retreating troops, Union forces turned back a Confederate invasion of Maryland, eliminating more than a quarter of Lee's army in the process. On September 22, 1862, five days after Antietam, and while residing at the Soldier's Home, Lincoln called his cabinet into session and issued the Preliminary Emancipation Proclamation. 
According to Civil War historian James M. McPherson, Lincoln told cabinet members that he had made a covenant with God, that if the Union drove the Confederacy out of Maryland, he would issue the Emancipation Proclamation. Lincoln had first shown an early draft of the proclamation to Vice President Hannibal Hamlin, an ardent abolitionist, who was more often kept in the dark on presidential decisions. The final proclamation was issued on January 1, 1863. Although implicitly granted authority by Congress, Lincoln used his powers as Commander-in-Chief of the Army and Navy, "as a necessary war measure" as the basis of the proclamation, rather than the equivalent of a statute enacted by Congress or a constitutional amendment. Some days after issuing the final Proclamation, Lincoln wrote to Major General John McClernand: "After the commencement of hostilities I struggled nearly a year and a half to get along without touching the "institution"; and when finally I conditionally determined to touch it, I gave a hundred days fair notice of my purpose, to all the States and people, within which time they could have turned it wholly aside, by simply again becoming good citizens of the United States. They chose to disregard it, and I made the peremptory proclamation on what appeared to me to be a military necessity. And being made, it must stand." Initially, the Emancipation Proclamation effectively freed only a small percentage of the slaves, namely those who were behind Union lines in areas not exempted. Most slaves were still behind Confederate lines or in exempted Union-occupied areas. Secretary of State William H. Seward commented, "We show our sympathy with slavery by emancipating slaves where we cannot reach them and holding them in bondage where we can set them free." Had any slave state ended its secession attempt before January 1, 1863, it could have kept slavery, at least temporarily. The Proclamation only gave the Lincoln Administration the legal basis to free the slaves in the areas of the South that were still in rebellion on January 1, 1863. But as the Union army advanced into the South, slaves fled to behind its lines, and "[s]hortly after issuing the Emancipation Proclamation, the Lincoln administration lifted the ban on enticing slaves into Union lines." These events contributed to the destruction of slavery. The Emancipation Proclamation also allowed for the enrollment of freed slaves into the United States military. During the war nearly 200,000 black men, most of them ex-slaves, joined the Union Army. Their contributions were significant in winning the war. The Confederacy did not allow slaves in their army as soldiers until the last month before its defeat. Though the counties of Virginia that were soon to form West Virginia were specifically exempted from the Proclamation (Jefferson County being the only exception), a condition of the state's admittance to the Union was that its constitution provide for the gradual abolition of slavery (an immediate emancipation of all slaves was also adopted there in early 1865). Slaves in the border states of Maryland and Missouri were also emancipated by separate state action before the Civil War ended. In Maryland, a new state constitution abolishing slavery in the state went into effect on November 1, 1864. The Union-occupied counties of eastern Virginia and parishes of Louisiana, which had been exempted from the Proclamation, both adopted state constitutions that abolished slavery in April 1864. 
In early 1865, Tennessee adopted an amendment to its constitution prohibiting slavery. Implementation The Proclamation was issued in two parts. The first part, issued on September 22, 1862, was a preliminary announcement outlining the intent of the second part, which officially went into effect 100 days later on January 1, 1863, during the second year of the Civil War. It was Abraham Lincoln's declaration that all slaves would be permanently freed in all areas of the Confederacy that had not already returned to federal control by January 1863. The ten affected states were individually named in the second part (South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, Texas, Virginia, Arkansas, North Carolina). Not included were the Union slave states of Maryland, Delaware, Missouri and Kentucky. Also not named was the state of Tennessee, in which a Union-controlled military government had already been set up, based in the capital, Nashville. Specific exemptions were stated for areas also under Union control on January 1, 1863, namely 48 counties that would soon become West Virginia, seven other named counties of Virginia including Berkeley and Hampshire counties, which were soon added to West Virginia, New Orleans and 13 named parishes nearby. Union-occupied areas of the Confederate states where the proclamation was put into immediate effect by local commanders included Winchester, Virginia, Corinth, Mississippi, the Sea Islands along the coasts of the Carolinas and Georgia, Key West, Florida, and Port Royal, South Carolina. Immediate impact It has been inaccurately claimed that the Emancipation Proclamation did not free a single slave; historian Lerone Bennett Jr. alleged that
January 1944. He and his staff set out to improve the fortifications along the Atlantic Wall with great energy and engineering skill. This was a compromise: Rommel now commanded the 7th and 15th armies; he also had authority over a 20-kilometer-wide strip of coastal land between the Zuiderzee and the mouth of the Loire. The chain of command was convoluted: the air force and navy had their own chiefs, as did the forces in southern and southwestern France and the Panzer group; Rommel also needed Hitler's permission to use the tank divisions. Undeterred, Rommel had millions of mines laid and thousands of tank traps and obstacles set up on the beaches and throughout the countryside, including in fields suitable for glider aircraft landings, the so-called Rommel's asparagus (the Allies would later counter these with Hobart's Funnies). In April 1944, Rommel promised Hitler that the preparations would be complete by 1 May, but by the time of the Allied invasion, the preparations were far from finished. The quality of some of the troops manning them was poor and many bunkers lacked sufficient stocks of ammunition. Rundstedt expected the Allies to invade in the Pas-de-Calais because it was the shortest crossing point from Britain, its port facilities were essential to supplying a large invasion force, and the distance from Calais to Germany was relatively short. Rommel's and Hitler's views on the matter are debated among authors, with both seeming to change their positions. Hitler vacillated between the two strategies. In late April, he ordered the I SS Panzer Corps placed near Paris, far enough inland to be useless to Rommel, but not far enough for Rundstedt. Rommel moved those armoured formations under his command as far forward as possible, ordering General Erich Marcks, commanding the 84th Corps defending the Normandy section, to move his reserves into the frontline. Although Rommel was the dominating personality in Normandy, with Rundstedt willing to delegate most of the responsibilities to him (the central reserve was Rundstedt's idea, but he did not oppose some form of coastal defense, and gradually came under the influence of Rommel's thinking), Rommel's strategy of an armor-supported coastal defense line was opposed by some officers, most notably Leo Geyr von Schweppenburg, who was supported by Guderian. Hitler compromised and gave Rommel three divisions (the 2nd, the 21st and the 116th Panzer), let Rundstedt retain four and turned the other three over to Army Group G, pleasing no one. The Allies staged elaborate deceptions for D-Day (see Operation Fortitude), giving the impression that the landings would be at Calais. Although Hitler himself expected a Normandy invasion for a while, Rommel and most Army commanders in France believed there would be two invasions, with the main invasion coming at the Pas-de-Calais. Rommel drove defensive preparations all along the coast of northern France, particularly concentrating fortification building in the River Somme estuary. By D-Day on 6 June 1944, nearly all the German staff officers, including Hitler's staff, believed that Pas-de-Calais was going to be the main invasion site, and continued to believe so even after the landings in Normandy had occurred. The 5 June storm in the Channel seemed to make a landing very unlikely, and a number of the senior officers were away from their units for training exercises and various other efforts. 
On 4 June the chief meteorologist of the 3rd Air Fleet reported that the weather in the Channel was so poor there could be no landing attempted for two weeks. On 5 June Rommel left France and on 6 June he was at home celebrating his wife's birthday. He was recalled and returned to his headquarters at 10pm. Meanwhile, earlier in the day, Rundstedt had requested the reserves be transferred to his command. At 10am Keitel advised that Hitler declined to release the reserves but that Rundstedt could move the 12th SS Panzer Division Hitlerjugend closer to the coast, with the Panzer-Lehr-Division placed on standby. Later in the day, Rundstedt received authorisation to move additional units in preparation for a counterattack, which Rundstedt decided to launch on 7 June. Upon arrival, Rommel concurred with the plan. By nightfall, Rundstedt, Rommel and Speidel continued to believe that the Normandy landing might have been a diversionary attack, as the Allied deception measures still pointed towards Calais. The 7 June counterattack did not take place because Allied air bombardments prevented the 12th SS's timely arrival. All this left the German command structure in France in disarray during the opening hours of the D-Day invasion. Facing relatively small-scale German counterattacks, the Allies secured five beachheads by nightfall of 6 June, landing 155,000 troops. The Allies pushed ashore and expanded their beachheads despite strong German resistance. Rommel believed that if his armies pulled out of range of Allied naval fire, it would give them a chance to regroup and re-engage the Allies later with a better chance of success. While he managed to convince Rundstedt, they still needed to win over Hitler. At a meeting with Hitler at his Wolfsschlucht II headquarters in Margival in northern France on 17 June, Rommel warned Hitler about the inevitable collapse of the German defences, but was rebuffed and told to focus on military operations. By mid-July the German position was crumbling. On 17 July 1944, as Rommel was returning from visiting the headquarters of the I SS Panzer Corps, a fighter plane piloted by either Charley Fox of 412 Squadron RCAF, Jacques Remlinger of No. 602 Squadron RAF, or Johannes Jacobus le Roux of No. 602 Squadron RAF strafed his staff car near Sainte-Foy-de-Montgommery. The driver sped up and attempted to get off the main roadway, but a 20 mm round shattered his left arm, causing the vehicle to veer off the road and crash into trees. Rommel was thrown from the car, suffering injuries to the left side of his face from glass shards and three fractures to his skull. He was hospitalised with major head injuries, which were assumed to be almost certainly fatal. Plot against Hitler The role that Rommel played in the military's resistance against Hitler or the 20 July plot is difficult to ascertain, as most of the leaders who were directly involved did not survive and only limited documentation on the conspirators' plans and preparations exists. One piece of evidence that points to the possibility that Rommel came to support the assassination plan was General Eberbach's confession to his son (eavesdropped on by British agencies) while in British captivity, which stated that Rommel explicitly said to him that Hitler and his close associates had to be killed because this would be the only way out for Germany. This conversation occurred about a month before Rommel was coerced into committing suicide. 
Other notable evidence includes the papers of Rudolf Hartmann (who survived the later purge) and Carl-Heinrich von Stülpnagel, who were among the leaders of the military resistance (alongside Rommel's chief of staff General Hans Speidel, Colonel Karl-Richard Koßmann, Colonel Eberhard Finckh and Lieutenant Colonel Caesar von Hofacker). These papers, accidentally discovered by historian Christian Schweizer in 2018 while doing research on Rudolf Hartmann, include Hartmann's eyewitness account of a conversation between Rommel and Stülpnagel in May 1944, as well as photos of the mid-May 1944 meeting between the inner circle of the resistance and Rommel at Koßmann's house. According to Hartmann, by the end of May, in another meeting at Hartmann's quarters in Mareil–Marly, Rommel showed "decisive determination" and clear approval of the inner circle's plan. According to a post-war account by Karl Strölin, three of Rommel's friends—the Oberbürgermeister of Stuttgart, Strölin (who had served with Rommel in the First World War), Alexander von Falkenhausen and Stülpnagel—began efforts to bring Rommel into the anti-Hitler conspiracy in early 1944. According to Strölin, sometime in February, Rommel agreed to lend his support to the resistance. On 15 April 1944 Rommel's new chief of staff, Hans Speidel, arrived in Normandy and reintroduced Rommel to Stülpnagel. Speidel had previously been connected to Carl Goerdeler, the civilian leader of the resistance, but not to the plotters led by Claus von Stauffenberg, and came to Stauffenberg's attention only upon his appointment to Rommel's headquarters. The conspirators felt they needed the support of a field marshal on active duty. Erwin von Witzleben, who would have become commander-in-chief of the Wehrmacht had the plot succeeded, was a field marshal, but had been inactive since 1942. The conspirators gave instructions to Speidel to bring Rommel into their circle. Speidel met with former foreign minister Konstantin von Neurath and Strölin on 27 May in Germany, ostensibly at Rommel's request, although the latter was not present. Neurath and Strölin suggested opening immediate surrender negotiations in the West, and, according to Speidel, Rommel agreed to further discussions and preparations. Around the same timeframe, the plotters in Berlin were not aware that Rommel had allegedly decided to take part in the conspiracy. On 16 May, they informed Allen Dulles, through whom they hoped to negotiate with the Western Allies, that Rommel could not be counted on for support. At least initially, Rommel opposed assassinating Hitler. According to some authors, he gradually changed his attitude. After the war, his widow—among others—maintained that Rommel believed an assassination attempt would spark civil war in Germany and Austria, and Hitler would have become a martyr for a lasting cause. Instead, Rommel reportedly suggested that Hitler be arrested and brought to trial for his crimes; he did not attempt to implement this plan when Hitler visited Margival, France, on 17 June. The arrest plan would have been highly improbable, as Hitler's security was extremely tight. Rommel would have known this, having commanded Hitler's army protection detail in 1939. He was in favour of peace negotiations, and repeatedly urged Hitler to negotiate with the Allies, which is dubbed by some as "hopelessly naive", considering no one would trust Hitler, and "as naive as it was idealistic, the attitude he showed to the man he had sworn loyalty". 
According to Reuth, the reason Lucie Rommel did not want her husband to be associated with any conspiracy was that even after the war, the German population neither grasped nor wanted to comprehend the reality of the genocide, thus conspirators were still treated as traitors and outcasts. On the other hand, the resistance depended on the reputation of Rommel to win over the population. Some officers who had worked with Rommel also recognized the relationship between Rommel and the resistance: Westphal said that Rommel did not want any more senseless sacrifices. Butler, using Ruge's recollections, reports that when told by Hitler himself that "no one will make peace with me", Rommel told Hitler that if he was the obstacle for peace, he should resign or kill himself, but Hitler insisted on fanatical defense. Reuth, based on Jodl's testimony, reports that Rommel forcefully presented the situation and asked for political solutions from Hitler, who rebuffed that Rommel should leave politics to him. Brighton comments that Rommel seemed devoted, even though he did not have much faith in Hitler anymore, considering he kept informing Hitler in person and by letter about his changing beliefs, despite facing a military dilemma as well as a personal struggle. Lieb remarks that Rommel's attitude in describing the situation honestly and requiring political solutions was almost without precedent and contrary to the attitude of many other generals. Remy comments that Rommel put himself and his family (which he had briefly considered evacuating to France, but refrained from doing so) at risk for the resistance out of a combination of his concern for the fate of Germany, his indignation at atrocities and the influence of people around him. On 15 July, Rommel wrote a letter to Hitler giving him a "last chance" to end the hostilities with the Western Allies, urging Hitler to "draw the proper conclusions without delay". What Rommel did not know was that the letter took two weeks to reach Hitler because of Kluge's precautions. Various authors report that many German generals in Normandy, including some SS officers like Hausser, Bittrich, Dietrich (a hard-core Nazi and Hitler's long-time supporter) and Rommel's former opponent Geyr von Schweppenburg pledged support to him, even against Hitler's orders, while Kluge supported him with much hesitation. Rundstedt encouraged Rommel to carry out his plans but refused to do anything himself, remarking that it had to be a man who was still young and loved by the people, while Erich von Manstein was also approached by Rommel but categorically refused, although he did not report them to Hitler either. Peter Hoffmann reports that he also attracted into his orbit officials who had previously refused to support the conspiracy, like Julius Dorpmüller and Karl Kaufmann (according to Russell A. Hart, reliable details of the conversations are now lost, although they certainly met). On 17 July, Rommel was incapacitated by an Allied air attack, which many authors describe as a fateful event that drastically altered the outcome of the bomb plot. Writer Ernst Jünger commented: "The blow that felled Rommel ... robbed the plan of the shoulders that were to be entrusted the double weight of war and civil war - the only man who had enough naivety to counter the simple terror that those he was about to go against possessed." After the failed bomb attack of 20 July, many conspirators were arrested and the dragnet expanded to thousands. 
Rommel was first implicated when Stülpnagel, after his suicide attempt, repeatedly muttered "Rommel" in delirium. Under torture, Hofacker named Rommel as one of the participants. Additionally, Goerdeler had written down Rommel's name on a list as a potential Reich President (according to Strölin, they had not yet managed to announce this intention to Rommel, and he probably never heard of it until the end of his life). On 27 September, Martin Bormann submitted to Hitler a memorandum which claimed that "the late General Stülpnagel, Colonel Hofacker, Kluge's nephew who has been executed, Lieutenant Colonel Rathgens, and several ... living defendants have testified that Field Marshal Rommel was perfectly in the picture about the assassination plan and has promised to be at the disposal of the New Government." Gestapo agents were sent to Rommel's house in Ulm and placed him under surveillance. Historian Peter Lieb considers the memorandum, as well as Eberbach's conversation and the testimonies of surviving resistance members (including Hartmann), to be the three key sources that indicate Rommel's support of the assassination plan. He further notes that while Speidel had an interest in promoting his own post-war career, his testimonies should not be dismissed, considering his bravery as an early resistance figure. Remy writes that even more important than Rommel's attitude to the assassination is the fact that Rommel had his own plan to end the war. He began to contemplate this plan some months after El Alamein, pursued it as a lonely decision made with conviction, and in the end managed to bring military leaders in the West to his side. Death Rommel's case was turned over to the "Court of Military Honour"—a drumhead court-martial convened to decide the fate of officers involved in the conspiracy. The court included Generalfeldmarschall Wilhelm Keitel, Generalfeldmarschall Gerd von Rundstedt, Generaloberst Heinz Guderian, General der Infanterie Walther Schroth and Generalleutnant Karl-Wilhelm Specht, with General der Infanterie Karl Kriebel and Generalleutnant Heinrich Kirchheim (whom Rommel had fired after Tobruk in 1941) as deputy members and Generalmajor Ernst Maisel as protocol officer. The Court acquired information from Speidel, Hofacker and others that implicated Rommel, with Keitel and Ernst Kaltenbrunner assuming that he had taken part in the subversion. Keitel and Guderian then made the decision that favoured Speidel's case and at the same time shifted the blame to Rommel. By normal procedure, this would lead to Rommel's being brought to Roland Freisler's People's Court, a kangaroo court that always decided in favour of the prosecution. However, Hitler knew that having Rommel branded and executed as a traitor would severely damage morale on the home front. He thus decided to offer Rommel the chance to take his own life. Two generals from Hitler's headquarters, Wilhelm Burgdorf and Ernst Maisel, visited Rommel at his home on 14 October 1944. Burgdorf informed him of the charges against him and offered him three options: he could choose to defend himself personally in front of Hitler in Berlin; if he refused to do so (which would be taken as an admission of guilt), he could face the People's Court, which would have been tantamount to a death sentence; or he could choose to commit suicide. In the former case, his family would have suffered even before the all-but-certain conviction and execution, and his staff would have been arrested and executed as well. 
In the latter case, the government would claim that he died a hero and bury him with full military honours, and his family would receive full pension payments. In support of the suicide option, Burgdorf had brought a cyanide capsule. Rommel opted to commit suicide, and explained his decision to his wife and son. Wearing his Afrika Korps jacket and carrying his field marshal's baton, he got into Burgdorf's car, driven by SS-Stabsscharführer Heinrich Doose, and was driven out of the village. After stopping, Doose and Maisel walked away from the car leaving Rommel with Burgdorf. Five minutes later Burgdorf gestured to the two men to return to the car, and Doose noticed that Rommel was slumped over, having taken the cyanide. He died before being taken to the Wagner-Schule field hospital. Ten minutes later, the group telephoned Rommel's wife to inform her of his death. The official notice of Rommel's death as reported to the public stated that he had died of either a heart attack or a cerebral embolism—a complication of the skull fractures he had suffered in the earlier strafing of his staff car. To strengthen the story, Hitler ordered an official day of mourning in commemoration of his death. As promised, Rommel was given a state funeral but it was held in Ulm instead of Berlin as had been requested by Rommel. Hitler sent Field Marshal Rundstedt (who was unaware that Rommel had died as a result of Hitler's orders) as his representative to the funeral. The truth behind Rommel's death became known to the Allies when intelligence officer Charles Marshall interviewed Rommel's widow, Lucia Rommel, as well as from a letter by Rommel's son Manfred in April 1945. Rommel's grave is located in Herrlingen, a short distance west of Ulm. For decades after the war on the anniversary of his death, veterans of the Africa campaign, including former opponents, would gather at his tomb in Herrlingen. Style as military commander On the Italian front in the First World War, Rommel was a successful tactician in fast-developing mobile battle and this shaped his subsequent style as a military commander. He found that taking initiative and not allowing the enemy forces to regroup led to victory. Some authors argue that his enemies were often less organised, second-rate, or depleted, and his tactics were less effective against adequately led, trained and supplied opponents and proved insufficient in the later years of the war. Others point out that through his career, he frequently fought while out-numbered and out-gunned, sometimes overwhelmingly so, while having to deal with internal opponents in Germany who hoped that he would fail. Rommel is praised by numerous authors as a great leader of men. The historian and journalist Basil Liddell Hart concludes that he was a strong leader worshipped by his troops, respected by his adversaries and deserving to be named as one of the "Great Captains of History". Owen Connelly concurs, writing that "No better exemplar of military leadership can be found" and quoting Friedrich von Mellenthin on the inexplicable mutual understanding that existed between Rommel and his troops. Hitler, though, remarked that, "Unfortunately Field-Marshal Rommel is a very great leader full of drive in times of success, but an absolute pessimist when he meets the slightest problems." Telp criticises Rommel for not extending the benevolence he showed in promoting his own officers' careers to his peers, whom he ignored or slighted in his reports. 
Taking his opponents by surprise and creating uncertainty in their minds were key elements in Rommel's approach to offensive warfare: he took advantage of sand storms and the dark of night to conceal the movement of his forces. He was aggressive and often directed battle from the front or piloted a reconnaissance aircraft over the lines to get a view of the situation. When the British mounted a commando raid deep behind German lines in an effort to kill Rommel and his staff on the eve of their Crusader offensive, Rommel was indignant that the British expected to find his headquarters behind his front. Mellenthin and Harald Kuhn write that at times in North Africa his absence from a position of communication made command of the battles of the Afrika Korps difficult. Mellenthin lists Rommel's counterattack during Operation Crusader as one such instance. Butler concurred, saying that leading from the front is a good concept but Rommel took it so far – he frequently directed the actions of a single company or battalion – that he made communication and coordination between units problematic, as well as risking his life to the extent that he could easily have been killed even by his own artillery. Albert Kesselring also complained about Rommel cruising about the battlefield like a division or corps commander; but Gause and Westphal, supporting Rommel, replied that in the African desert only this method would work and that it was useless to try to restrain Rommel anyway. His staff officers, although admiring towards their leader, complained about the self-destructive Spartan lifestyle that made life harder, diminished his effectiveness and forced them to "bab[y] him as unobtrusively as possible". For his leadership during the French campaign Rommel received both praise and criticism. Many, such as General Georg Stumme, who had previously commanded 7th Panzer Division, were impressed with the speed and success of Rommel's drive. Others were reserved or critical: Kluge, his commanding officer, argued that Rommel's decisions were impulsive and that he claimed too much credit, by falsifying diagrams or by not acknowledging contributions of other units, especially the Luftwaffe. Some pointed out that Rommel's division took the highest casualties in the campaign. Others point out that in exchange for 2,160 casualties and 42 tanks, it captured more than 100,000 prisoners and destroyed nearly two divisions' worth of enemy tanks (about 450 tanks), vehicles and guns. Rommel spoke German with a pronounced southern German or Swabian accent. He was not a part of the Prussian aristocracy that dominated the German high command, and as such was looked upon somewhat suspiciously by the Wehrmacht's traditional power structure. Rommel felt a commander should be physically more robust than the troops he led, and should always show them an example. He expected his subordinate commanders to do the same. Rommel was direct, unbending, tough in his manners, to superiors and subordinates alike, disobedient even to Hitler whenever he saw fit, although gentle and diplomatic to the lower ranks. Despite being publicity-friendly, he was also shy, introverted, clumsy and overly formal even to his closest aides, judging people only on their merits, although loyal and considerate to those who had proved reliability, and he displayed a surprisingly passionate and devoted side to a very small few (including Hitler) with whom he had dropped the seemingly impenetrable barriers. 
Relationship with Italian forces Rommel's relationship with the Italian High Command in North Africa was generally poor. Although he was nominally subordinate to the Italians, he enjoyed a certain degree of autonomy from them; since he was directing their troops in battle as well as his own, this was bound to cause hostility among Italian commanders. Conversely, as the Italian command had control over the supplies of the forces in Africa, they resupplied Italian units preferentially, which was a source of resentment for Rommel and his staff. Rommel's direct and abrasive manner did nothing to smooth these issues. While certainly much less proficient than Rommel in their leadership, aggressiveness, tactical outlook and mobile warfare skills, Italian commanders were competent in logistics, strategy and artillery doctrine: their troops were ill-equipped but well-trained. As such, the Italian commanders were repeatedly at odds with Rommel over issues of supply. Field Marshal Kesselring was appointed Supreme Commander Mediterranean, at least in part to alleviate command problems between Rommel and the Italians. This effort resulted in only partial success, with Kesselring's own relationship with the Italians being unsteady and Kesselring claiming Rommel ignored him as readily as he ignored the Italians. Rommel often went directly to Hitler with his needs and concerns, taking advantage of the favoritism that the Führer displayed towards him and adding to the distrust that Kesselring and the German High Command already had of him. According to Scianna, opinion among the Italian military leaders was not unanimous. In general, Rommel was a target of criticism and a scapegoat for defeat rather than a glorified figure, with certain generals also trying to replace him as the heroic leader or hijack the Rommel myth for their own benefit. Nevertheless, he never became a hated figure, although the "abandonment myth", despite being repudiated by officers of the X Corps themselves, was long-lived. Many found Rommel's chaotic leadership and emotional character hard to work with, yet the Italians held him in higher regard than other German senior commanders, militarily and personally. Very different, however, was the perception of Rommel by Italian common soldiers and NCOs, who, like the German field troops, had the deepest trust and respect for him. Paolo Colacicchi, an officer in the Italian Tenth Army, recalled that Rommel "became sort of a myth to the Italian soldiers". Rommel himself held a much more generous view of the Italian soldiers than of their leadership, towards whom his disdain, deeply rooted in militarism, was not atypical, although unlike Kesselring he was incapable of concealing it. Unlike many of his superiors and subordinates who held racist views, he was usually "kindly disposed" to the Italians in general. James J. Sadkovich cites examples of Rommel abandoning his Italian units, refusing cooperation, rarely acknowledging their achievements and engaging in other improper behaviour towards his Italian allies. Giuseppe Mancinelli, who served as liaison between the German and Italian commands, accused Rommel of blaming Italians for his own errors. Sadkovich describes Rommel as arrogantly ethnocentric and disdainful towards Italians. Views on the conduct of war Combat Many authors describe Rommel as having a reputation as a chivalrous, humane and professional officer who earned the respect of both his own troops and his enemies. 
Gerhard Schreiber quotes Rommel's orders, issued together with Kesselring: "Sentimentality concerning the Badoglio following gangs ("Banden" in the original, indicating a mob-like crowd) in the uniforms of the former ally is misplaced. Whoever fights against the German soldier has lost any right to be treated well and shall experience toughness reserved for the rabble which betrays friends. Every member of the German troop has to adopt this stance." Schreiber writes that this exceptionally harsh and, according to him, "hate fuelled" order brutalised the war and was clearly aimed at Italian soldiers, not just partisans. Dennis Showalter writes that "Rommel was not involved in Italy's partisan war, though the orders he issued prescribing death for Italian soldiers taken in arms and Italian civilians sheltering escaped British prisoners do not suggest he would have behaved significantly different from his Wehrmacht counterparts." According to Maurice Remy, orders issued by Hitler during Rommel's stay in a hospital resulted in massacres in the course of Operation Achse, disarming the Italian forces after the armistice with the Allies in 1943. Remy also states that Rommel treated his Italian opponents with his usual fairness, requiring that the prisoners should be accorded the same conditions as German civilians. Remy opines that an order in which Rommel, in contrast to Hitler's directives, called for no "sentimental scruples" against "Badoglio-dependent bandits in uniforms of the once brothers-in-arms" should not be taken out of context. Peter Lieb agrees that the order did not radicalize the war and that the disarmament in Rommel's area of responsibility happened without major bloodshed. Italian internees were sent to Germany for forced labour, but Rommel was unaware of this. Klaus Schmider comments that the writings of Lieb and others succeed in vindicating Rommel "both with regards to his likely complicity in the July plot as well as his repeated refusal to carry out illegal orders." Rommel withheld Hitler's Commando Order to execute captured commandos from his Army Group B, with his units reporting that they were treating commandos as regular POWs. It is likely that he had acted similarly in North Africa. Historian Szymon Datner argues that Rommel may have been simply trying to conceal the atrocities of Nazi Germany from the Allies. Remy states that although Rommel had heard rumours about massacres while fighting in Africa, his personality, combined with special circumstances, meant that he was not fully confronted with the reality of atrocities before 1944. When Rommel learned about the atrocities that SS Division Leibstandarte committed in Italy in September 1943, he allegedly forbade his son from joining the Waffen-SS. Attitude toward colonial troops By the time of the Second World War, French colonial troops were portrayed as a symbol of French depravity in Nazi propaganda; Canadian historian Myron Echenberg writes that Rommel, just like Hitler, viewed black French soldiers with particular disdain. According to author Ward Rutherford, Rommel also held racist views towards British colonial troops from India; Rutherford in his The biography of Field Marshal Erwin Rommel writes: "Not even his most sycophantic apologists have been able to evade the conclusion, fully demonstrated by his later behavior, that Rommel was a racist who, for example, thought it desperately unfair that the British should employ 'black' – by which he meant Indian – troops against a white adversary." 
Vaughn Raspberry writes that Rommel and other officers considered it an insult to fight against black Africans because they considered black people to be members of "inferior races". Bruce Watson comments that whatever racism Rommel might have had in the beginning, it was washed away when he fought in the desert. When he saw that the Indian troops were fighting well, he gave the members of the 4th Indian Division high praise. Rommel and the Germans acknowledged the Gurkhas' fighting ability, although their style leaned more towards ferocity. On one occasion he saw German soldiers whose throats had been cut with a khukri knife. Originally, he did not want Chandra Bose's Indian formation (composed of former Allied Indian soldiers captured by his own troops) to work under his command. In Normandy though, when they had already become the Indische Freiwilligen Legion der Waffen SS, he visited them and praised them for their efforts (while they still suffered general disrespect within the Wehrmacht). A review of Rutherford's book in the Pakistan Army Journal says that the statement is one of many that Rutherford uses which lack support in authority and analysis. Rommel's remark that using the Indians was unfair should also be put in perspective, considering the disbandment of the battle-hardened 4th Division by the Allies. Rommel praised the colonial troops in the Battle of France: "The (French) colonial troops fought with extraordinary determination. The anti-tank teams and tank crews performed with courage and caused serious losses", though that might be an example of generals honouring their opponents so that "their own victories appear the more impressive". Reuth comments that Rommel ensured that he and his command would act decently (shown by his treatment of the Free French prisoners who were considered partisans by Hitler, the Jews and the coloured men), while he was distancing himself from Hitler's racist war in the East and deluding himself into believing that Hitler was good and only the Party big shots were evil. Black South African soldiers recounted that when they were held as POWs after being captured by Rommel, they initially slept and queued for food away from the whites, until Rommel saw this and told them that brave soldiers should all queue together. Finding this strange coming from a man fighting for Hitler, they adopted this behaviour until they went back to the Union of South Africa, where they were separated again. There are reports that Rommel acknowledged the Maori soldiers' fighting skills, yet at the same time he complained about their methods, which were unfair from the European perspective. When he asked the commander of the New Zealand 6th Infantry Brigade about his division's massacres of the wounded and POWs, the commander attributed these incidents to the Maoris in his unit. Hew Strachan notes that lapses in practicing the warriors' code of war were usually attributed to ethnic groups which lived outside Europe, with the implication that those ethnic groups which lived in Europe knew how to behave (although Strachan opines that such attributions were probably true). Nevertheless, according to the website of the 28th Maori Battalion, Rommel always treated them fairly, and he also showed understanding with regard to war crimes. 
Politics Some authors cite, among other cases, Rommel's naive reaction to events in Poland while he was there: he paid a visit to his wife's uncle, a famous Polish priest and patriotic leader, who was murdered within days, but Rommel never realised this and, at his wife's urging, kept writing letter after letter to Himmler's adjutants asking them to keep track of and take care of their relative. Knopp and Mosier agree that he was naive politically, citing his request for a Jewish Gauleiter in 1943. Despite this, Lieb finds it hard to believe that a man in Rommel's position could have known nothing about atrocities, while accepting that locally he was separated from the places where these atrocities occurred. Der Spiegel comments that Rommel was simply in denial about what happened around him. Alaric Searle points out that it was the early diplomatic successes and bloodless expansion that blinded Rommel to the true nature of his beloved Führer, whom he then naively continued to support. Scheck believes it may be forever unclear whether Rommel recognized the unprecedented depraved character of the regime. Civilians Historian Richard J. Evans has stated that German soldiers in Tunisia raped Jewish women, and that the success of Rommel's forces in capturing or securing Allied, Italian and Vichy French territory in North Africa led to many Jews in these areas being killed by other German institutions as part of the Holocaust. Anti-Jewish and anti-Arab violence erupted in North Africa when Rommel and Ettore Bastico regained territory there in February 1941 and then again in April 1942. Although this violence was committed by Italian forces, Patrick Bernhard writes that "the Germans were aware of Italian reprisals behind the front lines. Yet, perhaps surprisingly, they seem to have exercised little control over events. The German consul general in Tripoli consulted with Italian state and party officials about possible countermeasures against the natives, but this was the full extent of German involvement. Rommel did not directly intervene, though he advised the Italian authorities to do whatever was necessary to eliminate the danger of riots and espionage; for the German general, the rear areas were to be kept "quiet" at all costs. Thus, according to Bernhard, although he had no direct hand in the atrocities, Rommel made himself complicit in war crimes by failing to point out that international laws of war strictly prohibited certain forms of retaliation. By giving carte blanche to the Italians, Rommel implicitly condoned, and perhaps even encouraged, their war crimes". In his article Im Rücken Rommels. Kriegsverbrechen, koloniale Massengewalt und Judenverfolgung in Nordafrika, Bernhard writes that the North African campaign was hardly the "war without hate" that Rommel described, and points to the rape of women, the ill-treatment and execution of captured POWs, and racially motivated murders of Arabs, Berbers and Jews, in addition to the establishment of concentration camps. Bernhard again cites discussion among the German and Italian authorities about Rommel's position regarding countermeasures against local insurrection (according to them, Rommel wanted to eliminate the danger at all costs) to show that Rommel fundamentally approved of Italian policy in the matter. Bernhard opines that Rommel had informal power over the matter because his military success brought him influence over the Italian authorities. 
The United States Holocaust Memorial Museum describes the relationship between Rommel and the proposed Einsatzgruppe Egypt as "problematic". The Museum states that this unit was to be tasked with murdering the Jewish populations of North Africa and Palestine, and that it was to be attached directly to Rommel's Afrika Korps. According to the museum, Rauff met with Rommel's staff in 1942 as part of preparations for this plan. The Museum states that Rommel was certainly aware that planning was taking place, even if his reaction to it is not recorded, and that while the main proposed Einsatzgruppen were never put into action, smaller units did murder Jews in North Africa. On the other hand, Christopher Gabel remarks that Richard Evans seems to attempt to prove that Rommel was a war criminal by association but fails to produce evidence that he had actual or constructive knowledge of said crimes. Ben H. Shepherd comments that Rommel showed insight and restraint when dealing with the nomadic Arabs, the only civilians who occasionally intervened in the war and thus risked reprisals as a result. Shepherd cites a request by Rommel to the Italian High Command, in which he complained about excesses against the Arabic population and noted that reprisals without identifying the real culprits were never expedient. The documentary Rommel's War (Rommels Krieg), made by Caron and Müllner with advice from Sönke Neitzel, states that even though it is not clear whether Rommel knew about the crimes (in Africa) or not, "his military success made possible forced labor, torture and robbery. Rommel's war is always part of Hitler's war of worldviews, whether Rommel wanted it or not." More specifically, several German historians have revealed the existence of plans by an SS unit embedded with the Afrika Korps to exterminate the Jews of Egypt and Palestine had Rommel succeeded in his goal of invading the Middle East in 1942. According to Mallmann and Cüppers, a post-war CIA report described Rommel as having met with Walther Rauff, who was responsible for the unit, as having been disgusted after learning about the plan from him, and as having sent him on his way; but they conclude that such a meeting is hardly possible, as Rauff was sent to report to Rommel at Tobruk on 20 July and Rommel was then 500 km away conducting the First Battle of El Alamein. On 29 July, Rauff's unit was sent to Athens, expecting to enter Africa when Rommel crossed the Nile. However, in view of the Axis' deteriorating situation in Africa, it returned to Germany in September. Historian Jean-Christoph Caron opines that there is no evidence that Rommel knew of or would have supported Rauff's mission; he also believes Rommel bore no direct responsibility for the SS's looting of gold in Tunisia. Historian Haim Saadon, Director of the Center of Research on North African Jewry in WWII, goes further, stating that there was no extermination plan: Rauff's documents show that his foremost concern was helping the Wehrmacht to win, and he came up with the idea of forced labour camps in the process. By the time these labour camps were in operation, according to Ben H. Shepherd, Rommel was already retreating and there is no proof of his contact with the Einsatzkommando. Haaretz comments that the CIA report is most likely correct regarding both the interaction between Rommel and Rauff and Rommel's objections to the plan: Rauff's assistant Theodor Saevecke and declassified information from Rauff's file both report the same story. 
Haaretz also remarks that Rommel's influence probably softened the Nazi authorities' attitude to the Jews and to the civilian population generally in North Africa. Rolf-Dieter Müller comments that the war in North Africa, while as bloody as any other war, differed considerably from the war of annihilation in eastern Europe, because it was limited to a narrow coastline and hardly affected the population. Showalter writes: "From the desert campaign's beginning, both sides consciously sought to wage a "clean" war—war without hate, as Rommel put it in his reflections. Explanations include the absence of civilians and the relative absence of Nazis; the nature of the environment, which conveyed a "moral simplicity and transparency"; and the control of command on both sides by prewar professionals, producing a British tendency to depict war in the imagery of a game, and the corresponding German pattern of seeing it as a test of skill and a proof of virtue. The nature of the fighting as well diminished the last-ditch, close-quarter actions that are primary nurturers of mutual bitterness. A battalion overrun by tanks usually had its resistance broken so completely that nothing was to be gained by a broken-backed final stand." Joachim Käppner writes that while the conflict in North Africa was not as bloody as in Eastern Europe, the Afrika Korps committed some war crimes. Historian Martin Kitchen states that the reputation of the Afrika Korps was preserved by circumstances: the sparsely populated desert areas did not lend themselves to ethnic cleansing; the German forces never reached the large Jewish populations in Egypt and Palestine; and in the urban areas of Tunisia and Tripolitania the Italian government constrained the German efforts to discriminate against or eliminate Jews who were Italian citizens. Despite this, the North African Jews themselves believed that it was Rommel who prevented the "Final Solution" from being carried out against them when German might dominated North Africa from Egypt to Morocco. According to Curtis and Remy, 120,000 Jews lived in Algeria, 200,000 in Morocco, about 80,000 in Tunisia and 26,000 in Libya. Remy writes that these numbers were unchanged following the German invasion of Tunisia in 1942, while Curtis notes that 5,000 of these Jews would be sent to forced labour camps. Hein Klemann writes that the confiscations in the "foraging zone" of the Afrika Korps threatened the survival chances of local civilians, just as the plunder enacted by the Wehrmacht in the Soviet Union did. In North Africa, Rommel's troops laid landmines which, in the decades that followed, killed and maimed thousands of civilians. Since statistics began to be kept in the 1980s, 3,300 people have lost their lives and 7,500 have been maimed. It is disputed whether the landmines at El Alamein, which constitute the most notable portion of the landmines left over from World War II, were laid by the Afrika Korps or by the British Army led by Field Marshal Montgomery. Egypt has not joined the Mine Ban Treaty to this day. Rommel sharply protested the Jewish policies and other immoralities, and was an opponent of the Gestapo. He also refused to comply with Hitler's order to execute Jewish POWs. Bryan Mark Rigg writes: "The only place in the army where one might find a place of refuge was in the Deutsches Afrika-Korps (DAK) under the leadership of the "Desert Fox," Field Marshal Erwin Rommel. According to this study's files, his half-Jews were not as affected by the racial laws as most others serving on the European continent." 
He notes, though, that "Perhaps Rommel failed to enforce the order to discharge half-Jews because he was unaware of it". Captain Horst van Oppenfeld (a staff officer to Colonel Claus von Stauffenberg and a quarter-Jew) says that Rommel did not concern himself with the racial decrees and that he never experienced any trouble caused by his ancestry during his time in the DAK, even if Rommel never personally interfered on his behalf. Another quarter-Jew, Fritz Bayerlein, became a famous general and Rommel's chief-of-staff, despite also being bisexual, which made his situation even more precarious. Building the Atlantic Wall was officially the responsibility of the Organisation Todt, which was not under Rommel's command, but he enthusiastically joined the task, protesting slave labour and suggesting that they should recruit French civilians and pay them good wages. Despite this, French civilians and Italian prisoners of war held by the Germans were forced by officials under the Vichy government, the Todt Organization and the SS forces to work on building some of the defences Rommel requested, in appalling conditions according to historian Will Fowler. Although they received basic wages, the workers complained that the pay was too low and that there was no heavy equipment. German troops worked almost round-the-clock under very harsh conditions, with Rommel's rewards being accordions. Rommel was one of the commanders who protested the Oradour-sur-Glane massacre. Reputation as a military commander Rommel was famous in his lifetime, including among his adversaries. His tactical prowess and decency in the treatment of Allied prisoners earned him the respect of opponents including Claude Auchinleck, Archibald Wavell, George S. Patton, and Bernard Montgomery. Rommel's military reputation has been controversial. While nearly all military practitioners acknowledge Rommel's excellent tactical skills and personal bravery, some, such as U.S. major general and military historian David T. Zabecki of the United States Naval Institute, consider Rommel's performance as an operational-level commander to be highly overrated, a belief shared by other officers. General Klaus Naumann, who served as Chief of Staff of the Bundeswehr, agrees with the military historian Charles Messenger that Rommel had challenges at the operational level, and states that Rommel's violation of the unity-of-command principle, bypassing the chain of command in Africa, was unacceptable and contributed to the eventual operational and strategic failure in North Africa. The German biographer Wolf Heckmann describes Rommel as "the most overrated commander of an army in world history". Nevertheless, there is also a notable number of officers who admire his methods, such as Norman Schwarzkopf, who described Rommel as a genius at battles of movement, saying: "Look at Rommel. Look at North Africa, the Arab-Israeli wars, and all the rest of them. A war in the desert is a war of mobility and lethality. It's not a war where straight lines are drawn in the sand and [you] say, 'I will defend here or die.'" Ariel Sharon deemed the German military model used by Rommel to be superior to the British model used by Montgomery. His compatriot Moshe Dayan likewise considered Rommel a model and icon. Wesley Clark states that "Rommel's military reputation, though, has lived on, and still sets the standard for a style of daring, charismatic leadership to which most officers aspire." 
During the recent desert wars, Rommel's military theories and experiences attracted great interest from policy makers and military instructors. Chinese military leader Sun Li-jen had the laudatory nickname "Rommel of the East". Certain modern military historians, such as Larry T. Addington, Niall Barr, Douglas Porch and Robert Citino, are skeptical of Rommel as an operational, let alone strategic level commander. They point to Rommel's lack of appreciation for Germany's strategic situation, his misunderstanding of the relative importance of his theatre to the German High Command, his poor grasp of logistical realities, and, according to the historian Ian Beckett, his "penchant for glory hunting". Citino credits Rommel's limitations as an operational level commander as "materially contributing" to the eventual demise of the Axis forces in North Africa, while Addington focuses on the struggle over strategy, whereby Rommel's initial brilliant success resulted in "catastrophic effects" for Germany in North Africa. Porch highlights Rommel's "offensive mentality", symptomatic of the Wehrmacht commanders as a whole in the belief that the tactical and operational victories would lead to strategic success. Compounding the problem was the Wehrmacht's institutional tendency to discount logistics, industrial output and their opponents' capacity to learn from past mistakes. The historian Geoffrey P. Megargee points out Rommel's playing the German and Italian command structures against each other to his advantage. Rommel used the confused structure—the High command of the armed forces, the OKH (Supreme High Command of the Army) and the Comando Supremo (Italian Supreme Command)—to disregard orders that he disagreed with or to appeal to whatever authority he felt would be most sympathetic to his requests. Some historians take issue with Rommel's absence from Normandy on the day of the Allied invasion, 6 June 1944. He had left France on 5 June and was at home on the 6th celebrating his wife's birthday. (According to Rommel, he planned to proceed to see Hitler the next day to discuss the situation in Normandy). Zabecki calls his decision to leave the theatre in view of an imminent invasion "an incredible lapse of command responsibility". Lieb remarks that Rommel displayed real mental agility, but the lack of an energetic commander, together with other problems, caused the battle largely not to be conducted in his concept (which is the opposite of the German doctrine), although the result was still better than Geyr's plan. Lieb also opines that while his harshest critics (who mostly came from the General Staff) often said that Rommel was overrated or not suitable for higher commands, envy was a big factor here. T.L. McMahon argues that Rommel no doubt possessed operational vision, however Rommel did not have the strategic resources to effect his operational choices while his forces provided the tactical ability to accomplish his goals, and the German staff and system of staff command were designed for commanders who led from the front, and in some cases he might have chosen the same options as Montgomery (a reputedly strategy-oriented commander) had he been put in the same conditions. According to Steven Zaloga, tactical flexibility was a great advantage of the German system, but in the final years of the war, Hitler and his cronies like Himmler and Goering had usurped more and more authority at the strategic level, leaving professionals like Rommel increasing constraints on their actions. 
Martin Blumenson considers Rommel a general with a compelling view of strategy and logistics, demonstrated through his many arguments with his superiors over such matters, although Blumenson also thinks that what distinguished Rommel was his boldness and his intuitive feel for the battlefield. (On this point Schwarzkopf also comments, "Rommel had a feel for the battlefield like no other man.") Joseph Forbes comments: "The complex, conflict-filled interaction between Rommel and his superiors over logistics, objectives and priorities should not be used to detract from Rommel's reputation as a remarkable military leader", because Rommel was not given powers over logistics, and because if only generals who attain strategic-policy goals are great generals, such highly regarded commanders as Robert E. Lee, Hannibal and Charles XII would have to be excluded from that list. General Siegfried F. Storbeck, Deputy Inspector General of the Bundeswehr (1987–1991), remarks that Rommel's leadership style and offensive thinking, although carrying inherent risks such as losing the overview of the situation and creating overlaps in authority, have proved effective and have been analysed and incorporated into the training of officers by "us, our Western allies, the Warsaw Pact, and even the Israel Defense Forces". Maurice Remy defends his strategic decision regarding Malta as, although risky, the only logical choice. Rommel was among the few Axis commanders (the others being Isoroku Yamamoto and Reinhard Heydrich) who were targeted for assassination by Allied planners. Two attempts were made, the first being Operation Flipper in North Africa in 1941, and the second being Operation Gaff in Normandy in 1944. Research by Norman Ohler claims that Rommel's behaviour was heavily influenced by Pervitin, which he reportedly took in heavy doses, to such an extent that Ohler refers to him as "the Crystal Fox" ("Kristallfuchs") – playing off the nickname "Desert Fox" famously given to him by the British. Debate about atrocities Executions of prisoners in France In France, Rommel ordered the execution of one French officer who refused three times to cooperate when being taken prisoner; there are disputes as to whether this execution was justified. Caddick-Adams comments that this would make Rommel a war criminal condemned by his own hand, and that other authors overlook this episode. Butler notes that the officer refused to surrender three times and thus died in a courageous but foolhardy way. French historian Petitfrère remarks that Rommel was in a hurry and had no time for useless palavers, although this act was still debatable. Telp remarks that "he treated prisoners of war with consideration. On one occasion, he was forced to order the shooting of a French lieutenant-colonel for refusing to obey his captors." Scheck says, "Although there is no evidence incriminating Rommel himself, his unit did fight in areas where German massacres of black French prisoners of war were extremely common in June 1940." There are reports that during the fighting in France, Rommel's 7th Panzer Division committed atrocities against surrendering French troops and captured prisoners of war. The atrocities, according to Martin S. Alexander, included the murder of 50 surrendering officers and men at Quesnoy and nearby Airaines. According to Richardot, on 7 June, the commanding French officer Charles N'Tchoréré and his company surrendered to the 7th Panzer Division. 
He was then executed by the 25th Infantry Regiment (the 7th Panzer Division did not have a 25th Infantry Regiment). Journalist Alain Aka states simply that he was executed by one of Rommel's soldiers and his body was driven over by tank. Erwan Bergot reports that he was killed by the SS. Historian John Morrow states he was shot in the neck by a Panzer officer, without mentioning the unit of the perpetrators of this crime. The website of the National Federation of Volunteer Servicemen (F.N.C.V., France) states that N'Tchoréré was pushed against the wall and, despite protests from his comrades and newly liberated German prisoners, was shot by the SS. Elements of the division are considered by Scheck to have been "likely" responsible for the murder of POWs in Hangest-sur-Somme, while Scheck reports that they were too far away to have been involved in the massacres at Airaines and nearby villages. Scheck says that the German units fighting there came from the 46th and 2nd Infantry Division, and possibly from the 6th and 27th Infantry Division as well. Scheck also writes that there were no SS units in the area. Morrow, citing Scheck, says that the 7th Panzer Division carried out "cleansing operations". French historian Dominique Lormier counts the number of victims of the 7th Panzer Division in Airaines at 109, mostly French-African soldiers from Senegal. Showalter writes: "In fact, the garrison of Le Quesnoy, most of them Senegalese, took heavy toll of the German infantry in house-to-house fighting. Unlike other occasions in 1940, when Germans and Africans met, there was no deliberate massacre of survivors. Nevertheless, the riflemen took few prisoners, and the delay imposed by the tirailleurs forced the Panzers to advance unsupported until Rommel was ordered to halt for fear of coming under attack by Stukas." Claus Telp comments that Airaines was not in the sector of the 7th, but at Hangest and Martainville, elements of the 7th might have shot some prisoners and used British Colonel Broomhall as a human shield (although Telp is of the opinion that it was unlikely that Rommel approved of, or even knew about, these two incidents). Historian David Stone notes that acts of shooting surrendered prisoners were carried out by Rommel's 7th Panzer Division and observes contradictory statements in Rommel's account of the events; Rommel initially wrote that "any enemy troops were wiped out or forced to withdraw" but also added that "many prisoners taken were hopelessly drunk." Stone attributes the massacres of soldiers from the 53ème Regiment d'Infanterie Coloniale (N'Tchoréré's unit) on 7 June to the 5th Infantry Division. Historian Daniel Butler agrees that it was possible that the massacre at Le Quesnoy happened given the existence of Nazis, such as Hanke, in Rommel's division, while stating that in comparison with other German units, few sources regarding such actions of the men of the 7th Panzer exist. Butler believes that "it's almost impossible to imagine" Rommel authorising or countenancing such actions. He also writes that "Some accusers have twisted a remark in Rommel's own account of the action in the village of Le Quesnoy as proof that he at least tacitly condoned the executions—'any enemy troops were either wiped out or forced to withdraw'—but the words themselves as well as the context of the passage hardly support the contention." 
Treatment of Jews and other civilians in North Africa Giordana Terracina writes that: "On April 3, the Italians recaptured Benghazi and a few months later the Afrika Korps led by Rommel was sent to Libya and began the deportation of the Jews of Cyrenaica in the concentration camp of Giado and other smaller towns in Tripolitania. This measure was accompanied by shooting, also in Benghazi, of some Jews guilty of having welcomed the British troops, on their arrival, treating them as liberators." According to German historian , Rommel forbade his soldiers from buying anything from the Jewish population of Tripoli, used Jewish slave labour and commanded Jews to clear out minefields by walking on them ahead of his forces. According to Proske, some of the Libyan Jews were eventually sent to concentration camps. Historians Christian Schweizer and Peter Lieb note that: "Over the last few years, even though the social science teacher Wolfgang Proske has sought to participate in the discussion [on Rommel] with very strong opinions, his biased submissions are not scientifically received." The Heidenheimer Zeitung notes that Proske was the publisher of his main work Täter, Helfer, Trittbrettfahrer – NS-Belastete von der Ostalb, after failing to have it published by another publisher. The Jerusalem Posts review of Gershom Gorenberg's War of shadows writes that: "The Italians were far more brutal with civilians, including Libyan Jews, than Rommel’s Afrika Korps, which by all accounts abided by the laws of war. But nobody worried that the Italians who sent Jews to concentration camps in Libya, would invade British-held Egypt, let alone Mandatory Palestine." According to historian Michael Wolffsohn, during the Africa campaign, preparations for committing a Holocaust against the North African Jews were in full swing and a thousand of them were transported to East European concentration camps. At the same time, he recommends the Bundeswehr to keep the names and traditions associated with Rommel (although Wolffsohn opines that focus should be put on the politically thoughtful soldier he became at the end of his life, rather than the swashbuckler and the humane rogue). Robert Satloff writes in his book Among the Righteous: Lost Stories from the Holocaust's Long Reach into Arab Lands that as the German and Italian forces retreated across Libya towards Tunisia, the Jewish population became victim upon which they released their anger and frustration. According to Satloff Afrika Korps soldiers plundered Jewish property all along the Libyan coast. This violence and
his effectiveness and forced them to "bab[y] him as unobtrusively as possible". For his leadership during the French campaign Rommel received both praise and criticism. Many, such as General Georg Stumme, who had previously commanded the 7th Panzer Division, were impressed with the speed and success of Rommel's drive. Others were reserved or critical: Kluge, his commanding officer, argued that Rommel's decisions were impulsive and that he claimed too much credit, by falsifying diagrams or by not acknowledging the contributions of other units, especially the Luftwaffe. Some pointed out that Rommel's division took the highest casualties in the campaign. Others point out that in exchange for 2,160 casualties and 42 tanks, it captured more than 100,000 prisoners and destroyed nearly two divisions' worth of enemy tanks (about 450 tanks), vehicles and guns. Rommel spoke German with a pronounced southern German or Swabian accent. He was not a part of the Prussian aristocracy that dominated the German high command, and as such was looked upon somewhat suspiciously by the Wehrmacht's traditional power structure. Rommel felt a commander should be physically more robust than the troops he led, and should always show them an example. He expected his subordinate commanders to do the same. Rommel was direct, unbending and tough in his manner towards superiors and subordinates alike, disobedient even to Hitler whenever he saw fit, although gentle and diplomatic towards the lower ranks. Despite being publicity-friendly, he was also shy, introverted, clumsy and overly formal even to his closest aides. He judged people only on their merits, though he was loyal and considerate to those who had proved their reliability, and he displayed a surprisingly passionate and devoted side to the very few (including Hitler) with whom he had dropped his seemingly impenetrable barriers. Relationship with Italian forces Rommel's relationship with the Italian High Command in North Africa was generally poor. Although he was nominally subordinate to the Italians, he enjoyed a certain degree of autonomy from them; since he was directing their troops in battle as well as his own, this was bound to cause hostility among Italian commanders. Conversely, as the Italian command had control over the supplies of the forces in Africa, they resupplied Italian units preferentially, which was a source of resentment for Rommel and his staff. Rommel's direct and abrasive manner did nothing to smooth these issues. While certainly much less proficient than Rommel in their leadership, aggressiveness, tactical outlook and mobile warfare skills, Italian commanders were competent in logistics, strategy and artillery doctrine: their troops were ill-equipped but well-trained. As such, the Italian commanders were repeatedly at odds with Rommel over issues of supply. Field Marshal Kesselring was appointed Supreme Commander Mediterranean, at least in part to alleviate the command problems between Rommel and the Italians. The effort had only partial success: Kesselring's own relationship with the Italians was unsteady, and Kesselring claimed Rommel ignored him as readily as he ignored the Italians. Rommel often went directly to Hitler with his needs and concerns, taking advantage of the favoritism that the Führer displayed towards him and adding to the distrust that Kesselring and the German High Command already had of him. According to Scianna, opinion among the Italian military leaders was not unanimous.
In general, Rommel was a target of criticism and a scapegoat for defeat rather than a glorified figure, with certain generals also trying to replace him as the heroic leader or to hijack the Rommel myth for their own benefit. Nevertheless, he never became a hated figure, although the "abandonment myth", despite being repudiated by officers of the X Corps themselves, was long-lived. Many found Rommel's chaotic leadership and emotional character hard to work with, yet the Italians held him in higher regard than other German senior commanders, militarily and personally. Very different, however, was the perception of Rommel by Italian common soldiers and NCOs, who, like the German field troops, had the deepest trust and respect for him. Paolo Colacicchi, an officer in the Italian Tenth Army, recalled that Rommel "became sort of a myth to the Italian soldiers". Rommel himself held a much more generous view of the Italian soldier than of the Italian leadership, towards whom his disdain, deeply rooted in militarism, was not atypical, although unlike Kesselring he was incapable of concealing it. Unlike many of his superiors and subordinates who held racist views, he was usually "kindly disposed" to the Italians in general. James J. Sadkovich cites examples of Rommel abandoning his Italian units, refusing cooperation, rarely acknowledging their achievements and engaging in other improper behaviour towards his Italian allies. Giuseppe Mancinelli, who served as liaison between the German and Italian commands, accused Rommel of blaming Italians for his own errors. Sadkovich describes Rommel as arrogantly ethnocentric and disdainful towards Italians. Views on the conduct of war Combat Many authors describe Rommel as having a reputation as a chivalrous, humane and professional officer who earned the respect of both his own troops and his enemies. Gerhard Schreiber quotes Rommel's orders, issued together with Kesselring: "Sentimentality concerning the Badoglio-following gangs ("Banden" in the original, indicating a mob-like crowd) in the uniforms of the former ally is misplaced. Whoever fights against the German soldier has lost any right to be treated well and shall experience toughness reserved for the rabble which betrays friends. Every member of the German troop has to adopt this stance." Schreiber writes that this exceptionally harsh and, according to him, "hate fuelled" order brutalised the war and was clearly aimed at Italian soldiers, not just partisans. Dennis Showalter writes that "Rommel was not involved in Italy's partisan war, though the orders he issued prescribing death for Italian soldiers taken in arms and Italian civilians sheltering escaped British prisoners do not suggest he would have behaved significantly different from his Wehrmacht counterparts." According to Maurice Remy, orders issued by Hitler during Rommel's stay in a hospital resulted in massacres in the course of Operation Achse, the disarmament of the Italian forces after the armistice with the Allies in 1943. Remy also states that Rommel treated his Italian opponents with his usual fairness, requiring that the prisoners be accorded the same conditions as German civilians. Remy opines that an order in which Rommel, in contrast to Hitler's directives, called for no "sentimental scruples" against "Badoglio-dependent bandits in uniforms of the once brothers-in-arms" should not be taken out of context.
Peter Lieb agrees that the order did not radicalize the war and that the disarmament in Rommel's area of responsibility happened without major bloodshed. Italian internees were sent to Germany for forced labour, but Rommel was unaware of this. Klaus Schmider comments that the writings of Lieb and others succeed in vindicating Rommel "both with regards to his likely complicity in the July plot as well as his repeated refusal to carry out illegal orders." Rommel withheld Hitler's Commando Order to execute captured commandos from his Army Group B, with his units reporting that they were treating commandos as regular POWs. It is likely that he had acted similarly in North Africa. Historian Szymon Datner argues that Rommel may have been simply trying to conceal the atrocities of Nazi Germany from the Allies. Remy states that although Rommel had heard rumours about massacres while fighting in Africa, his personality, combined with special circumstances, meant that he was not fully confronted with the reality of atrocities before 1944. When Rommel learned about the atrocities that SS Division Leibstandarte committed in Italy in September 1943, he allegedly forbade his son from joining the Waffen-SS. Attitude toward colonial troops By the time of the Second World War, French colonial troops were portrayed as a symbol of French depravity in Nazi propaganda; Canadian historian Myron Echenberg writes that Rommel, just like Hitler, viewed black French soldiers with particular disdain. According to author Ward Rutherford, Rommel also held racist views towards British colonial troops from India; Rutherford, in his biography of Field Marshal Erwin Rommel, writes: "Not even his most sycophantic apologists have been able to evade the conclusion, fully demonstrated by his later behavior, that Rommel was a racist who, for example, thought it desperately unfair that the British should employ 'black' – by which he meant Indian – troops against a white adversary." Vaughn Raspberry writes that Rommel and other officers considered it an insult to fight against black Africans because they considered black people to be members of "inferior races". Bruce Watson comments that whatever racism Rommel might have had in the beginning, it was washed away when he fought in the desert. When he saw that the soldiers of the 4th Division of the Indian Army were fighting well, he gave them high praise. Rommel and the Germans acknowledged the Gurkhas' fighting ability, although their style leaned more towards ferocity; on one occasion he saw German soldiers whose throats had been cut with a khukri knife. Originally, he did not want Chandra Bose's Indian formation (composed of Allied Indian soldiers captured by his own troops) to work under his command. In Normandy though, when they had already become the Indische Freiwilligen Legion der Waffen SS, he visited them and praised them for their efforts (while they still suffered general disrespect within the Wehrmacht). A review of Rutherford's book in the Pakistan Army Journal says that the statement is one of many that Rutherford makes without support in authority or analysis. Rommel's remark that using the Indians was unfair should also be put in perspective, considering the disbandment of the battle-hardened 4th Division by the Allies. Rommel praised the colonial troops in the Battle of France: "The (French) colonial troops fought with extraordinary determination. The anti-tank teams and tank crews performed with courage and caused serious losses."
though that might be an example of generals honouring their opponents so that "their own victories appear the more impressive". Reuth comments that Rommel ensured that he and his command would act decently (shown by his treatment of the Free French prisoners, whom Hitler considered partisans, the Jews and the coloured men), while he was distancing himself from Hitler's racist war in the East and deluding himself into believing that Hitler was good and only the Party big shots were evil. Black South African soldiers recount that when they were held as POWs after being captured by Rommel, they initially slept and queued for food away from the whites, until Rommel saw this and told them that brave soldiers should all queue together. Finding this strange coming from a man fighting for Hitler, they adopted this behaviour until they went back to the Union of South Africa, where they were separated again. There are reports that Rommel acknowledged the Maori soldiers' fighting skills, yet at the same time he complained about their methods, which were unfair from the European perspective. When he asked the commander of the New Zealand 6th Infantry Brigade about his division's massacres of the wounded and POWs, the commander attributed these incidents to the Maoris in his unit. Hew Strachan notes that lapses in practicing the warriors' code of war were usually attributed to ethnic groups which lived outside Europe, with the implication that those ethnic groups which lived in Europe knew how to behave (although Strachan opines that such attributions were probably true). Nevertheless, according to the website of the 28th Maori Battalion, Rommel always treated them fairly and he also showed understanding with regard to war crimes. Politics Some authors cite, among other cases, Rommel's naive reaction to events in Poland while he was there: he paid a visit to his wife's uncle, a famous Polish priest and patriotic leader, who was murdered within days, but Rommel never understood what had happened and, at his wife's urging, kept writing letter after letter to Himmler's adjutants asking them to keep track of and take care of their relative. Knopp and Mosier agree that he was politically naive, citing his request for a Jewish Gauleiter in 1943. Despite this, Lieb finds it hard to believe that a man in Rommel's position could have known nothing about atrocities, while accepting that he was locally separated from the places where these atrocities occurred. Der Spiegel comments that Rommel was simply in denial about what happened around him. Alaric Searle points out that it was the early diplomatic successes and bloodless expansion that blinded Rommel to the true nature of his beloved Führer, whom he then naively continued to support. Scheck believes it may be forever unclear whether Rommel recognized the unprecedentedly depraved character of the regime. Civilians Historian Richard J. Evans has stated that German soldiers in Tunisia raped Jewish women, and that the success of Rommel's forces in capturing or securing Allied, Italian and Vichy French territory in North Africa led to many Jews in these areas being killed by other German institutions as part of the Holocaust. Anti-Jewish and anti-Arab violence erupted in North Africa when Rommel and Ettore Bastico regained territory there in February 1941 and then again in April 1942. Although this violence was committed by Italian forces, Patrick Bernhard writes that "the Germans were aware of Italian reprisals behind the front lines.
Yet, perhaps surprisingly, they seem to have exercised little control over events. The German consul general in Tripoli consulted with Italian state and party officials about possible countermeasures against the natives, but this was the full extent of German involvement. Rommel did not directly intervene, though he advised the Italian authorities to do whatever was necessary to eliminate the danger of riots and espionage; for the German general, the rear areas were to be kept "quiet" at all costs. Thus, according to Bernhard, although he had no direct hand in the atrocities, Rommel made himself complicit in war crimes by failing to point out that international laws of war strictly prohibited certain forms of retaliation. By giving carte blanche to the Italians, Rommel implicitly condoned, and perhaps even encouraged, their war crimes". In his article Im Rücken Rommels. Kriegsverbrechen, koloniale Massengewalt und Judenverfolgung in Nordafrika, Bernhard writes that the North African campaign was hardly the "war without hate" that Rommel described, and points to the rape of women, the ill-treatment and execution of captured POWs, and racially motivated murders of Arabs, Berbers and Jews, in addition to the establishment of concentration camps. Bernhard again cites discussions among the German and Italian authorities about Rommel's position regarding countermeasures against local insurrection (according to them, Rommel wanted to eliminate the danger at all costs) to show that Rommel fundamentally approved of Italian policy in the matter. Bernhard opines that Rommel had informal power over the matter because his military success gave him influence over the Italian authorities. The United States Holocaust Memorial Museum describes the relationship between Rommel and the proposed Einsatzgruppe Egypt as "problematic". The Museum states that this unit was to be tasked with murdering the Jewish populations of North Africa and Palestine, and that it was to be attached directly to Rommel's Afrika Korps. According to the museum, Rauff met with Rommel's staff in 1942 as part of preparations for this plan. The Museum states that Rommel was certainly aware that planning was taking place, even if his reaction to it is not recorded, and that while the main proposed Einsatzgruppe was never set in action, smaller units did murder Jews in North Africa. On the other hand, Christopher Gabel remarks that Richard Evans seems to attempt to prove that Rommel was a war criminal by association but fails to produce evidence that he had actual or constructive knowledge of said crimes. Ben H. Shepherd comments that Rommel showed insight and restraint when dealing with the nomadic Arabs, the only civilians who occasionally intervened in the war and thus risked reprisals as a result. Shepherd cites a request by Rommel to the Italian High Command, in which he complained about excesses against the Arabic population and noted that reprisals without identifying the real culprits were never expedient. The documentary Rommel's War (Rommels Krieg), made by Caron and Müllner with advice from Sönke Neitzel, states that even though it is not clear whether Rommel knew about the crimes (in Africa) or not, "his military success made possible forced labor, torture and robbery. Rommel's war is always part of Hitler's war of worldviews, whether Rommel wanted it or not."
More specifically, several German historians have revealed the existence of plans for an SS unit embedded with the Afrika Korps to exterminate the Jews of Egypt and Palestine, had Rommel succeeded in his goal of invading the Middle East during 1942. According to Mallmann and Cüppers, a post-war CIA report described Rommel as having met with Walther Rauff, who was responsible for the unit, as having been disgusted after learning about the plan from him, and as having sent him on his way; but they conclude that such a meeting is hardly possible, as Rauff was sent to report to Rommel at Tobruk on 20 July and Rommel was then 500 km away conducting the First Battle of El Alamein. On 29 July, Rauff's unit was sent to Athens, expecting to enter Africa when Rommel crossed the Nile. However, in view of the Axis' deteriorating situation in Africa it returned to Germany in September. Historian Jean-Christoph Caron opines that there is no evidence that Rommel knew of or would have supported Rauff's mission; he also believes Rommel bore no direct responsibility for the SS's looting of gold in Tunisia. Historian Haim Saadon, Director of the Center of Research on North African Jewry in WWII, goes further, stating that there was no extermination plan: Rauff's documents show that his foremost concern was helping the Wehrmacht to win, and he came up with the idea of forced labour camps in the process. By the time these labour camps were in operation, according to Ben H. Shepherd, Rommel was already retreating and there is no proof of his contact with the Einsatzkommando. Haaretz comments that the CIA report is most likely correct regarding both the interaction between Rommel and Rauff and Rommel's objections to the plan: Rauff's assistant Theodor Saevecke and declassified information from Rauff's file both report the same story. Haaretz also remarks that Rommel's influence probably softened the Nazi authorities' attitude to the Jews and to the civilian population generally in North Africa. Rolf-Dieter Müller comments that the war in North Africa, while as bloody as any other war, differed considerably from the war of annihilation in eastern Europe, because it was limited to a narrow coastline and hardly affected the population. Showalter writes: "From the desert campaign's beginning, both sides consciously sought to wage a "clean" war—war without hate, as Rommel put it in his reflections. Explanations include the absence of civilians and the relative absence of Nazis; the nature of the environment, which conveyed a "moral simplicity and transparency"; and the control of command on both sides by prewar professionals, producing a British tendency to depict war in the imagery of a game, and the corresponding German pattern of seeing it as a test of skill and a proof of virtue. The nature of the fighting as well diminished the last-ditch, close-quarter actions that are primary nurturers of mutual bitterness. A battalion overrun by tanks usually had its resistance broken so completely that nothing was to be gained by a broken-backed final stand." Joachim Käppner writes that while the conflict in North Africa was not as bloody as in Eastern Europe, the Afrika Korps committed some war crimes.
Historian Martin Kitchen states that the reputation of the Afrika Korps was preserved by circumstances: the sparsely populated desert areas did not lend themselves to ethnic cleansing; the German forces never reached the large Jewish populations in Egypt and Palestine; and in the urban areas of Tunisia and Tripolitania the Italian government constrained the German efforts to discriminate against or eliminate Jews who were Italian citizens. Despite this, the North African Jews themselves believed that it was Rommel who prevented the "Final Solution" from being carried out against them when German might dominated North Africa from Egypt to Morocco. According to Curtis and Remy, 120,000 Jews lived in Algeria, 200,000 in Morocco, about 80,000 in Tunisia and 26,000 in Libya. Remy writes that these numbers were unchanged following the German invasion of Tunisia in 1942, while Curtis notes that 5,000 of these Jews would be sent to forced labour camps. Hein Klemann writes that the confiscations in the "foraging zone" of the Afrika Korps threatened the survival chances of local civilians, just as the plunder enacted by the Wehrmacht did in the Soviet Union. In North Africa Rommel's troops laid down landmines, which in the decades to come killed and maimed thousands of civilians. Since statistics started being kept in the 1980s, 3,300 people have lost their lives and 7,500 have been maimed. It is disputed whether the landmines at El Alamein, which constitute the most notable portion of landmines left over from World War II, were laid by the Afrika Korps or by the British Army led by Field Marshal Montgomery. Egypt has still not joined the Mine Ban Treaty. Rommel sharply protested the Jewish policies and other immoralities and was an opponent of the Gestapo. He also refused to comply with Hitler's order to execute Jewish POWs. Bryan Mark Rigg writes: "The only place in the army where one might find a place of refuge was in the Deutsches Afrika-Korps (DAK) under the leadership of the "Desert Fox," Field Marshal Erwin Rommel. According to this study's files, his half-Jews were not as affected by the racial laws as most others serving on the European continent." He notes, though, that "Perhaps Rommel failed to enforce the order to discharge half-Jews because he was unaware of it". Captain Horst van Oppenfeld (a staff officer to Colonel Claus von Stauffenberg and a quarter-Jew) says that Rommel did not concern himself with the racial decrees and that he himself never experienced any trouble caused by his ancestry during his time in the DAK, even if Rommel never personally intervened on his behalf. Another quarter-Jew, Fritz Bayerlein, became a famous general and Rommel's chief-of-staff, despite also being bisexual, which made his situation even more precarious. Building the Atlantic Wall was officially the responsibility of the Organisation Todt, which was not under Rommel's command, but he enthusiastically joined the task, protesting slave labour and suggesting that they should recruit French civilians and pay them good wages. Despite this, French civilians and Italian prisoners of war held by the Germans were forced by officials under the Vichy government, the Todt Organization and the SS to work on building some of the defences Rommel requested, in appalling conditions according to historian Will Fowler. Although the workers received basic wages, they complained that the pay was too little and that there was no heavy equipment.
German troops worked almost round-the-clock under very harsh conditions, with Rommel's rewards being accordions. Rommel was one of the commanders who protested the Oradour-sur-Glane massacre. Reputation as a military commander Rommel was famous in his lifetime, including among his adversaries. His tactical prowess and decency in the treatment of Allied prisoners earned him the respect of opponents including Claude Auchinleck, Archibald Wavell, George S. Patton, and Bernard Montgomery. Rommel's military reputation has been controversial. While nearly all military practitioners acknowledge Rommel's excellent tactical skills and personal bravery, some, such as U.S. Major General and military historian David T. Zabecki of the United States Naval Institute, consider Rommel's performance as an operational-level commander to be highly overrated, a view shared by other officers. General Klaus Naumann, who served as Chief of Staff of the Bundeswehr, agrees with the military historian Charles Messenger that Rommel had challenges at the operational level, and states that Rommel's violation of the unity-of-command principle, bypassing the chain of command in Africa, was unacceptable and contributed to the eventual operational and strategic failure in North Africa. The German biographer Wolf Heckmann describes Rommel as "the most overrated commander of an army in world history". Nevertheless, a notable number of officers admire his methods, such as Norman Schwarzkopf, who described Rommel as a genius at battles of movement, saying: "Look at Rommel. Look at North Africa, the Arab-Israeli wars, and all the rest of them. A war in the desert is a war of mobility and lethality. It's not a war where straight lines are drawn in the sand and [you] say, 'I will defend here or die.'" Ariel Sharon deemed the German military model used by Rommel to be superior to the British model used by Montgomery. His compatriot Moshe Dayan likewise considered Rommel a model and icon. Wesley Clark states that "Rommel's military reputation, though, has lived on, and still sets the standard for a style of daring, charismatic leadership to which most officers aspire." During the recent desert wars, Rommel's military theories and experiences attracted great interest from policy makers and military instructors. Chinese military leader Sun Li-jen had the laudatory nickname "Rommel of the East". Certain modern military historians, such as Larry T. Addington, Niall Barr, Douglas Porch and Robert Citino, are skeptical of Rommel as an operational, let alone strategic, level commander. They point to Rommel's lack of appreciation for Germany's strategic situation, his misunderstanding of the relative importance of his theatre to the German High Command, his poor grasp of logistical realities, and, according to the historian Ian Beckett, his "penchant for glory hunting". Citino credits Rommel's limitations as an operational-level commander as "materially contributing" to the eventual demise of the Axis forces in North Africa, while Addington focuses on the struggle over strategy, whereby Rommel's initial brilliant success resulted in "catastrophic effects" for Germany in North Africa. Porch highlights Rommel's "offensive mentality", symptomatic of the Wehrmacht commanders as a whole in their belief that tactical and operational victories would lead to strategic success.
Compounding the problem was the Wehrmacht's institutional tendency to discount logistics, industrial output and their opponents' capacity to learn from past mistakes. The historian Geoffrey P. Megargee points out that Rommel played the German and Italian command structures against each other to his advantage. Rommel used the confused structure—the High Command of the Armed Forces, the OKH (Supreme High Command of the Army) and the Comando Supremo (Italian Supreme Command)—to disregard orders that he disagreed with or to appeal to whatever authority he felt would be most sympathetic to his requests. Some historians take issue with Rommel's absence from Normandy on the day of the Allied invasion, 6 June 1944. He had left France on 5 June and was at home on the 6th celebrating his wife's birthday. (According to Rommel, he planned to proceed to see Hitler the next day to discuss the situation in Normandy.) Zabecki calls his decision to leave the theatre in view of an imminent invasion "an incredible lapse of command responsibility". Lieb remarks that Rommel displayed real mental agility, but that the lack of an energetic commander, together with other problems, meant that the battle was largely not conducted according to his concept (which was the opposite of German doctrine), although the result was still better than Geyr's plan. Lieb also opines that while his harshest critics (who mostly came from the General Staff) often said that Rommel was overrated or not suitable for higher commands, envy was a big factor here. T.L. McMahon argues that Rommel no doubt possessed operational vision, but that he did not have the strategic resources to effect his operational choices, even though his forces provided the tactical ability to accomplish his goals; the German staff and system of staff command were designed for commanders who led from the front, and in some cases he might have chosen the same options as Montgomery (a reputedly strategy-oriented commander) had he been put in the same conditions. According to Steven Zaloga, tactical flexibility was a great advantage of the German system, but in the final years of the war, Hitler and his cronies like Himmler and Goering usurped more and more authority at the strategic level, imposing increasing constraints on the actions of professionals like Rommel. Martin Blumenson considers Rommel a general with a compelling view of strategy and logistics, which was demonstrated through his many arguments with his superiors over such matters, although Blumenson also thinks that what distinguished Rommel was his boldness and his intuitive feel for the battlefield (upon which Schwarzkopf also comments: "Rommel had a feel for the battlefield like no other man"). Joseph Forbes comments: "The complex, conflict-filled interaction between Rommel and his superiors over logistics, objectives and priorities should not be used to detract from Rommel's reputation as a remarkable military leader", because Rommel was not given powers over logistics, and because if only generals who attain strategic-policy goals are great generals, such highly regarded commanders as Robert E. Lee, Hannibal and Charles XII would have to be excluded from that list. General Siegfried F.
Storbeck, Deputy Inspector General of the Bundeswehr (1987–1991), remarks that Rommel's leadership style and offensive thinking, although carrying inherent risks such as losing the overview of the situation and creating overlapping authority, have proved effective and have been analysed and incorporated into the training of officers by "us, our Western allies, the Warsaw Pact, and even the Israel Defense Forces". Maurice Remy defends Rommel's strategic decision regarding Malta as, although risky, the only logical choice. Rommel was among the few Axis commanders (the others being Isoroku Yamamoto and Reinhard Heydrich) who were targeted for assassination by Allied planners. Two attempts were made, the first being Operation Flipper in North Africa in 1941, and the second being Operation Gaff in Normandy in 1944. Research by Norman Ohler claims that Rommel's behaviours were heavily influenced by Pervitin, which he reportedly took in heavy doses, to such an extent that Ohler refers to him as "the Crystal Fox" ("Kristallfuchs") – playing off the nickname "Desert Fox" famously given to him by the British. Debate about atrocities Executions of prisoners in France In France, Rommel ordered the execution of one French officer who refused three times to cooperate when being taken prisoner; there are disputes as to whether this execution was justified. Caddick-Adams comments that this would make Rommel a war criminal condemned by his own hand, and that other authors overlook this episode. Butler notes that the officer refused to surrender three times and thus died in a courageous but foolhardy way. French historian Petitfrère remarks that Rommel was in a hurry and had no time for useless palavers, although this act was still debatable. Telp remarks that "he treated prisoners of war with consideration. On one occasion, he was forced to order the shooting of a French lieutenant-colonel for refusing to obey his captors." Scheck says, "Although there is no evidence incriminating Rommel himself, his unit did fight in areas where German massacres of black French prisoners of war were extremely common in June 1940." There are reports that during the fighting in France, Rommel's 7th Panzer Division committed atrocities against surrendering French troops and captured prisoners of war. The atrocities, according to Martin S. Alexander, included the murder of 50 surrendering officers and men at Quesnoy and nearby Airaines. According to Richardot, on 7 June, the commanding French officer Charles N'Tchoréré and his company surrendered to the 7th Panzer Division. He was then executed by the 25th Infantry Regiment (the 7th Panzer Division did not have a 25th Infantry Regiment). Journalist Alain Aka states simply that he was executed by one of Rommel's soldiers and that his body was driven over by a tank. Erwan Bergot reports that he was killed by the SS. Historian John Morrow states that he was shot in the neck by a Panzer officer, without mentioning the unit of the perpetrators of this crime. The website of the National Federation of Volunteer Servicemen (F.N.C.V., France) states that N'Tchoréré was pushed against a wall and, despite protests from his comrades and newly liberated German prisoners, was shot by the SS. Elements of the division are considered by Scheck to have been "likely" responsible for the murder of POWs in Hangest-sur-Somme, while Scheck reports that they were too far away to have been involved in the massacres at Airaines and nearby villages.
Scheck says that the German units fighting there came from the 46th and 2nd Infantry Divisions, and possibly from the 6th and 27th Infantry Divisions as well. Scheck also writes that there were no SS units in the area. Morrow, citing Scheck, says that the 7th Panzer Division carried out "cleansing operations". French historian Dominique Lormier counts the number of victims of the 7th Panzer Division in Airaines at 109, mostly French-African soldiers from Senegal. Showalter writes: "In fact, the garrison of Le Quesnoy, most of them Senegalese, took heavy toll of the German infantry in house-to-house fighting. Unlike other occasions in 1940, when Germans and Africans met, there was no deliberate massacre of survivors. Nevertheless, the riflemen took few prisoners, and the delay imposed by the tirailleurs forced the Panzers to advance unsupported until Rommel was ordered to halt for fear of coming under attack by Stukas." Claus Telp comments that Airaines was not in the sector of the 7th, but that at Hangest and Martainville elements of the 7th might have shot some prisoners and used British Colonel Broomhall as a human shield (although Telp is of the opinion that it was unlikely that Rommel approved of, or even knew about, these two incidents). Historian David Stone notes that acts of shooting surrendered prisoners were carried out by Rommel's 7th Panzer Division and observes contradictory statements in Rommel's account of the events; Rommel initially wrote that "any enemy troops were wiped out or forced to withdraw" but also added that "many prisoners taken were hopelessly drunk." Stone attributes the massacres of soldiers from the 53ème Regiment d'Infanterie Coloniale (N'Tchoréré's unit) on 7 June to the 5th Infantry Division. Historian Daniel Butler agrees that it was possible that the massacre at Le Quesnoy happened, given the existence of Nazis, such as Hanke, in Rommel's division, while stating that in comparison with other German units, few sources regarding such actions by the men of the 7th Panzer exist. Butler believes that "it's almost impossible to imagine" Rommel authorising or countenancing such actions. He also writes that "Some accusers have twisted a remark in Rommel's own account of the action in the village of Le Quesnoy as proof that he at least tacitly condoned the executions—'any enemy troops were either wiped out or forced to withdraw'—but the words themselves as well as the context of the passage hardly support the contention." Treatment of Jews and other civilians in North Africa Giordana Terracina writes: "On April 3, the Italians recaptured Benghazi and a few months later the Afrika Korps led by Rommel was sent to Libya and began the deportation of the Jews of Cyrenaica in the concentration camp of Giado and other smaller towns in Tripolitania. This measure was accompanied by shooting, also in Benghazi, of some Jews guilty of having welcomed the British troops, on their arrival, treating them as liberators." According to the German historian Wolfgang Proske, Rommel forbade his soldiers from buying anything from the Jewish population of Tripoli, used Jewish slave labour and commanded Jews to clear out minefields by walking on them ahead of his forces. According to Proske, some of the Libyan Jews were eventually sent to concentration camps.
Historians Christian Schweizer and Peter Lieb note: "Over the last few years, even though the social science teacher Wolfgang Proske has sought to participate in the discussion [on Rommel] with very strong opinions, his biased submissions are not scientifically received." The Heidenheimer Zeitung notes that Proske was the publisher of his main work Täter, Helfer, Trittbrettfahrer – NS-Belastete von der Ostalb, after failing to have it published by another publisher. The Jerusalem Post's review of Gershom Gorenberg's War of Shadows states: "The Italians were far more brutal with civilians, including Libyan Jews, than Rommel's Afrika Korps, which by all accounts abided by the laws of war. But nobody worried that the Italians who sent Jews to concentration camps in Libya, would invade British-held Egypt, let alone Mandatory Palestine." According to historian Michael Wolffsohn, during the Africa campaign, preparations for committing a Holocaust against the North African Jews were in full swing and a thousand of them were transported to East European concentration camps. At the same time, he recommends that the Bundeswehr keep the names and traditions associated with Rommel (although Wolffsohn opines that the focus should be put on the politically thoughtful soldier Rommel became at the end of his life, rather than the swashbuckler and the humane rogue). Robert Satloff writes in his book Among the Righteous: Lost Stories from the Holocaust's Long Reach into Arab Lands that as the German and Italian forces retreated across Libya towards Tunisia, the Jewish population became the victim upon which they released their anger and frustration. According to Satloff, Afrika Korps soldiers plundered Jewish property all along the Libyan coast. This violence and persecution only came to an end with the arrival of General Montgomery in Tripoli on 23 January 1943. According to Maurice Remy, although there were antisemitic individuals in the Afrika Korps, actual cases of abuse are not known, even against the Jewish soldiers of the Eighth Army. Remy quotes Isaac Levy, the Senior Jewish Chaplain of the Eighth Army, as saying that he had never seen "any sign or hint that the soldiers [of the Afrika Korps] are antisemitic". The Telegraph comments: "Accounts suggest that it was not Field Marshal Erwin Rommel but the ruthless SS colonel Walter Rauff who stripped Tunisian Jews of their wealth." After they arrived in Tunisia, German forces ordered the establishment of a Judenrat and forced the local Jewish population to perform slave labour. Mark Wills writes that the newly arrived German force forcefully conscripted 2,000 young Jewish men, with 5,000 rounded up in the next six months. These forced labourers were used in extremely dangerous situations near the targets of bombing raids, and faced hunger and violence. Commenting on Rommel's conquest of Tunisia, Marvin Perry writes: "The bridgehead Rommel established in Tunisia enabled the SS to herd Jews into slave labor camps." Der Spiegel writes: "The SS had established a network of labor camps in Tunisia. More than 2,500 Tunisian Jews died in six months of German rule, and the regular army was also involved in executions." Caron writes in Der Spiegel that the camps were organized in early December 1942 by Nehring, the commander in Tunisia, and Rauff, while Rommel was retreating. As commander of the German Afrika Korps, Nehring would continue to use Tunisian forced labour.
Historian Clemens Vollnhals writes that the use of Jews by the Afrika Korps as forced labour is barely known, but that it did happen alongside the persecution of the Jewish population (although on a smaller scale than in Europe), and that some of the labourers died. According to Caddick-Adams, no Waffen-SS served under Rommel in Africa at any time and most of the activities of Rauff's detachment happened after Rommel's departure. Shepherd notes that during this time Rommel was retreating and that there is no evidence that he had contact with the Einsatzkommando. Addressing the call of some authors to contextualize Rommel's actions in Italy and North Africa, Wolfgang Mährle notes that while it is undeniable that Rommel played the role of a Generalfeldmarschall in a criminal war, this only illustrates in a limited way his personal attitude and the actions that resulted from it. Alleged treasure and spoils According to several historians, allegations and stories that associate Rommel and the Afrika Korps with the harassing and plundering of Jewish gold and property in Tunisia are usually known under the name "Rommel's treasure" or "Rommel's gold". Michael FitzGerald comments that the treasure would more accurately be called Rauff's gold, as Rommel had nothing to do with its acquisition or removal. Jean-Christoph Caron comments that the treasure legend has a real core and that Jewish property was looted by the SS in Tunisia and later might have been hidden or sunk off the coast of Corsica, where Rauff was stationed in 1943. The person who gave birth to the full-blown legend was the SS soldier Walter Kirner, who presented a false map to the French authorities. Caron and Jörg Müllner, his co-author of the ZDF documentary Rommel's treasure (Rommels Schatz), tell Die Welt that "Rommel had nothing to do with the treasure, but his name is associated with everything that happened in the war in Africa." Rick Atkinson criticises Rommel for accepting a looted stamp collection (a bribe from Sepp Dietrich) and a villa taken from Jews. Lucas, Matthews and Remy, though, describe the contemptuous and angry reaction of Rommel towards Dietrich's act and towards the looting and other brutal behaviour of the SS that he had discovered in Italy. Claudia Hecht also explains that although the Stuttgart and Ulm authorities did arrange for the Rommel family to use, for a brief period after their own house had been destroyed by Allied bombing, a villa whose Jewish owners had been forced out two years earlier, ownership of the villa was never transferred to them. Butler notes that Rommel was one of the few who refused the large estates and gifts of cash that Hitler gave to his generals. In Nazi and Allied propaganda At the beginning, although Hitler and Goebbels took particular notice of Rommel, the Nazi elites had no intention of creating one major war symbol (partly out of fear that he would overshadow Hitler), and generated huge propaganda campaigns not only for Rommel but also for Gerd von Rundstedt, Walther von Brauchitsch, Eduard Dietl and Sepp Dietrich (the latter two were party members and also strongly supported by Hitler), among others. Nevertheless, a multitude of factors—including Rommel's unusual charisma, his talents both in military matters and public relations, the efforts of Goebbels's propaganda machine, and the Allies' participation in mythologizing his life (whether for political benefit, sympathy for someone who evoked a romantic archetype, or genuine admiration for his actions)—gradually contributed to Rommel's fame.
Spiegel wrote, "Even back then his fame outshone that of all other commanders." Rommel's victories in France were featured in the German press and in the February 1941 film Sieg im Westen (Victory in the West), in which Rommel personally helped direct a segment re-enacting the crossing of the Somme River. According to Scheck, although there is no evidence of Rommel committing crimes, African prisoners of war were forced to take part in the making of the film and to carry out humiliating acts (Raffael Scheck, French Colonial Soldiers in German Captivity during World War II, p. 42). Stills from the re-enactment are found in the "Rommel Collection"; it was filmed by Hans Ertl, assigned to this task by Dr. Kurt Hesse, a personal friend of Rommel, who worked for Wehrmacht Propaganda Section V. Rommel's victories in 1941 were played up by Nazi propaganda, even though his successes in North Africa were achieved in arguably one of Germany's least strategically important theaters of World War II. In November 1941, Reich Minister of Propaganda Joseph Goebbels wrote about "the urgent need" to have Rommel "elevated to a kind of popular hero." Rommel, with his innate abilities as a military commander and love of the spotlight, was a perfect fit for the role Goebbels designed for him. Successes in North Africa In North Africa, Rommel received help in cultivating his image from Alfred Ingemar Berndt, a senior official at the Reich Propaganda Ministry who had volunteered for military service. Seconded by Goebbels, Berndt was assigned to Rommel's staff and became one of his closest aides. Berndt often acted as liaison between Rommel, the Propaganda Ministry, and the Führer Headquarters. He directed Rommel's photo shoots and filed radio dispatches describing the battles. In the spring of 1941, Rommel's name began to appear in the British media. In the autumn of 1941 and early winter of 1941/1942, he was mentioned in the British press almost daily. Toward the end of the year, the Reich propaganda machine also used Rommel's successes in Africa as a diversion from the Wehrmacht's challenging situation in the Soviet Union with the stall of Operation Barbarossa. The American press soon began to take notice of Rommel as well, following the country's entry into the war on 11 December 1941, writing that "The British (...) admire him because he beat them and were surprised to have beaten in turn such a capable general." General Auchinleck distributed a directive to his commanders seeking to dispel the notion that Rommel was a "superman". Rommel, no matter how hard the situation was, made a deliberate effort to always spend some time with soldiers and patients, his own and POWs alike, which contributed greatly to his reputation among the troops as not only a great commander but also "a decent chap". The attention of the Western and especially the British press thrilled Goebbels, who wrote in his diary in early 1942: "Rommel continues to be the recognized darling of even the enemies' news agencies." The Field Marshal was pleased by the media attention, although he knew the downsides of having a reputation. Hitler took note of the British propaganda as well, commenting in the summer of 1942 that Britain's leaders must have hoped "to be able to explain their defeat to their own nation more easily by focusing on Rommel".
The Field Marshal was the German commander most frequently covered in the German media, and the only one to be given a press conference, which took place in October 1942. The press conference was moderated by Goebbels and was attended by both domestic and foreign media. Rommel declared: "Today we (...) have the gates of Egypt in hand, and with the intent to act!" Keeping the focus on Rommel distracted the German public from Wehrmacht losses elsewhere as the tide of the war began to turn. He became a symbol that was used to reinforce the German public's faith in an ultimate Axis victory. Military reverses In the wake of the successful British offensive in November 1942 and other military reverses, the Propaganda Ministry directed the media
it leads (when viewed from another vantage point in Husserl's 'labyrinth') to "transcendental subjectivity". Also in Ideen Husserl explicitly elaborates the phenomenological and eidetic reductions. In 1913 Karl Jaspers visited Husserl at Göttingen. In October 1914 both his sons were sent to fight on the Western Front of World War I and the following year one of them, Wolfgang Husserl, was badly injured. On 8 March 1916, on the battlefield of Verdun, Wolfgang was killed in action. The next year his other son Gerhart Husserl was wounded in the war but survived. His own mother Julia died. In November 1917 one of his outstanding students and later a noted philosophy professor in his own right, Adolf Reinach, was killed in the war while serving in Flanders. Husserl had transferred in 1916 to the University of Freiburg (in Freiburg im Breisgau) where he continued bringing his work in philosophy to fruition, now as a full professor. Edith Stein served as his personal assistant during his first few years in Freiburg, followed later by Martin Heidegger from 1920 to 1923. The mathematician Hermann Weyl began corresponding with him in 1918. Husserl gave four lectures on Phenomenological method at University College, London in 1922. The University of Berlin in 1923 called on him to relocate there, but he declined the offer. In 1926 Heidegger dedicated his book Sein und Zeit (Being and Time) to him "in grateful respect and friendship." Husserl remained in his professorship at Freiburg until he requested retirement, teaching his last class on 25 July 1928. A Festschrift to celebrate his seventieth birthday was presented to him on 8 April 1929. Despite retirement, Husserl gave several notable lectures. The first, at Paris in 1929, led to Méditations cartésiennes (Paris 1931). Husserl here reviews the phenomenological epoché (or phenomenological reduction), presented earlier in his pivotal Ideen (1913), in terms of a further reduction of experience to what he calls a 'sphere of ownness.' From within this sphere, which Husserl enacts in order to show the impossibility of solipsism, the transcendental ego finds itself always already paired with the lived body of another ego, another monad. This 'a priori' interconnection of bodies, given in perception, is what founds the interconnection of consciousnesses known as transcendental intersubjectivity, which Husserl would go on to describe at length in volumes of unpublished writings. There has been a debate over whether or not Husserl's description of ownness and its movement into intersubjectivity is sufficient to reject the charge of solipsism, to which Descartes, for example, was subject. One argument against Husserl's description works this way: instead of infinity and the Deity being the ego's gateway to the Other, as in Descartes, Husserl's ego in the Cartesian Meditations itself becomes transcendent. It remains, however, alone (unconnected). Only the ego's grasp "by analogy" of the Other (e.g., by conjectural reciprocity) allows the possibility for an 'objective' intersubjectivity, and hence for community. In 1933, the racial laws of the new Nazi regime were enacted. On 6 April Husserl was banned from using the library at the University of Freiburg, or any other academic library; the following week, after a public outcry, he was reinstated. Yet his colleague Heidegger was elected Rector of the university on 21–22 April, and joined the Nazi Party. By contrast, in July Husserl resigned from the Deutsche Akademie. 
Later Husserl lectured at Prague in 1935 and Vienna in 1936; these lectures resulted in a very differently styled work that, while innovative, is no less problematic: Die Krisis (Belgrade 1936). Husserl describes here the cultural crisis gripping Europe, then approaches a philosophy of history, discussing Galileo, Descartes, several British philosophers, and Kant. The previously apolitical Husserl had specifically avoided such historical discussions, pointedly preferring to go directly to an investigation of consciousness. Merleau-Ponty and others question whether Husserl here does not undercut his own position, in that Husserl had attacked in principle historicism, while specifically designing his phenomenology to be rigorous enough to transcend the limits of history. On the contrary, Husserl may be indicating here that historical traditions are merely features given to the pure ego's intuition, like any other. A longer section follows on the "lifeworld" [Lebenswelt], one not observed by the objective logic of science, but a world seen in our subjective experience. Yet a problem arises similar to that dealing with 'history' above, a chicken-and-egg problem. Does the lifeworld contextualize and thus compromise the gaze of the pure ego, or does the phenomenological method nonetheless raise the ego up transcendent? These last writings presented the fruits of his professional life. Since his university retirement Husserl had "worked at a tremendous pace, producing several major works." After suffering a fall in the autumn of 1937, the philosopher became ill with pleurisy. Edmund Husserl died at Freiburg on 27 April 1938, having just turned 79. His wife Malvine survived him. Eugen Fink, his research assistant, delivered his eulogy. Gerhard Ritter was the only Freiburg faculty member to attend the funeral, as an anti-Nazi protest. Heidegger and the Nazi era Husserl was rumoured to have been denied the use of the library at Freiburg as a result of the anti-Jewish legislation of April 1933. However, among other disabilities Husserl was unable to publish his works in Nazi Germany [see above footnote to Die Krisis (1936)]. It was also rumoured that his former pupil Martin Heidegger informed Husserl that he was discharged, but it was actually the previous rector. Apparently Husserl and Heidegger had moved apart during the 1920s, which became clearer after 1928 when Husserl retired and Heidegger succeeded to his university chair. In the summer of 1929 Husserl had studied carefully selected writings of Heidegger, coming to the conclusion that on several of their key positions they differed: e.g., Heidegger substituted Dasein ["Being-there"] for the pure ego, thus transforming phenomenology into an anthropology, a type of psychologism strongly disfavored by Husserl. Such observations on Heidegger, along with a critique of Max Scheler, were put into a lecture Husserl gave to various Kant Societies in Frankfurt, Berlin, and Halle during 1931 entitled Phänomenologie und Anthropologie. In the war-time 1941 edition of Heidegger's primary work, Being and Time (Sein und Zeit, first published in 1927), the original dedication to Husserl was removed. This was not due to a negation of the relationship between the two philosophers, however, but rather was the result of censorship suggested by Heidegger's publisher, who feared that the book might otherwise be banned by the Nazi regime. The dedication can still be found in a footnote on page 38, thanking Husserl for his guidance and generosity. 
Husserl, of course, had died three years earlier. In post-war editions of Sein und Zeit the dedication to Husserl is restored. The complex, troubled, and sundered philosophical relationship between Husserl and Heidegger has been widely discussed. On 4 May 1933, Professor Edmund Husserl addressed the recent regime change in Germany and its consequences: "The future alone will judge which was the true Germany in 1933, and who were the true Germans—those who subscribe to the more or less materialistic-mythical racial prejudices of the day, or those Germans pure in heart and mind, heirs to the great Germans of the past whose tradition they revere and perpetuate." After his death, Husserl's manuscripts, amounting to approximately 40,000 pages of "Gabelsberger" stenography and his complete research library, were in 1939 smuggled to the Catholic University of Louvain in Belgium by the Franciscan priest Herman Van Breda. There they were deposited at Leuven to form the Husserl-Archives of the Higher Institute of Philosophy. Much of the material in his research manuscripts has since been published in the Husserliana critical edition series. Development of his thought Several early themes In his first works, Husserl tries to combine mathematics, psychology and philosophy with the main goal of providing a sound foundation for mathematics. He analyzes the psychological process needed to obtain the concept of number and then tries to build up a systematic theory on the basis of this analysis. To achieve this he uses several methods and concepts taken from his teachers. From Weierstrass he derives the idea that we generate the concept of number by counting a certain collection of objects. From Brentano and Stumpf he takes over the distinction between proper and improper presenting. In an example Husserl explains this in the following way: if you are standing in front of a house, you have a proper, direct presentation of that house, but if you are looking for it and ask for directions, then these directions (e.g. the house on the corner of this and that street) are an indirect, improper presentation. In other words, you can have a proper presentation of an object if it is actually present, and an improper (or symbolic, as he also calls it) one if you can only indicate that object through signs, symbols, etc. Husserl's Logical Investigations (1900–1901) is considered the starting point for the formal theory of wholes and their parts known as mereology. Another important element that Husserl took over from Brentano is intentionality, the notion that the main characteristic of consciousness is that it is always intentional. While often simplistically summarised as "aboutness" or the relationship between mental acts and the external world, Brentano defined it as the main characteristic of mental phenomena, by which they could be distinguished from physical phenomena. Every mental phenomenon, every psychological act, has a content, is directed at an object (the intentional object). Every belief, desire, etc. has an object that it is about: the believed, the wanted. Brentano used the expression "intentional inexistence" to indicate the status of the objects of thought in the mind. The property of being intentional, of having an intentional object, was the key feature distinguishing mental phenomena from physical phenomena, because physical phenomena lack intentionality altogether. 
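To give a sense of what the formal theory of wholes and parts mentioned above looks like when written down, the core axioms of later axiomatic mereology (a formalization owed to Leśniewski and his successors rather than to Husserl's own, largely informal, treatment in the Logical Investigations) simply require the parthood relation P(x, y), read "x is a part of y", to be a partial order:

\begin{align}
&P(x, x) && \text{(reflexivity: everything is a part of itself)} \\
&P(x, y) \wedge P(y, x) \rightarrow x = y && \text{(antisymmetry)} \\
&P(x, y) \wedge P(y, z) \rightarrow P(x, z) && \text{(transitivity)}
\end{align}

Stronger systems add composition principles on top of these, but even this minimal sketch illustrates the kind of formal treatment of part and whole that the passage above credits the Logical Investigations with initiating.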
The elaboration of phenomenology Some years after the 1900–1901 publication of his main work, the Logische Untersuchungen (Logical Investigations), Husserl made some key conceptual elaborations which led him to assert that in order to study the structure of consciousness, one would have to distinguish between the act of consciousness and the phenomena at which it is directed (the objects as intended). Knowledge of essences would only be possible by "bracketing" all assumptions about the existence of an external world. This procedure he called "epoché". These new concepts prompted the publication of the Ideen (Ideas) in 1913, in which they were at first incorporated, and a plan for a second edition of the Logische Untersuchungen. From the Ideen onward, Husserl concentrated on the ideal, essential structures of consciousness. The metaphysical problem of establishing the reality of what we perceive, as distinct from the perceiving subject, was of little interest to Husserl in spite of his being a transcendental idealist. Husserl proposed that the world of objects—and of ways in which we direct ourselves toward and perceive those objects—is normally conceived of in what he called the "natural attitude", which is characterized by a belief that objects exist distinct from the perceiving subject and exhibit properties that we see as emanating from them (this attitude is also called physicalist objectivism). Husserl proposed a radical new phenomenological way of looking at objects by examining how we, in our many ways of being intentionally directed toward them, actually "constitute" them (to be distinguished from materially creating objects or objects merely being figments of the imagination); in the phenomenological standpoint, the object ceases to be something simply "external" and ceases to be seen as providing indicators about what it is, and becomes a grouping of perceptual and functional aspects that imply one another under the idea of a particular object or "type". The notion of objects as real is not expelled by phenomenology, but "bracketed" as a way in which we regard objects, instead of a feature that inheres in an object's essence founded in the relation between the object and the perceiver. In order to better understand the world of appearances and objects, phenomenology attempts to identify the invariant features of how objects are perceived and pushes attributions of reality into their role as an attribution about the things we perceive (or an assumption underlying how we perceive objects). The major dividing line in Husserl's thought is the turn to transcendental idealism. In a later period, Husserl began to wrestle with the complicated issues of intersubjectivity, specifically, how communication about an object can be assumed to refer to the same ideal entity (Cartesian Meditations, Meditation V). Husserl tries new methods of bringing his readers to understand the importance of phenomenology to scientific inquiry (and specifically to psychology) and what it means to "bracket" the natural attitude. The Crisis of the European Sciences is Husserl's unfinished work that deals most directly with these issues. In it, Husserl for the first time attempts a historical overview of the development of Western philosophy and science, emphasizing the challenges presented by their increasingly (one-sidedly) empirical and naturalistic orientation. 
Husserl declares that mental and spiritual reality possesses a reality of its own, independent of any physical basis, and that a science of the mind ('Geisteswissenschaft') must be established on as scientific a foundation as the natural sciences have managed: "It is my conviction that intentional phenomenology has for the first time made spirit as spirit the field of systematic scientific experience, thus effecting a total transformation of the task of knowledge." Husserl's thought Husserl's thought is revolutionary in several ways, most notably in the distinction between "natural" and "phenomenological" modes of understanding. In the former, sense-perception in correspondence with the material realm constitutes the known reality, and understanding is premised on the accuracy of the perception and the objective knowability of what is called the "real world". Phenomenological understanding strives to be rigorously "presuppositionless" by means of what Husserl calls "phenomenological reduction". This reduction is not conditioned but rather transcendental: in Husserl's terms, pure consciousness of absolute Being. In Husserl's work, consciousness of any given thing calls for discerning its meaning as an "intentional object". Such an object does not simply strike the senses, to be interpreted or misinterpreted by mental reason; it has already been selected and grasped, grasping being an etymological connotation of percipere, the root of "perceive". Meaning and object From Logical Investigations (1900/1901) to Experience and Judgment (published in 1939), Husserl expressed clearly the difference between meaning and object. He identified several different kinds of names. For example, there are names that have the role of properties that uniquely identify an object. Each of these names expresses a meaning and designates the same object. Examples of this are "the victor in Jena" and "the loser in Waterloo", or "the equilateral triangle" and "the equiangular triangle"; in each case, the two names express different meanings but designate the same object. There are names which have no meaning, but have the role of designating an object: "Aristotle", "Socrates", and so on. Finally, there are names which designate a variety of objects. These are called "universal names"; their meaning is a "concept" and refers to a series of objects (the extension of the concept). The way we know sensible objects is called "sensible intuition". Husserl also identifies a series of "formal words" which are necessary to form sentences and have no sensible correlates. Examples of formal words are "a", "the", "more than", "over", "under", "two", "group", and so on. Every sentence must contain formal words to designate what Husserl calls "formal categories". There are two kinds of categories: meaning categories and formal-ontological categories. Meaning categories relate judgments; they include forms of conjunction, disjunction, forms of plural, among others. Formal-ontological categories relate objects and include notions such as set, cardinal number, ordinal number, part and whole, relation, and so on. The way we know these categories is through a faculty of understanding called "categorial intuition". Through sensible intuition our consciousness constitutes what Husserl calls a "situation of affairs" (Sachlage). It is a passive constitution where objects themselves are presented to us. To this situation of affairs, through categorial intuition, we are able to constitute a "state of affairs" (Sachverhalt). 
One situation of affairs through objective acts of consciousness (acts of constituting categorially) can serve as the basis for constituting multiple states of affairs. For example, suppose a and b are two sensible objects in a certain situation of affairs. We can use it as a basis to say, "a<b" and "b>a", two judgments which designate the same state of affairs. For Husserl a sentence has a proposition or judgment as its meaning, and refers to a state of affairs which has a situation of affairs as a reference base. Formal and regional ontology Husserl sees ontology as a science of essences. Sciences of essences are contrasted with factual sciences: the former are knowable a priori and provide the foundation for the latter, which are knowable a posteriori. Ontology as a science of essences is not interested in actual facts, but in the essences themselves, whether they have instances or not. Husserl distinguishes between formal ontology, which investigates the essence of objectivity in general, and regional ontologies, which study regional essences that are shared by all entities belonging to the region. Regions correspond to the highest genera of concrete entities: material nature, personal consciousness and interpersonal spirit. Husserl's method for studying ontology and sciences of essence in general is called eidetic variation. It involves imagining an object of the kind under investigation and varying its features. The changed feature is inessential to this kind if the object can survive its change, otherwise it belongs to the kind's essence. For example, a triangle remains a triangle if one of its sides is extended but it ceases to be a triangle if a fourth side is added. Regional ontology involves applying this method to the essences corresponding to the highest genera. Philosophy of logic and mathematics Husserl believed that truth-in-itself has as ontological correlate being-in-itself, just as meaning categories have formal-ontological categories as correlates. Logic is a formal theory of judgment that studies the formal a priori relations among judgments using meaning categories. Mathematics, on the other hand, is formal ontology; it studies all the possible forms of being (of objects). Hence for both logic and mathematics, the different formal categories are the objects of study, not the sensible objects themselves. The problem with the psychological approach to mathematics and logic is that it fails to account for the fact that this approach is about formal categories, and not simply about abstractions from sensibility alone. The reason why we do not deal with sensible objects in mathematics is because of another faculty of understanding called "categorial abstraction." Through this faculty we are able to get rid of sensible components of judgments, and just focus on formal categories themselves. Thanks to "eidetic reduction" (or "essential intuition"), we are able to grasp the possibility, impossibility, necessity and contingency among concepts and among formal categories. Categorial intuition, along with categorial abstraction and eidetic reduction, is the basis for logical and mathematical knowledge. Husserl criticized the logicians of his day for not focusing on the relation between subjective processes and the objective knowledge of pure logic that those processes give us. All subjective activities of consciousness need an ideal correlate, and objective logic (constituted noematically) as it is constituted by consciousness needs a noetic correlate (the subjective activities of consciousness). 
Husserl stated that logic has three strata, each further away from consciousness and psychology than those that precede it. The first stratum is what Husserl called a "morphology of meanings" concerning a priori ways to relate judgments to make them meaningful. In this stratum we elaborate a "pure grammar" or a logical syntax, and he would call its rules "laws to prevent non-sense", which would be similar to what logic today calls "formation rules". Mathematics, as logic's ontological correlate, also has a similar stratum, a "morphology of formal-ontological categories". The second stratum would be called by Husserl "logic of consequence" or the "logic of non-contradiction", which explores all possible forms of true judgments. He includes here classical syllogistic logic, propositional logic and predicate logic. This is a semantic stratum, and the rules of this stratum would be the "laws to avoid counter-sense" or "laws to prevent contradiction". They are very similar to what logic today calls "transformation rules". Mathematics also has a similar stratum, based among others on a pure theory of pluralities and a pure theory of numbers. They provide a science of the conditions of possibility of any theory whatsoever. Husserl also talked about what he called the "logic of truth", which consists of the formal laws of possible truth and its modalities, and precedes the third logical stratum. The third stratum is metalogical, what he called a "theory of all possible forms of theories." It explores all possible theories in an a priori fashion, rather than the possibility of theory in general. We could establish theories of possible relations between pure forms of theories, investigate these logical relations and the deductions from such general connections. The logician is free to see the extension of this deductive, theoretical sphere of pure logic. The ontological correlate to the third stratum is the "theory of manifolds". In formal ontology, it is a free investigation where a mathematician can assign several meanings to several symbols, and all their possible valid deductions in a general and indeterminate manner. It is, properly speaking, the most universal mathematics of all. Through the positing of certain indeterminate objects (formal-ontological categories) as well as any combination of mathematical axioms, mathematicians can explore the apodeictic connections between them, as long as consistency is preserved. According to Husserl, this view of logic and mathematics accounted for the objectivity of a series of mathematical developments of his time, such as n-dimensional manifolds (both Euclidean and non-Euclidean), Hermann Grassmann's theory of extensions, William Rowan Hamilton's Hamiltonians, Sophus Lie's theory of transformation groups, and Cantor's set theory. Jacob Klein was one student of Husserl who pursued this line of inquiry, seeking to "desedimentize" mathematics and the mathematical sciences. Husserl and psychologism Philosophy of arithmetic and Frege After obtaining his PhD in mathematics, Husserl began analyzing the foundations of mathematics from a psychological point of view. In his habilitation thesis, On the Concept of Number (1886), and in his Philosophy of Arithmetic (1891), Husserl sought, by employing Brentano's descriptive psychology, to define the natural numbers in a way that advanced the methods and techniques of Karl Weierstrass, Richard Dedekind, Georg Cantor, Gottlob Frege, and other contemporary mathematicians. 
Later, in the first volume of his Logical Investigations, the Prolegomena of Pure Logic, Husserl, while attacking the psychologistic point of view in logic and mathematics, also appears to reject much of his early
Czech Republic. He was born into a Jewish family, the second of four children. His father was a milliner. His childhood was spent in Prostějov, where he attended the secular elementary school. Then Husserl traveled to Vienna to study at the Realgymnasium there, followed next by the Staatsgymnasium in Olomouc (Ger.: Olmütz). At the University of Leipzig from 1876 to 1878, Husserl studied mathematics, physics, and astronomy. At Leipzig he was inspired by philosophy lectures given by Wilhelm Wundt, one of the founders of modern psychology. Then he moved to the Frederick William University of Berlin (the present-day Humboldt University of Berlin) in 1878 where he continued his study of mathematics under Leopold Kronecker and the renowned Karl Weierstrass. In Berlin he found a mentor in Tomáš Garrigue Masaryk, then a former philosophy student of Franz Brentano and later the first president of Czechoslovakia. There Husserl also attended Friedrich Paulsen's philosophy lectures. In 1881 he left for the University of Vienna to complete his mathematics studies under the supervision of Leo Königsberger (a former student of Weierstrass). At Vienna in 1883 he obtained his PhD with the work Beiträge zur Variationsrechnung (Contributions to the Calculus of Variations). Evidently as a result of his becoming familiar with the New Testament during his twenties, Husserl asked to be baptized into the Lutheran Church in 1886. Husserl's father Adolf had died in 1884. Herbert Spiegelberg writes, "While outward religious practice never entered his life any more than it did that of most academic scholars of the time, his mind remained open for the religious phenomenon as for any other genuine experience." At times Husserl saw his goal as one of moral "renewal". Although a steadfast proponent of a radical and rational autonomy in all things, Husserl could also speak "about his vocation and even about his mission under God's will to find new ways for philosophy and science," observes Spiegelberg. Following his PhD in mathematics, Husserl returned to Berlin to work as the assistant to Karl Weierstrass. Yet already Husserl had felt the desire to pursue philosophy. Then professor Weierstrass became very ill. Husserl became free to return to Vienna where, after serving a short military duty, he devoted his attention to philosophy. In 1884 at the University of Vienna he attended the lectures of Franz Brentano on philosophy and philosophical psychology. Brentano introduced him to the writings of Bernard Bolzano, Hermann Lotze, J. Stuart Mill, and David Hume. Husserl was so impressed by Brentano that he decided to dedicate his life to philosophy; indeed, Franz Brentano is often credited as being his most important influence, e.g., with regard to intentionality. Following academic advice, two years later in 1886 Husserl followed Carl Stumpf, a former student of Brentano, to the University of Halle, seeking to obtain his habilitation which would qualify him to teach at the university level. There, under Stumpf's supervision, he wrote Über den Begriff der Zahl (On the Concept of Number) in 1887, which would serve later as the basis for his first important work, Philosophie der Arithmetik (1891). In 1887 Husserl married Malvine Steinschneider, a union that would last over fifty years. In 1892 their daughter Elizabeth was born, in 1893 their son Gerhart, and in 1894 their son Wolfgang. Elizabeth would marry in 1922, and Gerhart in 1923; Wolfgang, however, became a casualty of the First World War. 
Gerhart would become a philosopher of law, contributing to the subject of comparative law, teaching in the United States and after the war in Austria. Professor of philosophy Following his marriage Husserl began his long teaching career in philosophy. He started in 1887 as a Privatdozent at the University of Halle. In 1891 he published his Philosophie der Arithmetik. Psychologische und logische Untersuchungen which, drawing on his prior studies in mathematics and philosophy, proposed a psychological context as the basis of mathematics. It drew the adverse notice of Gottlob Frege, who criticized its psychologism. In 1901 Husserl and his family moved to the University of Göttingen, where he taught as extraordinarius professor. Just prior to this a major work of his, Logische Untersuchungen (Halle, 1900–1901), was published. Volume One contains seasoned reflections on "pure logic" in which he carefully refutes "psychologism". This work was well received and became the subject of a seminar given by Wilhelm Dilthey; Husserl in 1905 traveled to Berlin to visit Dilthey. Two years later in Italy he paid a visit to Franz Brentano, his inspiring old teacher, and to Constantin Carathéodory, the mathematician. Kant and Descartes were also now influencing his thought. In 1910 he became joint editor of the journal Logos. During this period Husserl had delivered lectures on internal time consciousness, which several decades later his former student Heidegger edited for publication. In 1912 at Freiburg the journal Jahrbuch für Philosophie und Phänomenologische Forschung ("Yearbook for Philosophy and Phenomenological Research") was founded by Husserl and his school; it published articles of their phenomenological movement from 1913 to 1930. His important work Ideen was published in its first issue (Vol. 1, Issue 1, 1913). Before beginning Ideen Husserl's thought had reached the stage where "each subject is 'presented' to itself, and to each all others are 'presentiated' (Vergegenwärtigung), not as parts of nature but as pure consciousness." Ideen advanced his transition to a "transcendental interpretation" of phenomenology, a view later criticized by, among others, Jean-Paul Sartre. In Ideen Paul Ricœur sees the development of Husserl's thought as leading "from the psychological cogito to the transcendental cogito." As phenomenology further evolves, it leads (when viewed from another vantage point in Husserl's 'labyrinth') to "transcendental subjectivity". Also in Ideen Husserl explicitly elaborates the phenomenological and eidetic reductions. In 1913 Karl Jaspers visited Husserl at Göttingen. In October 1914 both his sons were sent to fight on the Western Front of World War I and the following year one of them, Wolfgang Husserl, was badly injured. On 8 March 1916, on the battlefield of Verdun, Wolfgang was killed in action. The next year his other son Gerhart Husserl was wounded in the war but survived. His own mother Julia died. In November 1917 one of his outstanding students and later a noted philosophy professor in his own right, Adolf Reinach, was killed in the war while serving in Flanders. Husserl had transferred in 1916 to the University of Freiburg (in Freiburg im Breisgau) where he continued bringing his work in philosophy to fruition, now as a full professor. Edith Stein served as his personal assistant during his first few years in Freiburg, followed later by Martin Heidegger from 1920 to 1923. The mathematician Hermann Weyl began corresponding with him in 1918. 
Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown. Charles Steinmetz and Oliver Heaviside contributed to the theoretical basis of alternating current engineering. The spread in the use of AC set off in the United States what has been called the war of the currents between a George Westinghouse-backed AC system and a Thomas Edison-backed DC power system, with AC being adopted as the overall standard. Early 20th century During the development of radio, many scientists and inventors contributed to radio technology and electronics. The mathematical work of James Clerk Maxwell during the 1850s had shown the relationship of different forms of electromagnetic radiation including the possibility of invisible airborne waves (later called "radio waves"). In his classic physics experiments of 1888, Heinrich Hertz proved Maxwell's theory by transmitting radio waves with a spark-gap transmitter, and detected them by using simple electrical devices. Other physicists experimented with these new waves and in the process developed devices for transmitting and detecting them. In 1895, Guglielmo Marconi began work on a way to adapt the known methods of transmitting and detecting these "Hertzian waves" into a purpose-built commercial wireless telegraphic system. Early on, he sent wireless signals over a distance of one and a half miles. In December 1901, he sent wireless waves that were not affected by the curvature of the Earth. Marconi later transmitted the wireless signals across the Atlantic between Poldhu, Cornwall, and St. John's, Newfoundland. Millimetre wave communication was first investigated by Jagadish Chandra Bose during 1894–1896, when he reached an extremely high frequency of up to 60 GHz in his experiments. He also introduced the use of semiconductor junctions to detect radio waves, when he patented the radio crystal detector in 1901. In 1897, Karl Ferdinand Braun introduced the cathode ray tube as part of an oscilloscope, a crucial enabling technology for electronic television. John Fleming invented the first radio tube, the diode, in 1904. Two years later, Robert von Lieben and Lee De Forest independently developed the amplifier tube, called the triode. In 1920, Albert Hull developed the magnetron, which would eventually lead to the development of the microwave oven in 1946 by Percy Spencer. In 1934, the British military began to make strides toward radar (which also uses the magnetron) under the direction of Dr Wimperis, culminating in the operation of the first radar station at Bawdsey in August 1936. In 1941, Konrad Zuse presented the Z3, the world's first fully functional and programmable computer using electromechanical parts. In 1943, Tommy Flowers designed and built the Colossus, the world's first fully functional, electronic, digital and programmable computer. In 1946, the ENIAC (Electronic Numerical Integrator and Computer) of John Presper Eckert and John Mauchly followed, beginning the computing era. The arithmetic performance of these machines allowed engineers to develop completely new technologies and achieve new objectives. In 1948, Claude Shannon published "A Mathematical Theory of Communication", which mathematically describes the transmission of information in the presence of uncertainty (electrical noise). Solid-state electronics The first working transistor was a point-contact transistor invented by John Bardeen and Walter Houser Brattain while working under William Shockley at the Bell Telephone Laboratories (BTL) in 1947. 
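To illustrate the quantitative character of the framework Shannon introduced in 1948, one standard result that grew out of it (the Shannon–Hartley capacity theorem, stated here as a well-known consequence rather than a quotation from the paper) gives the maximum rate C at which information can be sent with arbitrarily low error over a channel of bandwidth B hertz disturbed by additive white Gaussian noise:

C = B \log_2\!\left(1 + \frac{S}{N}\right)

where S/N is the ratio of average signal power to noise power. At a fixed signal-to-noise ratio the capacity grows linearly with bandwidth but only logarithmically with signal power, so increasing transmitter power yields diminishing returns once the signal already stands well above the noise.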
They then invented the bipolar junction transistor in 1948. While early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, they opened the door for more compact devices. The first integrated circuits were the hybrid integrated circuit invented by Jack Kilby at Texas Instruments in 1958 and the monolithic integrated circuit chip invented by Robert Noyce at Fairchild Semiconductor in 1959. The MOSFET (metal-oxide-semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at BTL in 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. It revolutionized the electronics industry, becoming the most widely used electronic device in the world. The MOSFET made it possible to build high-density integrated circuit chips. The earliest experimental MOS IC chip to be fabricated was built by Fred Heiman and Steven Hofstein at RCA Laboratories in 1962. MOS technology enabled Moore's law, the doubling of transistors on an IC chip every two years, predicted by Gordon Moore in 1965. Silicon-gate MOS technology was developed by Federico Faggin at Fairchild in 1968. Since then, the MOSFET has been the basic building block of modern electronics. The mass-production of silicon MOSFETs and MOS integrated circuit chips, along with continuous MOSFET scaling miniaturization at an exponential pace (as predicted by Moore's law), has since led to revolutionary changes in technology, economy, culture and thinking. The Apollo program which culminated in landing astronauts on the Moon with Apollo 11 in 1969 was enabled by NASA's adoption of advances in semiconductor electronic technology, including MOSFETs in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit chips in the Apollo Guidance Computer (AGC). The development of MOS integrated circuit technology in the 1960s led to the invention of the microprocessor in the early 1970s. The first single-chip microprocessor was the Intel 4004, released in 1971. The Intel 4004 was designed and realized by Federico Faggin at Intel with his silicon-gate MOS technology, along with Intel's Marcian Hoff and Stanley Mazor and Busicom's Masatoshi Shima. The microprocessor led to the development of microcomputers and personal computers, and the microcomputer revolution. Subfields One of the properties of electricity is that it is very useful for energy transmission as well as for information transmission. These were also the first areas in which electrical engineering was developed. Today electrical engineering has many subdisciplines, the most common of which are listed below. Although there are electrical engineers who focus exclusively on one of these subdisciplines, many deal with a combination of them. Sometimes certain fields, such as electronic engineering and computer engineering, are considered disciplines in their own right. Power and energy Power & Energy engineering deals with the generation, transmission, and distribution of electricity as well as the design of a range of related devices. These include transformers, electric generators, electric motors, high voltage engineering, and power electronics. In many regions of the world, governments maintain an electrical network called a power grid that connects a variety of generators together with users of their energy. Users purchase electrical energy from the grid, avoiding the costly exercise of having to generate their own. 
Power engineers may work on the design and maintenance of the power grid as well as the power systems that connect to it. Such systems are called on-grid power systems and may supply the grid with additional power, draw power from the grid, or do both. Power engineers may also work on systems that do not connect to the grid, called off-grid power systems, which in some cases are preferable to on-grid systems. Future developments may include satellite-controlled power systems with real-time feedback to prevent power surges and blackouts. Telecommunications Telecommunications engineering focuses on the transmission of information across a communication channel such as a coaxial cable, optical fiber or free space. Transmissions across free space require information to be encoded in a carrier signal to shift the information to a carrier frequency suitable for transmission; this is known as modulation. Popular analog modulation techniques include amplitude modulation and frequency modulation. The choice of modulation affects the cost and performance of a system and these two factors must be balanced carefully by the engineer. Once the transmission characteristics of a system are determined, telecommunication engineers design the transmitters and receivers needed for such systems. These two are sometimes combined to form a two-way communication device known as a transceiver. A key consideration in the design of transmitters is their power consumption, as this is closely related to their signal strength. Typically, if the power of the transmitted signal is insufficient once the signal arrives at the receiver's antenna(s), the information contained in the signal will be corrupted by noise, specifically static. Control engineering Control engineering focuses on the modeling of a diverse range of dynamic systems and the design of controllers that will cause these systems to behave in the desired manner. To implement such controllers, control engineers may use electronic circuits, digital signal processors, microcontrollers, and programmable logic controllers (PLCs). Control engineering has a wide range of applications, from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles. It also plays an important role in industrial automation. Control engineers often use feedback when designing control systems. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's power output accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. Control engineers also work in robotics to design autonomous systems using control algorithms which interpret sensory feedback to control actuators that move robots such as autonomous vehicles, autonomous drones and others used in a variety of industries. Electronics Electronic engineering involves the design and testing of electronic circuits that use the properties of components such as resistors, capacitors, inductors, diodes, and transistors to achieve a particular functionality. The tuned circuit, which allows the user of a radio to filter out all but a single station, is just one example of such a circuit. Another example is a pneumatic signal conditioner. 
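As a concrete illustration of the tuned circuit mentioned above, the short Python sketch below computes the resonant frequency of a series RLC circuit and its voltage gain at a few nearby frequencies, showing how the circuit passes one frequency and attenuates the rest. The component values are invented for the example and are not taken from the passage.

```python
# Minimal sketch of a tuned (series RLC) circuit: it passes signals near its
# resonant frequency and attenuates the rest. Component values are illustrative.
import math

def resonant_frequency(L, C):
    """Resonant frequency f0 = 1 / (2*pi*sqrt(L*C)) in hertz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def gain(f, R, L, C):
    """Magnitude of the voltage gain across R in a series RLC circuit at frequency f."""
    w = 2.0 * math.pi * f
    reactance = w * L - 1.0 / (w * C)
    return R / math.sqrt(R ** 2 + reactance ** 2)

if __name__ == "__main__":
    R, L, C = 10.0, 100e-6, 253e-12      # ohms, henries, farads (illustrative values)
    f0 = resonant_frequency(L, C)        # roughly 1 MHz with these values
    print(f"resonant frequency: {f0 / 1e6:.2f} MHz")
    for f in (0.8e6, f0, 1.2e6):
        print(f"gain at {f / 1e6:.2f} MHz: {gain(f, R, L, C):.3f}")
```

With these values the circuit resonates near 1 MHz and passes that frequency essentially unattenuated, while signals a few hundred kilohertz away are strongly suppressed, which is how a simple radio front end selects a single station.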
Prior to the Second World War, the subject was commonly known as radio engineering and basically was restricted to aspects of communications and radar, commercial radio, and early television. Later, in post-war years, as consumer devices began to be developed, the field grew to include modern television, audio systems, computers, and microprocessors. In the mid-to-late 1950s, the term radio engineering gradually gave way to the name electronic engineering. Before the invention of the integrated circuit in 1959, electronic circuits were constructed from discrete components that could be manipulated by humans. These discrete circuits consumed much space and power and were limited in speed, although they are still common in some applications. By contrast, integrated circuits packed a large number—often millions—of tiny electrical components, mainly transistors, into a small chip around the size of a coin. This allowed for the powerful computers and other electronic devices we see today. Microelectronics and nanoelectronics Microelectronics engineering deals with the design and microfabrication of very small electronic circuit components for use in an integrated circuit or sometimes for use on their own as a general electronic component. The most common microelectronic components are semiconductor transistors, although all main electronic components (resistors, capacitors etc.) can be created at a microscopic level. Nanoelectronics is the further scaling of devices down to nanometer levels. Modern devices are already in the nanometer regime, with below 100 nm processing having been standard since around 2002. Microelectronic components are created by chemically fabricating wafers of semiconductors such as silicon (at higher frequencies, compound semiconductors like gallium arsenide and indium phosphide) to obtain the desired transport of electronic charge and control of current. The field of microelectronics involves a significant amount of chemistry and material science and requires the electronic engineer working in the field to have a very good working knowledge of the effects of quantum mechanics. Signal processing Signal processing deals with the analysis and manipulation of signals. Signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information. For analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment or the modulation and demodulation of signals for telecommunications. For digital signals, signal processing may involve the compression, error detection and
error correction of digitally sampled signals. Signal processing is a very mathematically oriented and intensive area that forms the core of digital signal processing, and it is rapidly expanding with new applications in every field of electrical engineering such as communications, control, radar, audio engineering, broadcast engineering, power electronics, and biomedical engineering, as many already existing analog systems are replaced with their digital counterparts. Analog signal processing is still important in the design of many control systems. DSP processor ICs are found in many types of modern electronic devices, such as digital television sets, radios, Hi-Fi audio equipment, mobile phones, multimedia players, camcorders and digital cameras, automobile control systems, noise cancelling headphones, digital spectrum analyzers, missile guidance systems, radar systems, and telematics systems. In such products, DSP may be responsible for noise reduction, speech recognition or synthesis, encoding or decoding digital media, wirelessly transmitting or receiving data, triangulating positions using GPS, and other kinds of image processing, video processing, audio processing, and speech processing. Instrumentation Instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow, and temperature. The design of such instruments requires a good understanding of physics that often extends beyond electromagnetic theory. For example, flight instruments measure variables such as wind speed and altitude to enable pilots to control aircraft analytically. Similarly, thermocouples use the Peltier-Seebeck effect to measure the temperature difference between two points. Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace's temperature remains constant. For this reason, instrumentation engineering is often viewed as the counterpart of control. Computers Computer engineering deals with the design of computers and computer systems. This may involve the design of new hardware, the design of PDAs, tablets, and supercomputers, or the use of computers to control an industrial plant. Computer engineers may also work on a system's software. However, the design of complex software systems is often the domain of software engineering, which is usually considered a separate discipline. Desktop computers represent a tiny fraction of the devices a computer engineer might work on, as computer-like architectures are now found in a range of devices including video game consoles and DVD players.
Computer engineers are involved in many hardware and software aspects of computing. Optics and photonics Optics and photonics deals with the generation, transmission, amplification, modulation, detection, and analysis of electromagnetic radiation. The application of optics deals with the design of optical instruments such as lenses, microscopes, telescopes, and other equipment that uses light.
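As a minimal sketch of the digital filtering described in the signal processing passage above, the snippet below applies a moving-average (FIR) filter to a noisy sampled sine wave; the signal, sampling rate and window length are illustrative assumptions.

```python
# Minimal digital signal processing sketch: smooth a noisy sampled signal
# with a moving-average filter. Signal and window size are illustrative.
import math
import random

def moving_average(samples, window=5):
    """Return the moving average of `samples` using a sliding window."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

random.seed(0)
# A 5 Hz sine wave sampled at 100 Hz with additive Gaussian noise.
clean = [math.sin(2 * math.pi * 5 * n / 100) for n in range(100)]
noisy = [s + random.gauss(0, 0.3) for s in clean]
smoothed = moving_average(noisy, window=5)
print([round(x, 3) for x in smoothed[:5]])
```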
Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies. Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile. The factual setup of the experiment is not completely clear, so it is uncertain whether current flowed across the needle or not. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community, because Romagnosi seemingly did not belong to this community. An earlier (1735), and often neglected, connection between electricity and magnetism was reported by a Dr. Cookson. The account stated: A tradesman at Wakefield in Yorkshire, having put up a great number of knives and forks in a large box ... and having placed the box in the corner of a large room, there happened a sudden storm of thunder, lightning, &c. ... The owner emptying the box on a counter where some nails lay, the persons who took up the knives, that lay on the nails, observed that the knives took up the nails. On this the whole number was tried, and found to do the same, and that, to such a degree as to take up large nails, packing needles, and other iron things of considerable weight ... E. T. Whittaker suggested in 1910 that this particular event was responsible for lightning being "credited with the power of magnetizing steel; and it was doubtless this which led Franklin in 1751 to attempt to magnetize a sewing-needle by means of the discharge of Leyden jars." Fundamental forces The electromagnetic force is one of the four known fundamental forces. The other fundamental forces are: the strong nuclear force, which binds quarks to form nucleons, and binds nucleons to form nuclei; the weak nuclear force, which binds to all known particles in the Standard Model, and causes certain forms of radioactive decay (in particle physics, though, the electroweak interaction is the unified description of two of the four known fundamental interactions of nature: electromagnetism and the weak interaction); and the gravitational force. All other forces (e.g., friction, contact forces) are derived from these four fundamental forces and they are known as non-fundamental forces. The electromagnetic force is responsible for practically all phenomena one encounters in daily life above the nuclear scale, with the exception of gravity. Roughly speaking, all the forces involved in interactions between atoms can be explained by the electromagnetic force acting between the electrically charged atomic nuclei and electrons of the atoms. Electromagnetic forces also explain how these particles carry momentum by their movement. This includes the forces we experience in "pushing" or "pulling" ordinary material objects, which result from the intermolecular forces that act between the individual molecules in our bodies and those in the objects.
The electromagnetic force is also involved in all forms of chemical phenomena. A necessary part of understanding the intra-atomic and intermolecular forces is the effective force generated by the momentum of the electrons' movement, such that as electrons move between interacting atoms they carry momentum with them. As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle. The behaviour of matter at the molecular scale including its density is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves. Classical electrodynamics In 1600, William Gilbert proposed, in his De Magnete, that electricity and magnetism, while both capable of causing attraction and repulsion of objects,
were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle. The link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments in 1752. One of the first to discover and publish a link between man-made electric current and magnetism was Gian Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to produce a theory of electromagnetism that set the subject on a mathematical foundation. A theory of electromagnetism, known as classical electromagnetism, was developed by various physicists during the period between 1820 and 1873 when it culminated in the publication of a treatise by James Clerk Maxwell, which unified the preceding developments into a single theory and discovered the electromagnetic nature of light. In classical electromagnetism, the behavior of the electromagnetic field is described by a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law. One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in a vacuum is a universal constant that is dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories (electromagnetism and classical mechanics) is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaced classical kinematics with a new theory of kinematics compatible with classical electromagnetism. (For more information, see History of special relativity.) In addition, relativity theory implies that in moving frames of reference, a magnetic field transforms to a field with a nonzero electric component and conversely, a moving electric field transforms to a nonzero magnetic component, thus firmly showing that the phenomena are two sides of the same coin. Hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.) Extension to nonlinear phenomena The Maxwell equations are linear, in that a change in the sources (the charges and currents) results in a proportional change of the fields. Nonlinear dynamics can occur when electromagnetic fields couple to matter that follows nonlinear dynamical laws. This is studied, for example, in the subject of magnetohydrodynamics, which combines Maxwell theory with the Navier–Stokes equations. Quantities and units Electromagnetic units are part of a system of electrical units based primarily upon the magnetic properties of electric currents, the fundamental SI unit being the ampere. 
The units are: ampere (electric current) coulomb (electric charge) farad (capacitance) henry (inductance) ohm (resistance) siemens (conductance) tesla (magnetic flux density) volt (electric potential) watt (power) weber (magnetic flux) In the electromagnetic CGS system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in a vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system. Formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because there is no one-to-one correspondence between electromagnetic units in SI and those in CGS, as is the case for mechanical units. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit "sub-systems", including Gaussian, "ESU", "EMU", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase "CGS units" is often used to refer specifically to CGS-Gaussian units. See also Abraham–Lorentz force Aeromagnetic surveys Computational electromagnetics Double-slit experiment Electromagnet Electromagnetic induction Electromagnetic wave equation Electromagnetic scattering Electromechanics Geophysics Introduction to electromagnetism Magnetostatics Magnetoquasistatic field Optics Relativistic electromagnetism Wheeler–Feynman absorber theory
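As a small illustration of how the SI electromagnetic units above relate to their CGS-Gaussian counterparts, the sketch below applies the standard conversion factors for a few quantities; the example values being converted are illustrative assumptions.

```python
# Convert a few SI electromagnetic quantities to CGS-Gaussian units
# using standard conversion factors. The example values are illustrative.

TESLA_TO_GAUSS = 1e4                 # 1 T = 10^4 G
COULOMB_TO_STATCOULOMB = 2.998e9     # 1 C is approximately 2.998e9 statC (esu)
VOLT_TO_STATVOLT = 1 / 299.792458    # 1 V is approximately 3.336e-3 statV

print(f"0.5 T = {0.5 * TESLA_TO_GAUSS:.0f} G")
print(f"1 microcoulomb = {1e-6 * COULOMB_TO_STATCOULOMB:.3e} statC")
print(f"230 V = {230 * VOLT_TO_STATVOLT:.3f} statV")
```

The factors of roughly 3 x 10^9 reflect the appearance of the speed of light in the Gaussian system, as noted in the passage above.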
official statements or documents. For instance, one reason for the comparative scarcity of written evidence documenting the exterminations at Auschwitz, relative to their sheer number, is "directives for the extermination process obscured in bureaucratic euphemisms". Euphemisms are sometimes used to lessen the opposition to a political move. For example, according to linguist Ghil'ad Zuckermann, Israeli Prime Minister Benjamin Netanyahu used the neutral Hebrew lexical item פעימות peimót ("beatings (of the heart)"), rather than נסיגה nesigá ("withdrawal"), to refer to the stages in the Israeli withdrawal from the West Bank (see Wye River Memorandum), in order to lessen the opposition of right-wing Israelis to such a move. The lexical item פעימות peimót, which literally means "beatings (of the heart)" is thus a euphemism for "withdrawal". Rhetoric Euphemism may be used as a rhetorical strategy, in which case its goal is to change the valence of a description. Controversial use The act of labeling a term as a euphemism can in itself be controversial, as in the following two examples: Affirmative action, meaning a preference for minorities or the historically disadvantaged, usually in employment or academic admissions. This term is sometimes said to be a euphemism for reverse discrimination, or, in the UK, positive discrimination, which suggests an intentional bias that might be legally prohibited, or otherwise unpalatable. Enhanced interrogation is sometimes said to be a euphemism for torture. For example, columnist David Brooks called the use of this term for practices at Abu Ghraib, Guantánamo, and elsewhere an effort to "dull the moral sensibility". Formation methods Phonetic modification Phonetic euphemism is used to replace profanities, diminishing their intensity. Modifications include: Shortening or "clipping" the term, such as Jeez (Jesus) and what the— ("what the hell" or "what the fuck") Mispronunciations, such as frak, frig (both the preceding for "fuck"), what the fudge, what the truck (both "what the fuck"), oh my gosh ("oh my God"), frickin ("fucking"), darn ("damn"), oh shoot ("oh shit"), be-yotch ("bitch"), etc. This is also referred to as a minced oath. Using acronyms as replacements, such as SOB ("son of a bitch"), what the eff ("what the fuck"), S my D ("suck my dick"), POS ("piece of shit"), BS ("bullshit"). Sometimes, the word "word" or "bomb" is added after it, such as F-word ("fuck"), S-word ("shit"), B-word ("bitch"), N-word ("nigger"), etc. Also, the letter can be phonetically respelled. Pronunciation To alter the pronunciation or spelling of a taboo word (such as a swear word) to form a euphemism is known as taboo deformation, or a minced oath. In American English, words that are unacceptable on television, such as fuck, may be represented by deformations such as freak, even in children's cartoons. Feck is a minced oath originating in Hiberno-English and popularised outside of Ireland by the British sitcom Father Ted. Some examples of Cockney rhyming slang may serve the same purpose: to call a person a berk sounds less offensive than to call a person a cunt, though berk is short for Berkeley Hunt, which rhymes with cunt. Understatement Euphemisms formed from understatements include: asleep for dead and drinking for consuming alcohol. "Tired and emotional" is a notorious British euphemism for "drunk", one of many recurring jokes popularised by the satirical magazine Private Eye; it has been used by MPs to avoid unparliamentary language. 
Substitution Pleasant, positive, worthy, neutral, or nondescript terms are substituted for explicit or unpleasant ones, with many substituted terms deliberately coined by sociopolitical progressive movements, cynically by planned marketing, public relations, or advertising initiatives, including: "meat packing company" for "slaughter-house" (avoids entirely the subject of killing); "natural issue" or "love child" for "bastard"; "let go" for "fired" or "dismissed" (implies a generosity on the part of the employer in allowing employee to depart); "intimate" for "sexual"; "adult material" for "pornography"; "exotic dancer" for "stripper"; "issue" for "problem"; "adult beverage" for "alcoholic beverage"; "expecting" for "pregnant"; "health problem" for "illness"; "make love" for "have sex"; "special friend" for "romantic partner"; "high-net worth" for "rich"; "plus-sized"
for "overweight"; "memorial marker" for "gravestone"; "staff-member" for "servant"; "colleague" for "employee" (apparent promotion from servant to partner); "operative" for "worker" (elevates status); "turf-accountant" or "book-maker" for "betting shop" (professionalises an unworthy activity); "marital aid" for "sex toy" (converts to an object fulfilling a worthy objective); "special needs" for disability; or "final expenses" for "funeral costs". Basic ancient and (overly) direct Anglo-Saxon words such as deaf, dumb, blind, lame, all have modern euphemisms. Over time, it becomes socially unacceptable to use the latter word, as one is effectively downgrading the matter concerned to its former lower status, and the euphemism becomes dominant, due to a wish not to offend. Metaphor Metaphors (beat the meat, choke the chicken, or jerkin' the gherkin for masturbation; take a dump and take a leak for defecation and urination, respectively) Comparisons (buns for buttocks, weed for cannabis) Metonymy (men's room for "men's toilet") Slang The use of a term with a softer connotation, though it shares the same meaning. For instance, screwed up is a euphemism for fucked up; hook-up and laid are euphemisms for sexual intercourse. Foreign words Expressions or words from a foreign language may be imported for use as euphemism. For example, the French word enceinte was sometimes used instead of the English word pregnant; abattoir for "slaughter-house", although in French the word retains its explicit violent meaning "a place for beating down", conveniently lost on non-French speakers. "Entrepreneur" for "business-man", adds glamour; "douche" (French: shower) for vaginal irrigation device; "bidet" (French: little pony) for "vessel for intimate ablutions". Ironically, although in English physical "handicaps" are almost always described with euphemism, in French the English word "handicap" is used as a euphemism for their problematic words "infirmité" or "invalidité". Periphrasis/circumlocution Periphrasis, or circumlocution, is one of the most common: to "speak around" a given word, implying it without saying it. Over time, circumlocutions become recognized as established euphemisms for particular words or ideas. Doublespeak Bureaucracies frequently spawn euphemisms intentionally, as doublespeak expressions. For example, in the past, the US military used the term "sunshine units" for contamination by radioactive isotopes. Even today, the United States Central Intelligence Agency refers to systematic torture as "enhanced interrogation techniques". An effective death sentence in the Soviet Union during the Great Purge often used the clause "imprisonment without right to correspondence": the person sentenced would be shot soon after conviction. As early as 1939, Nazi official Reinhard Heydrich used the term Sonderbehandlung ("special treatment") to mean summary execution of persons viewed as "disciplinary problems" by the Nazis even before commencing the systematic extermination of the Jews. Heinrich Himmler, aware that the word had come to be known to mean murder, replaced that euphemism with one in which Jews would be "guided" (to their deaths) through the slave-labor and extermination camps after having been "evacuated" to their doom. Such was part of the formulation of Endlösung der Judenfrage (the "Final Solution to the Jewish Question"), which became known to the outside world during the Nuremberg Trials. 
Lifespan Frequently, over time, euphemisms themselves become taboo words, through the linguistic process of semantic change known as pejoration, which University of Oregon linguist Sharon Henderson Taylor dubbed the "euphemism cycle" in 1974, also frequently referred to as the "euphemism treadmill". For instance, toilet is an 18th-century euphemism, replacing the older euphemism house-of-office, which in turn replaced the even older euphemisms privy-house and bog-house. The act of human defecation is possibly the most needy candidate for a euphemism in all eras. In the 20th century, where the old euphemisms lavatory (a place where one washes) or toilet (a place where one dresses) had grown from widespread usage (e.g., in the United States) to being synonymous with the crude act they sought to deflect, they were sometimes replaced with bathroom (a place where one bathes), washroom (a place where one washes), or restroom (a place where one rests) or even by the extreme form powder room (a place where one applies facial cosmetics). The form water closet, which in turn became euphemised to W.C., is a less deflective form. Another example in American English is the replacement of colored people with
the "evils" of the Irish people into three prominent categories: laws, customs and religion. According to Spenser, these three elements worked together in creating the supposedly "disruptive and degraded people" which inhabited the country. One example given in the work is the Irish law system termed "Brehon law", which at the time trumped the established law as dictated by the Crown. The Brehon system had its own court and methods of punishing infractions committed. Spenser viewed this system as a backward custom which contributed to the "degradation" of the Irish people. A particular legal punishment viewed with distaste by Spenser was the Brehon method of dealing with murder, which was to impose an éraic (fine) on the murderer's family. From Spenser's viewpoint, the appropriate punishment for murder was capital punishment. Spenser also warned of the dangers that allowing the education of children in the Irish language would bring: "Soe that the speach being Irish, the hart must needes be Irishe; for out of the aboundance of the hart, the tonge speaketh". He pressed for a scorched earth policy in Ireland, noting its effectiveness in the Second Desmond Rebellion: "'Out of everye corner of the woode and glenns they came creepinge forth upon theire handes, for theire legges could not beare them; they looked Anatomies [of] death, they spake like ghostes, crying out of theire graves; they did eate of the carrions, happye wheare they could find them, yea, and one another soone after, in soe much as the verye carcasses they spared not to scrape out of theire graves; and if they found a plott of water-cresses or shamrockes, theyr they flocked as to a feast… in a shorte space there were none almost left, and a most populous and plentyfull countrye suddenly lefte voyde of man or beast: yett sure in all that warr, there perished not manye by the sworde, but all by the extreamytie of famine ... they themselves had wrought.'" List of works Iambicum Trimetrum 1569: Jan van der Noodt's A Theatre for Worldlings, including poems translated into English by Spenser from French sources, published by Henry Bynneman in London 1579: The Shepheardes Calender, published under the pseudonym "Immerito" (entered into the Stationers' Register in December) 1590: The Faerie Queene, Books 1–3 1591: Complaints, Containing Sundrie Small Poemes of the Worlds Vanitie (entered into the Stationer's Register in 1590), includes: "The Ruines of Time" "The Teares of the Muses" "Virgil's Gnat" "Prosopopoia, or Mother Hubberds Tale" "Ruines of Rome: by Bellay" "Muiopotmos, or the Fate of the Butterflie" "Visions of the Worlds Vanitie" "The Visions of Bellay" "The Visions of Petrarch" 1592: Axiochus, a translation of a pseudo-Platonic dialogue from the original Ancient Greek; published by Cuthbert Burbie; attributed to "Edw: Spenser" but the attribution is uncertain Daphnaïda. An Elegy upon the Death of the Noble and Vertuous Douglas Howard, Daughter and Heire of Henry Lord Howard, Viscount Byndon, and Wife of Arthure Gorges Esquier (published in London in January, according to one source; another source gives 1591 as the year) 1595: Amoretti and Epithalamion, containing: "Amoretti" "Epithalamion" Astrophel. 
A Pastorall Elegie vpon the Death of the Most Noble and Valorous Knight, Sir Philip Sidney Colin Clouts Come Home Againe 1596: Fowre Hymnes dedicated from the court at Greenwich; published with the second edition of Daphnaida Prothalamion The Faerie Queene, Books 4–6 Babel, Empress of the East – a dedicatory poem prefaced to Lewes Lewkenor's The Commonwealth of Venice, 1599. Posthumous: 1609: Two Cantos of Mutabilitie published together with a reprint of The Faerie Queene 1611: First folio edition of Spenser's collected works 1633: A Vewe of the Present State of Irelande, a prose treatise on the reformation of Ireland, first published by Sir James Ware (historian) entitled The Historie of Ireland (Spenser's work was entered into the Stationer's Register in 1598 and circulated in manuscript but not published until it was edited by Ware) Editions Edmund Spenser, Selected Letters and Other Papers. Edited by Christopher Burlinson and Andrew Zurcher (Oxford, OUP, 2009). Edmund Spenser, The Faerie-Queene (Longman-Annotated-English Poets, 2001, 2007) Edited by A. C. Hamilton, Text Edited by Yamashita and Toshiyuki Suzuki. Digital archive Washington University in St. Louis professor Joseph Lowenstein, with the assistance of several undergraduate students, has been involved in creating, editing, and annotating a digital archive of the first publication of poet Edmund Spenser's collective works in 100 years. A large grant from the National Endowment for the Humanities has been given to support this ambitious project centralized at Washington University with support from other colleges in the United States.
1596. Spenser originally indicated that he intended the poem to consist of twelve books, so the version of the poem we have today is incomplete. Despite this, it remains one of the longest poems in the English language. It is an allegorical work, and can be read (as Spenser presumably intended) on several levels of allegory, including as praise of Queen Elizabeth I. In a completely allegorical context, the poem follows several knights in an examination of several virtues. In Spenser's "A Letter of the Authors", he states that the entire epic poem is "cloudily enwrapped in allegorical devises", and that the aim behind The Faerie Queene was to "fashion a gentleman or noble person in virtuous and gentle discipline". Shorter poems Spenser published numerous relatively short poems in the last decade of the sixteenth century, almost all of which consider love or sorrow. In 1591, he published Complaints, a collection of poems that express complaints in mournful or mocking tones. Four years later, in 1595, Spenser published Amoretti and Epithalamion. This volume contains eighty-eight sonnets commemorating his courtship of Elizabeth Boyle. In Amoretti, Spenser uses subtle humour and parody while praising his beloved, reworking Petrarchism in his treatment of longing for a woman. Epithalamion, similar to Amoretti, deals in part with the unease in the development of a romantic and sexual relationship. It was written for his wedding to his young bride, Elizabeth Boyle. Some have speculated that the attention to disquiet, in general, reflects Spenser's personal anxieties at the time, as he was unable to complete his most significant work, The Faerie Queene. In the following year, Spenser released Prothalamion, a wedding song written for the daughters of a duke, allegedly in the hope of gaining favour at court. The Spenserian stanza and sonnet Spenser used a distinctive verse form, called the Spenserian stanza, in several works, including The Faerie Queene. The stanza's main meter is iambic pentameter with a final line in iambic hexameter (having six feet or stresses, known as an Alexandrine), and the rhyme scheme is ABABBCBCC. He also used his own rhyme scheme for the sonnet. In a Spenserian sonnet, the last line of every quatrain is linked with the first line of the next one, yielding the rhyme scheme ABAB BCBC CDCD EE. "Men Call you Fayre" is a fine sonnet from Amoretti. The poet presents the concept of true beauty in the poem. He addresses the sonnet to his beloved, Elizabeth Boyle, and presents his courtship. Like all Renaissance men, Edmund Spenser believed that love is an inexhaustible source of beauty and order. In this sonnet, the poet expresses his idea of true beauty. Physical beauty fades after a short time; it is not a permanent beauty. He emphasises beauty of mind and beauty of intellect. He considers that his beloved is not simply flesh but also a spiritual being. The poet opines that his beloved is born of heavenly seed and derived from fair spirit. The poet states that because of her clean mind, pure heart and sharp intellect, men call her fair and she deserves it. At the end, the poet praises her spiritual beauty and he worships her because of her Divine Soul. Influences Though Spenser was well-read in classical literature, scholars have noted that his poetry does not rehash tradition, but rather is distinctly his. This individuality may have resulted, to some extent, from a lack of comprehension of the classics.
Spenser strove to emulate such ancient Roman poets as Virgil and Ovid, whom he studied during his schooling, but many of his best-known works are notably divergent from those of his predecessors. The language of his poetry is purposely archaic, reminiscent of earlier works such as The Canterbury Tales of Geoffrey Chaucer and Il Canzoniere of Francesco Petrarca, whom Spenser greatly admired. An Anglican and a devotee of the Protestant Queen Elizabeth, Spenser was particularly offended by the anti-Elizabethan propaganda that some Catholics circulated. Like most Protestants near the time of the Reformation, Spenser saw a Catholic church full of corruption, and he determined that it was not only the wrong religion but the anti-religion. This sentiment is an important backdrop for the battles of The Faerie Queene. Spenser was called "the Poet's Poet" by Charles Lamb, and was admired by John Milton, William Blake, William Wordsworth, John Keats, Lord Byron, Alfred Tennyson and others. Among his contemporaries Walter Raleigh wrote a commendatory poem to The Faerie Queene in 1590, in which he claims to admire and value Spenser's work more so than any other in the English language. John Milton in his Areopagitica mentions "our sage and serious poet Spenser, whom I dare be known to think a better teacher than Scotus or Aquinas". In the eighteenth century, Alexander Pope compared Spenser to "a mistress, whose faults we see, but love her with them all." A View of the Present State of Ireland In his work A View of the Present State of Irelande (1596), Spenser discussed future plans to establish control over Ireland, the most recent Irish uprising, led by Hugh O'Neill having demonstrated the futility of previous efforts. The work is partly a defence of Lord Arthur Grey de Wilton, who was appointed Lord Deputy of Ireland in 1580, and who greatly influenced Spenser's thinking on Ireland. The goal of the piece was to show that Ireland was in great need of reform. Spenser believed that "Ireland is a diseased portion of the State, it must
Centralised generation Centralised generation refers to the common process of generating electricity in large-scale centralised facilities and delivering it to consumers through transmission lines. These facilities are usually located far away from consumers and distribute the electricity through high-voltage transmission lines to a substation, where it is then distributed to consumers; the basic concept is that very large stations create electricity for a large number of people. The vast majority of electricity used is created by centralised generation. Most centralised power generation comes from large power plants run on fossil fuels such as coal or natural gas, though nuclear or large hydroelectric plants are also commonly used. Many object to centralised generation because it often relies on the combustion of fossil fuels, which is harmful to the environment. However unsustainable the current system may be, it remains by far the most widely used, reliable and efficient system in operation. Centralised generation is fundamentally the opposite of distributed generation.
Distributed generation is the small-scale generation of electricity for smaller groups of consumers. This can also include independently producing electricity by either solar or wind power. In recent years distributed generation has seen a surge in popularity due to its propensity to use renewable energy generation methods such as wind and solar. Technologies Centralised energy sources are large thermal power stations that produce huge amounts of electricity for a large number of consumers. This is the traditional way of producing energy. Almost all power plants used in centralised generation are thermal power plants, meaning that they burn a fuel to produce high-pressure steam, which in turn spins a turbine and generates electricity. This process relies on several forms of technology to produce widespread electricity, these being coal, natural gas and nuclear forms of thermal generation. Coal Coal power stations produce steam by burning coal dug up from the earth. This steam, under intensely high pressure, is forced into a turbine. These turbines are connected to generators that spin at high speeds, creating electricity. Following generation, the steam is cooled back into water to be heated once again to produce electricity. A single coal power plant can produce electricity for 70 000 homes, but can use up to 14 000 tonnes of coal a day to heat its boiler (a rough estimate of the corresponding output appears below). The fundamental issues regarding the use of coal in electricity generation are the greenhouse gases released by the burning of coal and the limited amount of coal on earth, leading many to agree that it is a very unsustainable way of producing electricity. Natural gas Natural gas is ignited to create pressurised gas which is used to spin turbines to generate electricity. Natural gas plants use a gas turbine, where natural gas is added along with oxygen, which in turn combusts and expands through the turbine to force a generator to spin. Natural gas power plants are more efficient than coal power generation; they also contribute to climate change, but not as heavily as coal generation. Not only do they produce carbon dioxide from the ignition of natural gas, but the extraction of gas when mined also releases a significant amount of methane into the atmosphere. Nuclear Nuclear power plants create electricity through the process of nuclear fission. Currently nuclear power produces 11% of all electricity in the world. Nuclear reactors use uranium as a source of fuel to power the reactors. When these nuclear atoms are split, a sudden release of energy occurs which can be converted into heat. This process is called nuclear fission. Electricity is created through the use of a nuclear reactor, where heat produced by nuclear fission is used to produce steam, which in turn spins turbines and powers the generators. Although there are several types of nuclear reactors, all fundamentally use this process. Although nuclear energy produces very few emissions, several accidents throughout history have led many to question the safety and risks associated with these plants. Accidents such as the Chernobyl disaster and the Fukushima nuclear disaster have led many to disagree with the practice of nuclear energy generation. Uses Although the amount of electricity needed varies at all times, the base load is the minimum amount of energy required at one time; this makes up the majority of all energy and must be created by large power stations that have the ability to run all day, every day.
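As a rough back-of-envelope check on coal figures like those quoted above, the sketch below estimates the average electrical output of a plant burning a given tonnage of coal per day; the coal energy density and thermal efficiency used here are illustrative assumptions, not figures from this article.

```python
# Back-of-envelope estimate of a coal plant's average electrical output.
# The energy density and efficiency below are illustrative assumptions.

COAL_ENERGY_MJ_PER_KG = 24.0      # assumed energy content of coal
THERMAL_EFFICIENCY = 0.35         # assumed plant efficiency
SECONDS_PER_DAY = 24 * 3600

def average_output_mw(coal_tonnes_per_day: float) -> float:
    """Average electrical output (MW) from burning the given tonnage per day."""
    thermal_mj = coal_tonnes_per_day * 1000 * COAL_ENERGY_MJ_PER_KG
    electrical_mj = thermal_mj * THERMAL_EFFICIENCY
    return electrical_mj / SECONDS_PER_DAY  # MJ per second = MW

print(f"{average_output_mw(14_000):.0f} MW average output")  # roughly 1 360 MW
```

Under these assumptions, 14 000 tonnes of coal per day corresponds to a sustained output on the order of a gigawatt, which is consistent with a large centralised station.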
Only nuclear, coal, oil, gas and some hydro plants can reliably supply the base load. Many greener methods rely heavily on variables such as the sun and wind, and thus their output varies too much to support the base load. Highly industrial areas tend to be powered almost entirely by thermal energy plants such as coal or gas-powered plants, as their huge power output is necessary to power industry in the region. The localised effect of pollution is also minimal, as industrial regions are usually far from suburban areas. The plants can also cope with large variation in power output by adjusting the production of the turbines. Large thermal power plants produce the vast majority of electricity for residential areas; although there has been an increase in renewable sources of energy, they still only make up around 8% of all electricity consumed. However, these renewable sources of electricity aid in meeting fluctuations in electricity demand, as their output can often be adjusted to meet the needs of the grid. Through transmission lines the majority of electricity is distributed to residential areas. Features The total amount of electricity used fluctuates and depends highly on factors such as the time of day, the date and the weather. When demand varies, operators must vary total output from the power plants. This is usually done through collaboration with other power plants, thus keeping the power grid in equilibrium. This adds a complication for the entire power grid, as it is often difficult to adjust the power output of large thermal power plants. Although complex and often inefficient, centralised generation is the most popular way of producing and distributing energy in the world. The three major aspects of centralised generation are: generation, transmission and distribution. Generation Electricity is generated throughout the world in many ways using a variety of resources. The three most common resources used are natural gas, nuclear and coal, but renewable sources of generation are growing quickly. The most common way to generate electricity is through the transformation of kinetic energy into electricity by large electric generators. The vast majority of electrical generation is produced through electromagnetic induction, where mechanical energy supplied through a turbine drives a generator to rotate and produce electricity. Often utilities need to purchase more electricity through a wholesale market from a rival utility or wholesale retailer; this is brokered and organised by a regional transmission reliability organisation. It is necessary for utilities to produce electricity far away from consumers, as the plants are large and release extensive greenhouse gases. Thus, there is a major process of transporting electricity from the generation plants to consumers. Centralised generation vs. distributed generation Centralised generation versus distributed generation is an argument that has recently surfaced, with many claiming that centralised generation is a thing of the past and that distributed generation is the future of electricity production. Distributed generation is the small-scale production of electricity, often by individuals who have their own way of producing the energy that they then use. Distributed energy is usually described as using environmentally sustainable practices such as solar or wind as opposed to nuclear, gas or coal. An example of this would be solar panels on one's house or a small local energy producer.
Distributed energy is usually seen as far better for the environment, as it does not use large-scale thermal combustion to produce energy. It also does not rely on a network of power grids that can often be unreliable and leave many without power. Over the past few years there has been a major increase in the use of distributed generation, as many governments are promoting the technology through subsidies as a way of reducing greenhouse emissions. But others say that the economies of scale of centralised generation outweigh the transmission costs. See also Cogeneration: the use of a heat engine or power station to generate electricity and useful heat at the same time. Cost of electricity by source Diesel generator Electric generator Engine-generator Electric power transmission World energy consumption: the total energy used by all of human civilization. Electrification Nuclear
is based on Faraday's law. It can be seen experimentally by rotating a magnet within closed loops of conducting material (e.g. copper wire). Almost all commercial electrical generation is done using electromagnetic induction, in which mechanical energy forces a generator to rotate. Electrochemistry Electrochemistry is the direct transformation of chemical energy into electricity, as in a battery. Electrochemical electricity generation is important in portable and mobile applications. Currently, most electrochemical power comes from batteries. Primary cells, such as the common zinc–carbon batteries, act as power sources directly, but secondary cells (i.e. rechargeable batteries) are used for storage systems rather than primary generation systems. Open electrochemical systems, known as fuel cells, can be used to extract power either from natural fuels or from synthesized fuels. Osmotic power is a possibility at places where salt and fresh water merge. Photovoltaic effect The photovoltaic effect is the transformation of light into electrical energy, as in solar cells. Photovoltaic panels convert sunlight directly to DC electricity. Power inverters can then convert that to AC electricity if needed. Although sunlight is free and abundant, solar power electricity is still usually more expensive to produce than large-scale mechanically generated power due to the cost of the panels. Low-efficiency silicon solar cells have been decreasing in cost and multijunction cells with close to 30% conversion efficiency are now commercially available. Over 40% efficiency has been demonstrated in experimental systems. Until recently, photovoltaics were most commonly used in remote sites where there is no access to a commercial power grid, or as a supplemental electricity source for individual homes and businesses. Recent advances in manufacturing efficiency and photovoltaic technology, combined with subsidies driven by environmental concerns, have dramatically accelerated the deployment of solar panels. Installed capacity is growing by 40% per year led by increases in Germany, Japan, United States, China, and India. Economics The selection of electricity production modes and their economic viability varies in accordance with demand and region. The economics vary considerably around the world, resulting in widespread residential selling prices, e.g. the price in Iceland is 5.54 cents per kWh while in some island nations it is 40 cents per kWh. Hydroelectric plants, nuclear power plants, thermal power plants and renewable sources have their own pros and cons, and selection is based upon the local power requirement and the fluctuations in demand. All power grids have varying loads on them but the daily minimum is the base load, often supplied by plants which run continuously. Nuclear, coal, oil, gas and some hydro plants can supply base load. If well construction costs for natural gas are below $10 per MWh, generating electricity from natural gas is cheaper than generating power by burning coal. Thermal energy may be economical in areas of high industrial density, as the high demand cannot be met by local renewable sources. The effect of localized pollution is also minimized as industries are usually located away from residential areas. These plants can also withstand variation in load and consumption by adding more units or temporarily decreasing the production of some units. Nuclear power plants can produce a huge amount of power from a single unit. 
However, nuclear disasters have raised concerns over the safety of nuclear power, and the capital cost of nuclear plants is very high. Hydroelectric power plants are located in areas where the potential energy from falling water can be harnessed to drive turbines and generate power. Hydroelectricity may not be an economically viable single source of production where the ability to store the flow of water is limited and the load varies too much during the annual production cycle. Due to advancements in technology, and with mass production, renewable sources other than hydroelectricity (solar power, wind energy) have seen decreases in production costs, and the energy is now in many cases as cheap as, or cheaper than, energy from fossil fuels. Many governments around the world provide subsidies to offset the higher cost of any new power production, and to make the installation of renewable energy systems economically feasible. Generating equipment Electric generators were known in simple forms from the discovery of electromagnetic induction in the 1830s. In general, some form of prime mover, such as an engine or the turbines described below, drives a rotating magnetic field past stationary coils of wire, thereby turning mechanical energy into electricity. The only commercial-scale electricity production that does not employ a generator is solar PV. Turbines Almost all commercial electrical power on Earth is generated with a turbine, driven by wind, water, steam or burning gas. The turbine drives a generator, thus transforming its mechanical energy into electrical energy by electromagnetic induction. There are many different methods of developing mechanical energy, including heat engines, hydro, wind and tidal power. Most electric generation is driven by heat engines. The combustion of fossil fuels supplies most of the energy to these engines, with a significant fraction from nuclear fission and some from renewable sources. The modern steam turbine (invented by Sir Charles Parsons in 1884) currently generates about 80% of the electric power in the world using a variety of heat sources. Turbine types include: Steam: water is boiled using heat from coal burned in a thermal power plant; about 41% of all electricity is generated this way. Nuclear: fission heat created in a nuclear reactor creates steam; less than 15% of electricity is generated this way. Renewable energy: the steam is generated by biomass, solar thermal energy, or geothermal power. Natural gas: turbines are driven directly by gases produced by combustion. Combined cycle: plants are driven by both steam and natural gas, generating power by burning natural gas in a gas turbine and using the residual heat to generate steam; at least 20% of the world's electricity is generated from natural gas. Water: energy is captured by a water turbine from the movement of water, whether from falling water, the rise and fall of tides, or ocean thermal currents (see ocean thermal energy conversion); currently, hydroelectric plants provide approximately 16% of the world's electricity. Wind: the windmill was a very early wind turbine; in 2018 around 5% of the world's electricity was produced from wind. Although turbines are most common in commercial power generation, smaller generators can be powered by gasoline or diesel engines. These may be used for backup generation or as a prime source of power within isolated villages. Production Total worldwide gross production of electricity in 2016 was 25,082 TWh.
Sources of electricity were coal and peat (38.3%), natural gas (23.1%), hydroelectric (16.6%), nuclear power (10.4%), oil (3.7%), solar/wind/geothermal/tidal/other (5.6%), and biomass and waste (2.3%). Historical results of production of electricity Production by country The United States has long been the largest producer and consumer of electricity, with a global share in 2005 of at least 25%, followed by China, Japan, Russia, and India. In 2011, China overtook the United States to become the largest producer of electricity. Environmental concerns The environmental impact of generating electricity varies considerably from country to country, depending on the mix of sources used. In France only 10% of electricity is generated from fossil fuels; the share is higher in the US at 70% and in China at 80%. The cleanliness of electricity depends on its source. Most scientists agree that emissions of pollutants and greenhouse gases from fossil fuel-based electricity generation account for a significant portion of world greenhouse gas emissions. In the United States, fossil fuel combustion for electric power generation is responsible for 65% of all emissions of sulfur dioxide, the main component of acid rain. Electricity generation is the fourth highest combined source of NOx, carbon monoxide, and particulate matter in the US. According to the International Energy Agency (IEA), low-carbon electricity generation needs to account for 85% of global electrical output by 2040 in order to ward off the worst effects of climate change. Like other organizations including the Energy Impact Center (EIC) and the United Nations Economic Commission for Europe (UNECE), the IEA has called for the expansion of nuclear and renewable energy to meet that objective. Some, like EIC founder Bret Kugelmass, believe that nuclear power is the primary method for decarbonizing electricity generation because it can also power direct air capture that removes existing carbon emissions from the atmosphere. Nuclear power plants can also support district heating and desalination projects, limiting carbon emissions and the need for expanded electrical output. A fundamental issue with centralised generation and the electrical generation methods in use today is the significant negative environmental effect of many of the generation processes. Fuels such as coal and gas not only release carbon dioxide when they are burned, but their extraction from the ground also impacts the environment. Open-pit coal mines use large areas of land to extract coal and limit the potential for productive land use after excavation. Natural gas extraction releases large amounts of methane into the atmosphere, greatly increasing atmospheric concentrations of greenhouse gases. Although nuclear power plants do not release carbon dioxide through electricity generation, there are significant risks associated with nuclear waste and with the safety of nuclear plants. This fear of nuclear power stems from large-scale nuclear catastrophes such as the Chernobyl disaster and the Fukushima Daiichi nuclear disaster, both of which led to significant casualties and the radioactive contamination of large areas. Centralised generation Centralised generation refers to the common practice of generating electricity at large-scale centralised facilities and delivering it to consumers through transmission lines.
These facilities are usually located far away from consumers and deliver electricity through high-voltage transmission lines to substations, from which it is then distributed to consumers; the basic concept is that very large stations generate electricity for a large number of people. The vast majority of electricity used is created from
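To make the electromagnetic induction mechanism described earlier in this passage concrete, here is a minimal sketch of Faraday's law applied to an idealised coil rotating in a uniform magnetic field. The coil parameters are illustrative assumptions, not values taken from the text, and the calculation ignores losses and load effects.

```python
import math

# Illustrative (assumed) parameters for a small demonstration generator.
N = 200      # number of turns in the coil
B = 0.5      # magnetic flux density, tesla
A = 0.01     # coil area, square metres (10 cm x 10 cm)
f = 50.0     # rotation frequency, hertz

omega = 2 * math.pi * f   # angular speed, rad/s

# For a coil rotating in a uniform field, the flux linkage is
#   flux(t) = N * B * A * cos(omega * t)
# and Faraday's law gives the induced EMF as
#   emf(t) = -d(flux)/dt = N * B * A * omega * sin(omega * t)
peak_emf = N * B * A * omega

print(f"Peak EMF: {peak_emf:.1f} V")                 # about 314 V for these numbers
print(f"RMS EMF:  {peak_emf / math.sqrt(2):.1f} V")  # about 222 V for these numbers
```

The same relationship is why a generator's output voltage scales with rotation speed, field strength, and coil size.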
the other items. For example, the question of design of experiments is: which experiment is better? The variance of the estimate X1 of θ1 is σ² if we use the first experiment. But if we use the second experiment, the variance of the estimate given above is σ²/8. Thus the second experiment gives us 8 times as much precision for the estimate of a single item, and estimates all items simultaneously, with the same precision. What the second experiment achieves with eight weighings would require 64 weighings if the items are weighed separately. However, note that the estimates for the items obtained in the second experiment have errors that correlate with each other. Many problems of the design of experiments involve combinatorial designs, as in this example and others (a simulation sketch of this weighing comparison appears below). Avoiding false positives False positive conclusions, often resulting from the pressure to publish or the author's own confirmation bias, are an inherent hazard in many fields. A good way to prevent biases potentially leading to false positives in the data collection phase is to use a double-blind design. When a double-blind design is used, participants are randomly assigned to experimental groups but the researcher is unaware of which participants belong to which group. Therefore, the researcher cannot affect the participants' response to the intervention. Experimental designs with undisclosed degrees of freedom are a problem. This can lead to conscious or unconscious "p-hacking": trying multiple things until you get the desired result. It typically involves the manipulation – perhaps unconscious – of the process of statistical analysis and the degrees of freedom until they return a figure below the p<.05 level of statistical significance. So the design of the experiment should include a clear statement proposing the analyses to be undertaken. P-hacking can be prevented by preregistering studies, in which researchers have to send their data analysis plan to the journal they wish to publish their paper in before they even start their data collection, so that no data manipulation is possible (https://osf.io). Another way to prevent this is to extend the double-blind design to the data-analysis phase, where the data are sent to a data analyst unrelated to the research who scrambles the data so that there is no way to know which group participants belong to before outliers are potentially removed. Clear and complete documentation of the experimental methodology is also important in order to support replication of results. Discussion topics when setting up an experimental design An experimental design or randomized clinical trial requires careful consideration of several factors before actually doing the experiment. An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment. Some of the following topics have already been discussed in the principles of experimental design section: How many factors does the design have, and are the levels of these factors fixed or random? Are control conditions needed, and what should they be? Manipulation checks: did the manipulation really work? What are the background variables? What is the sample size? How many units must be collected for the experiment to be generalisable and have enough power? What is the relevance of interactions between factors? What is the influence of delayed effects of substantive factors on outcomes? How do response shifts affect self-report measures?
How feasible is repeated administration of the same measurement instruments to the same units on different occasions, with a post-test and follow-up tests? What about using a proxy pretest? Are there lurking variables? Should the client/patient, researcher or even the analyst of the data be blind to conditions? What is the feasibility of subsequent application of different conditions to the same units? How many of each control and noise factors should be taken into account? The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, which is where their intervention testing the hypothesis is implemented, and a control group, which has all the same elements as the experimental group except for the interventional element. Thus, when everything else except for one intervention is held constant, researchers can say with some certainty that this one element is what caused the observed change. In some instances, having a control group is not ethical. This is sometimes solved by using two different experimental groups. In some cases, independent variables cannot be manipulated, for example when testing the difference between two groups that have different diseases, or testing the difference between genders (obviously variables that would be hard or unethical to assign participants to). In these cases, a quasi-experimental design may be used. Causal attributions In the pure experimental design, the independent (predictor) variable is manipulated by the researcher – that is, every participant of the research is chosen randomly from the population, and each participant chosen is assigned randomly to conditions of the independent variable. Only when this is done is it possible to conclude with high probability that the differences in the outcome variables are caused by the different conditions. Therefore, researchers should choose the experimental design over other design types whenever possible. However, the nature of the independent variable does not always allow for manipulation. In those cases, researchers must be careful not to claim causal attribution when their design does not allow for it. For example, in observational designs, participants are not assigned randomly to conditions, and so if there are differences found in outcome variables between conditions, it is likely that something other than the differences between the conditions is causing the differences in outcomes – that is, a third variable. The same goes for studies with a correlational design (Adér & Mellenbergh, 2008). Statistical control It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments. To control for nuisance variables, researchers institute control checks as additional measures. Investigators should ensure that uncontrolled influences (e.g., source credibility perception) do not skew the findings of the study. A manipulation check is one example of a control check. Manipulation checks allow investigators to isolate the chief variables to strengthen support that these variables are operating as planned. One of the most important requirements of experimental research designs is the necessity of eliminating the effects of spurious, intervening, and antecedent variables. In the most basic model, cause (X) leads to effect (Y).
But there could be a third variable (Z) that influences (Y), and X might not be the true cause at all. Z is said to be a spurious variable and must be controlled for. The same is true for intervening variables (a variable in between the supposed cause (X) and the effect (Y)), and anteceding variables (a variable prior to the supposed cause (X) that is the true cause). When a third variable is involved and has not been controlled for, the relation is said to be a zero-order relationship. In most practical applications of experimental research designs there are several causes (X1, X2, X3). In most designs, only one of these causes is manipulated at a time. Experimental designs after Fisher Some efficient designs for estimating several main effects were found independently and in near succession by Raj Chandra Bose and K. Kishen in 1940 at the Indian Statistical Institute, but remained little known until the Plackett–Burman designs were published in Biometrika in 1946. About the same time, C. R. Rao introduced the concept of orthogonal arrays as experimental designs. This concept played a central role in the development of Taguchi methods by Genichi Taguchi, which took place during his visit to the Indian Statistical Institute in the early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries and subsequently were also embraced by US industry, albeit with some reservations. In 1950, Gertrude Mary Cox and William Gemmell Cochran published the book Experimental Designs, which became the major reference work on the design of experiments for statisticians for years afterwards. Developments of the theory of linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in linear algebra, algebra and combinatorics. As with other branches of statistics, experimental design is pursued using both frequentist and Bayesian approaches: in evaluating statistical procedures like experimental designs, frequentist statistics studies the sampling distribution while Bayesian statistics updates a probability distribution on the parameter space. Some important contributors to the field of experimental designs are C. S. Peirce, R. A. Fisher, F. Yates, R. C. Bose, A. C. Atkinson, R. A. Bailey, D. R. Cox, G. E. P. Box, W. G. Cochran, W. T. Federer, V. V. Fedorov, A. S. Hedayat, J. Kiefer, O. Kempthorne, J. A. Nelder, Andrej Pázman, Friedrich Pukelsheim, D. Raghavarao, C. R. Rao, Shrikhande S. S., J. N. Srivastava, William J. Studden, G. Taguchi and H. P. Wynn. The textbooks of D. Montgomery, R. Myers, and G. Box/W. Hunter/J.S. Hunter have reached generations of students and practitioners. Some discussion of experimental design in the context of system identification (model building for static or dynamic models) is given in the system identification literature. Human participant constraints Laws and ethical considerations preclude some carefully designed experiments with human subjects. Legal
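Returning to the weighing example discussed near the start of this section (one item per weighing versus eight combined weighings), the following simulation sketch illustrates the claimed reduction in variance from σ² to σ²/8. The item weights, the noise level, and the use of a Hadamard matrix as the ±1 weighing design are illustrative assumptions, not the exact design described earlier.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)

true_weights = np.array([5.0, 3.2, 7.1, 2.4, 6.6, 1.9, 4.3, 8.8])  # assumed example values
sigma = 0.1        # standard deviation of a single weighing error
n_trials = 20_000  # repeat both experiments many times to estimate variances

# Experiment 1: weigh each of the 8 items once, separately (8 weighings).
separate = true_weights + rng.normal(0.0, sigma, size=(n_trials, 8))

# Experiment 2: 8 combined weighings defined by an 8x8 matrix of +/-1 entries
# (+1 = item in one pan, -1 = item in the other pan); each weighing observes
# the signed sum of the item weights plus the same measurement noise.
H = hadamard(8).astype(float)                # symmetric, so H.T equals H
observations = true_weights @ H.T + rng.normal(0.0, sigma, size=(n_trials, 8))

# Least-squares estimate: since H.T @ H = 8 * I, theta_hat = H.T @ y / 8.
combined = observations @ H / 8.0

print("Mean per-item variance, separate weighings:", separate.var(axis=0).mean())
print("Mean per-item variance, combined design:   ", combined.var(axis=0).mean())
print("Ratio (expected to be about 8):",
      separate.var(axis=0).mean() / combined.var(axis=0).mean())
```

Running the sketch shows the combined design estimating all eight weights with roughly one eighth of the variance of the one-item-per-weighing approach, using the same number of weighings.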
of phenomena as perceived in experience. Later empiricism referred to a theory of knowledge in philosophy which adheres to the principle that knowledge arises from experience and evidence gathered specifically using the senses. In scientific use, the term empirical refers to the gathering of data using only evidence that is observable by the senses or in some cases using calibrated scientific instruments. What early philosophers described as empiricist and empirical research have in common is the dependence on observable data to formulate and test theories and come to conclusions. Usage The researcher attempts to describe accurately the interaction between the instrument (or the human senses) and the entity being observed. If instrumentation is involved, the researcher is expected to calibrate his/her instrument by applying it to known standard objects and documenting the results before applying it to unknown objects. In other words, it describes the research that has not taken place before and their results. In practice, the accumulation of evidence for or against any particular theory involves planned research designs for the collection of empirical data, and academic rigor plays a large part of judging the merits of research design. Several typologies for such designs have been suggested, one of the most popular of which comes from Campbell and Stanley. They are responsible for popularizing the widely cited distinction among pre-experimental, experimental, and quasi-experimental designs and are staunch advocates of the central role of randomized experiments in educational research. Scientific research Accurate analysis of data using standardized statistical methods in scientific studies is critical to determining the validity of empirical research. Statistical formulas such as regression, uncertainty coefficient, t-test, chi square, and various types of ANOVA (analyses of variance) are fundamental to forming logical, valid conclusions. If empirical data reach significance under the appropriate statistical formula, the research hypothesis is supported. If not, the null hypothesis is supported (or, more accurately, not rejected), meaning no effect of the independent variable(s) was observed on the dependent variable(s). The outcome of empirical research using statistical hypothesis testing is never proof. It can only support a hypothesis, reject it, or do neither. These methods yield only probabilities. Among scientific researchers, empirical evidence (as distinct from empirical research) refers to objective evidence that appears the same regardless of the observer. For example, a thermometer will not display different temperatures for each individual who observes it. Temperature, as measured by an accurate, well calibrated thermometer, is empirical evidence. By contrast, non-empirical evidence is subjective, depending on the observer. Following the previous example, observer A might truthfully report that a room is warm, while observer B might truthfully report that the same room is cool, though both observe the same reading on the thermometer. The use of empirical evidence negates this effect of personal (i.e., subjective) experience or time. The varying perception of empiricism and rationalism shows concern with the limit to which there is dependency on experience of sense as an effort of gaining knowledge. According to rationalism, there are a number of different ways in which sense experience is gained independently for the knowledge and concepts. 
According to empiricism, sense experience is the main source of all knowledge and concepts. In general, rationalists develop their views in two ways. First, they argue that there are cases in which the content of knowledge or concepts outstrips the information that sense experience can provide (Hjørland, 2010, 2). Second, they construct accounts of how reasoning provides additional knowledge about a specific or broader scope. Empiricists present complementary lines of thought. First, they develop accounts of how experience provides the information that rationalists cite, insofar as we have it in the first place. At times, empiricists opt for skepticism as an alternative to rationalism: if experience cannot provide the knowledge or concepts that rationalists cite, then they
vocal music will remember fewer words on a later memory test than people who study a word list in silence."). These predictions can then be tested with a suitable experiment. Depending on the outcomes of the experiment, the theory on which the hypotheses and predictions were based will be supported or not, or may need to be modified and then subjected to further testing. Terminology The term empirical was originally used to refer to certain ancient Greek practitioners of medicine who rejected adherence to the dogmatic doctrines of the day, preferring instead to rely on the observation of phenomena as perceived in experience. Later empiricism referred to a theory of knowledge in philosophy which adheres to the principle that knowledge arises from experience and evidence gathered specifically using the senses. In scientific use, the term empirical refers to the gathering of data using only evidence that is observable by the senses or in some cases using calibrated scientific instruments. What early philosophers described as empiricist and empirical research have in common is the dependence on observable data to formulate and test theories and come to conclusions. Usage The researcher attempts to describe accurately the interaction between the instrument (or the human senses) and the entity being observed. If instrumentation is involved, the researcher is expected to calibrate his/her instrument by applying it to known standard objects and documenting the results before applying it to unknown objects. In other words, it describes the research that has not taken place before and their results. In practice, the accumulation of evidence for or against any particular theory involves planned research designs for the collection of empirical data, and academic rigor plays a large part of judging the merits of research design. Several typologies for such designs have been suggested, one of the most popular of which comes from Campbell and Stanley. They are responsible for popularizing the widely cited distinction among pre-experimental, experimental, and quasi-experimental designs and are staunch advocates of the central role of randomized experiments in educational research. Scientific research Accurate analysis of data using standardized statistical methods in scientific studies is critical to determining the validity of empirical research. Statistical formulas such as regression, uncertainty coefficient, t-test, chi square, and various types of ANOVA (analyses of variance) are fundamental to forming logical, valid conclusions. If empirical data reach significance under the appropriate statistical formula, the research hypothesis is supported. If not, the null hypothesis is supported (or, more accurately, not rejected), meaning no effect of the independent variable(s) was observed on the dependent variable(s). The outcome of empirical research using statistical hypothesis testing is never proof. It can only support a hypothesis, reject it, or do neither. These methods yield only probabilities. Among scientific researchers, empirical evidence (as distinct from empirical research) refers to objective evidence that appears the same regardless of the observer. For example, a thermometer will not display different temperatures for each individual who observes it. Temperature, as measured by an accurate, well calibrated thermometer, is empirical evidence. By contrast, non-empirical evidence is subjective, depending on the observer. 
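As a concrete illustration of the significance testing described above, here is a minimal sketch of an independent-samples t-test applied to the earlier vocal-music memory hypothesis. The recall scores, group sizes, and alpha level are made-up illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Made-up recall scores (words remembered out of 30) for two groups of 40 participants.
silence_group = rng.normal(loc=20, scale=4, size=40)
music_group = rng.normal(loc=17, scale=4, size=40)

# Welch's independent-samples t-test (does not assume equal group variances).
t_stat, p_value = stats.ttest_ind(silence_group, music_group, equal_var=False)

alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Significant: reject the null hypothesis of no difference in recall.")
else:
    print("Not significant: fail to reject the null hypothesis.")
```

As the passage above notes, such a result only supports or fails to support the hypothesis probabilistically; it is not proof.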
I. The integration of computers and calculators into industry brought about a more efficient means of analyzing data and the beginning of engineering statistics. Examples Factorial Experimental Design A factorial experiment is one where, contrary to the standard experimental philosophy of changing only one independent variable and holding everything else constant, multiple independent variables are tested at the same time. With this design, statistical engineers can see both the direct effect of each independent variable (the main effect) and potential interaction effects, which arise when multiple independent variables produce a different result together than either would on its own. Six Sigma Six Sigma is a set of techniques for improving the reliability of a manufacturing process. Ideally, all products would have exactly the desired specifications, but the countless imperfections of real-world manufacturing make this impossible. The as-built specifications of a product are assumed to be centered around a mean, with each individual product deviating some amount from that mean in a normal distribution. The goal of Six Sigma is to ensure that the acceptable specification limits are six standard deviations away from the mean of the distribution; in other words, that each step of the manufacturing process has at most a 0.00034% chance of producing a defect.
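Both examples above can be made concrete with a short numerical sketch. The first part estimates the main effects and the interaction effect in a 2×2 factorial experiment using made-up response values; the second reproduces the defect-rate arithmetic behind the Six Sigma figure quoted above, where the commonly cited value of about 3.4 defects per million opportunities (roughly 0.00034%) assumes the conventional 1.5-sigma shift of the process mean. All response values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# --- 2x2 factorial experiment --------------------------------------------
# Two factors A and B, each at a low (-1) and high (+1) level.
# The responses are made-up yields for each factor combination.
#                A   B   response
runs = np.array([[-1, -1, 52.0],
                 [+1, -1, 60.0],
                 [-1, +1, 55.0],
                 [+1, +1, 71.0]])

A, B, y = runs[:, 0], runs[:, 1], runs[:, 2]

# Effect = (mean response at the high level) - (mean response at the low level).
main_effect_A = y[A == +1].mean() - y[A == -1].mean()
main_effect_B = y[B == +1].mean() - y[B == -1].mean()
interaction_AB = y[A * B == +1].mean() - y[A * B == -1].mean()

print(f"Main effect of A:  {main_effect_A:+.1f}")
print(f"Main effect of B:  {main_effect_B:+.1f}")
print(f"A x B interaction: {interaction_AB:+.1f}")

# --- Six Sigma defect probability -----------------------------------------
# Probability of falling outside +/- 6 standard deviations for a centred process.
p_centred = 2 * norm.sf(6.0)

# The usual "3.4 defects per million" figure assumes the process mean drifts
# by 1.5 sigma, leaving only 4.5 sigma to the nearer specification limit.
p_shifted = norm.sf(4.5)

print(f"Centred process, beyond 6 sigma:         {p_centred:.2e} ({p_centred * 1e2:.8f}%)")
print(f"With 1.5-sigma shift (beyond 4.5 sigma): {p_shifted:.2e} ({p_shifted * 1e2:.5f}%)")
```

A nonzero interaction term indicates that the effect of one factor depends on the level of the other, which is exactly what a one-factor-at-a-time experiment cannot detect.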
design of experiments for efficiently generating informative data for fitting such models. History Engineering statistics dates back to 1000 B.C., when the abacus was developed as a means of calculating numerical data. In the 1600s, the development of information processing to systematically analyze and process data began. In 1654, the slide rule was developed by Robert Bissaker for advanced data calculations. In 1833, the British mathematician Charles Babbage designed the idea of an automatic computer, which inspired developers at Harvard University and IBM to design the first mechanical automatic-sequence-controlled calculator, called MARK I. The integration of computers and calculators into industry brought about a more efficient means of analyzing data and the beginning of engineering statistics.
included Poe's letters as evidence. Many of his claims were either lies or distortions; for example, it is seriously disputed that Poe was a drug addict. Griswold's book was denounced by those who knew Poe well, including John Neal, who published an article defending Poe and attacking Griswold as a "Rhadamanthus, who is not to be bilked of his fee, a thimble-full of newspaper notoriety". Griswold's book nevertheless became a popularly accepted biographical source. This was in part because it was the only full biography available and was widely reprinted, and in part because readers thrilled at the thought of reading works by an "evil" man. Letters that Griswold presented as proof were later revealed as forgeries. Literary style and themes Genres Poe's best known fiction works are Gothic, adhering to the genre's conventions to appeal to the public taste. His most recurring themes deal with questions of death, including its physical signs, the effects of decomposition, concerns of premature burial, the reanimation of the dead, and mourning. Many of his works are generally considered part of the dark romanticism genre, a literary reaction to transcendentalism which Poe strongly disliked. He referred to followers of the transcendental movement as "Frog-Pondians", after the pond on Boston Common, and ridiculed their writings as "metaphor—run mad," lapsing into "obscurity for obscurity's sake" or "mysticism for mysticism's sake". Poe once wrote in a letter to Thomas Holley Chivers that he did not dislike transcendentalists, "only the pretenders and sophists among them". Beyond horror, Poe also wrote satires, humor tales, and hoaxes. For comic effect, he used irony and ludicrous extravagance, often in an attempt to liberate the reader from cultural conformity. "Metzengerstein" is the first story that Poe is known to have published and his first foray into horror, but it was originally intended as a burlesque satirizing the popular genre. Poe also reinvented science fiction, responding in his writing to emerging technologies such as hot air balloons in "The Balloon-Hoax". Poe wrote much of his work using themes aimed specifically at mass-market tastes. To that end, his fiction often included elements of popular pseudosciences, such as phrenology and physiognomy. Literary theory Poe's writing reflects his literary theories, which he presented in his criticism and also in essays such as "The Poetic Principle". He disliked didacticism and allegory, though he believed that meaning in literature should be an undercurrent just beneath the surface. Works with obvious meanings, he wrote, cease to be art. He believed that work of quality should be brief and focus on a specific single effect. To that end, he believed that the writer should carefully calculate every sentiment and idea. Poe describes his method in writing "The Raven" in the essay "The Philosophy of Composition", and he claims to have strictly followed this method. It has been questioned whether he really followed this system, however. T. S. Eliot said: "It is difficult for us to read that essay without reflecting that if Poe plotted out his poem with such calculation, he might have taken a little more pains over it: the result hardly does credit to the method." Biographer Joseph Wood Krutch described the essay as "a rather highly ingenious exercise in the art of rationalization". Legacy Influence During his lifetime, Poe was mostly recognized as a literary critic. 
Fellow critic James Russell Lowell called him "the most discriminating, philosophical, and fearless critic upon imaginative works who has written in America", suggesting—rhetorically—that he occasionally used prussic acid instead of ink. Poe's caustic reviews earned him the reputation of being a "tomahawk man". A favorite target of Poe's criticism was Boston's acclaimed poet Henry Wadsworth Longfellow, who was often defended by his literary friends in what was later called "The Longfellow War". Poe accused Longfellow of "the heresy of the didactic", writing poetry that was preachy, derivative, and thematically plagiarized. Poe correctly predicted that Longfellow's reputation and style of poetry would decline, concluding, "We grant him high qualities, but deny him the Future". Poe was also known as a writer of fiction and became one of the first American authors of the 19th century to become more popular in Europe than in the United States. Poe is particularly respected in France, in part due to early translations by Charles Baudelaire. Baudelaire's translations became definitive renditions of Poe's work in Continental Europe. Poe's early detective fiction tales featuring C. Auguste Dupin laid the groundwork for future detectives in literature. Sir Arthur Conan Doyle said, "Each [of Poe's detective stories] is a root from which a whole literature has developed.... Where was the detective story until Poe breathed the breath of life into it?" The Mystery Writers of America have named their awards for excellence in the genre the "Edgars". Poe's work also influenced science fiction, notably Jules Verne, who wrote a sequel to Poe's novel The Narrative of Arthur Gordon Pym of Nantucket called An Antarctic Mystery, also known as The Sphinx of the Ice Fields. Science fiction author H. G. Wells noted, "Pym tells what a very intelligent mind could imagine about the south polar region a century ago". In 2013, The Guardian cited Pym as one of the greatest novels ever written in the English language, and noted its influence on later authors such as Doyle, Henry James, B. Traven, and David Morrell. Horror author and historian H. P. Lovecraft was heavily influenced by Poe's horror tales, dedicating an entire section of his long essay, "Supernatural Horror in Literature", to his influence on the genre. In his letters, Lovecraft stated, "When I write stories, Edgar Allan Poe is my model." Alfred Hitchcock once said, "It's because I liked Edgar Allan Poe's stories so much that I began to make suspense films". Like many famous artists, Poe's works have spawned imitators. One trend among imitators of Poe has been claims by clairvoyants or psychics to be "channeling" poems from Poe's spirit. One of the most notable of these was Lizzie Doten, who published Poems from the Inner Life in 1863, in which she claimed to have "received" new compositions by Poe's spirit. The compositions were re-workings of famous Poe poems such as "The Bells", but which reflected a new, positive outlook. Even so, Poe has also received criticism. This is partly because of the negative perception of his personal character and its influence upon his reputation. William Butler Yeats was occasionally critical of Poe and once called him "vulgar". Transcendentalist Ralph Waldo Emerson reacted to "The Raven" by saying, "I see nothing in it", and derisively referred to Poe as "the jingle man". Aldous Huxley wrote that Poe's writing "falls into vulgarity" by being "too poetical"—the equivalent of wearing a diamond ring on every finger. 
It is believed that only twelve copies have survived of Poe's first book Tamerlane and Other Poems. In December 2009, one copy sold at Christie's auctioneers in New York City for $662,500, a record price paid for a work of American literature. Physics and cosmology Eureka: A Prose Poem, an essay written in 1848, included a cosmological theory that presaged the Big Bang theory by 80 years, as well as the first plausible solution to Olbers' paradox. Poe eschewed the scientific method in Eureka and instead wrote from pure intuition. For this reason, he considered it a work of art, not science, but insisted that it was still true and considered it to be his career masterpiece. Even so, Eureka is full of scientific errors. In particular, Poe's suggestions ignored Newtonian principles regarding the density and rotation of planets. Cryptography Poe had a keen interest in cryptography. He had placed a notice of his abilities in the Philadelphia paper Alexander's Weekly (Express) Messenger, inviting submissions of ciphers which he proceeded to solve. In July 1841, Poe had published an essay called "A Few Words on Secret Writing" in Graham's Magazine. Capitalizing on public interest in the topic, he wrote "The Gold-Bug" incorporating ciphers as an essential part of the story. Poe's success with cryptography relied not so much on his deep knowledge of that field (his method was limited to the simple substitution cryptogram) as on his knowledge of the magazine and newspaper culture. His keen analytical abilities, which were so evident in his detective stories, allowed him to see that the general public was largely ignorant of the methods by which a simple substitution cryptogram can be solved, and he used this to his advantage. The sensation that Poe created with his cryptography stunts played a major role in popularizing cryptograms in newspapers and magazines. Two ciphers he published in 1841 under the name "W. B. Tyler" were not solved until 1992 and 2000 respectively. One was a quote from Joseph Addison's play Cato; the other is probably based on a poem by Hester Thrale. Poe had an influence on cryptography beyond increasing public interest during his lifetime. William Friedman, America's foremost cryptologist, was heavily influenced by Poe. Friedman's initial interest in cryptography came from reading "The Gold-Bug" as a child, an interest that he later put to use in deciphering Japan's PURPLE code during World War II. In popular culture As a character The historical Edgar Allan Poe has appeared as a fictionalized character, often representing the "mad genius"
as the city celebrated the visit of the Marquis de Lafayette. In March 1825, Allan's uncle and business benefactor William Galt died, who was said to be one of the wealthiest men in Richmond, leaving Allan several acres of real estate. The inheritance was estimated at $750,000 (). By summer 1825, Allan celebrated his expansive wealth by purchasing a two-story brick house called Moldavia. Poe may have become engaged to Sarah Elmira Royster before he registered at the University of Virginia in February 1826 to study ancient and modern languages. The university was in its infancy, established on the ideals of its founder Thomas Jefferson. It had strict rules against gambling, horses, guns, tobacco, and alcohol, but these rules were mostly ignored. Jefferson had enacted a system of student self-government, allowing students to choose their own studies, make their own arrangements for boarding, and report all wrongdoing to the faculty. The unique system was still in chaos, and there was a high dropout rate. During his time there, Poe lost touch with Royster and also became estranged from his foster father over gambling debts. He claimed that Allan had not given him sufficient money to register for classes, purchase texts, and procure and furnish a dormitory. Allan did send additional money and clothes, but Poe's debts increased. Poe gave up on the university after a year but did not feel welcome returning to Richmond, especially when he learned that his sweetheart Royster had married another man, Alexander Shelton. He traveled to Boston in April 1827, sustaining himself with odd jobs as a clerk and newspaper writer, and he started using the pseudonym Henri Le Rennet during this period. Military career Poe was unable to support himself, so he enlisted in the United States Army as a private on May 27, 1827, using the name "Edgar A. Perry". He claimed that he was even though he was 18. He first served at Fort Independence in Boston Harbor for five dollars a month. That same year, he released his first book, a 40-page collection of poetry titled Tamerlane and Other Poems, attributed with the byline "by a Bostonian". Only 50 copies were printed, and the book received virtually no attention. Poe's regiment was posted to Fort Moultrie in Charleston, South Carolina, and traveled by ship on the brig Waltham on November 8, 1827. Poe was promoted to "artificer", an enlisted tradesman who prepared shells for artillery, and had his monthly pay doubled. He served for two years and attained the rank of Sergeant Major for Artillery (the highest rank that a non-commissioned officer could achieve); he then sought to end his five-year enlistment early. Poe revealed his real name and his circumstances to his commanding officer, Lieutenant Howard, who would only allow Poe to be discharged if he reconciled with Allan. Poe wrote a letter to Allan, who was unsympathetic and spent several months ignoring Poe's pleas; Allan may not have written to Poe even to make him aware of his foster mother's illness. Frances Allan died on February 28, 1829, and Poe visited the day after her burial. Perhaps softened by his wife's death, Allan agreed to support Poe's attempt to be discharged in order to receive an appointment to the United States Military Academy at West Point, New York. Poe was finally discharged on April 15, 1829, after securing a replacement to finish his enlisted term for him. 
Before entering West Point, he moved back to Baltimore for a time to stay with his widowed aunt Maria Clemm, her daughter Virginia Eliza Clemm (Poe's first cousin), his brother Henry, and his invalid grandmother Elizabeth Cairnes Poe. In September of that year, Poe received "the very first words of encouragement I ever remember to have heard" in a review of his poetry by influential critic John Neal, prompting Poe to dedicate one of the poems to Neal in his second book Al Aaraaf, Tamerlane and Minor Poems, published in Baltimore in 1829. Poe traveled to West Point and matriculated as a cadet on July 1, 1830. In October 1830, Allan married his second wife Louisa Patterson. The marriage and bitter quarrels with Poe over the children born to Allan out of extramarital affairs led to the foster father finally disowning Poe. Poe decided to leave West Point by purposely getting court-martialed. On February 8, 1831, he was tried for gross neglect of duty and disobedience of orders for refusing to attend formations, classes, or church. He tactically pleaded not guilty to induce dismissal, knowing that he would be found guilty. Poe left for New York in February 1831 and released a third volume of poems, simply titled Poems. The book was financed with help from his fellow cadets at West Point, many of whom donated 75 cents to the cause, raising a total of $170. They may have been expecting verses similar to the satirical ones that Poe had been writing about commanding officers. It was printed by Elam Bliss of New York, labeled as "Second Edition," and including a page saying, "To the U.S. Corps of Cadets this volume is respectfully dedicated". The book once again reprinted the long poems "Tamerlane" and "Al Aaraaf" but also six previously unpublished poems, including early versions of "To Helen", "Israfel", and "The City in the Sea". Poe returned to Baltimore to his aunt, brother, and cousin in March 1831. His elder brother Henry had been in ill health, in part due to problems with alcoholism, and he died on August 1, 1831. Publishing career After his brother's death, Poe began more earnest attempts to start his career as a writer, but he chose a difficult time in American publishing to do so. He was one of the first Americans to live by writing alone and was hampered by the lack of an international copyright law. American publishers often produced unauthorized copies of British works rather than paying for new work by Americans. The industry was also particularly hurt by the Panic of 1837. There was a booming growth in American periodicals around this time, fueled in part by new technology, but many did not last beyond a few issues. Publishers often refused to pay their writers or paid them much later than they promised, and Poe repeatedly resorted to humiliating pleas for money and other assistance. After his early attempts at poetry, Poe had turned his attention to prose, likely based on John Neal's critiques in The Yankee magazine. He placed a few stories with a Philadelphia publication and began work on his only drama Politian. The Baltimore Saturday Visiter awarded him a prize in October 1833 for his short story "MS. Found in a Bottle". The story brought him to the attention of John P. Kennedy, a Baltimorean of considerable means who helped Poe place some of his stories and introduced him to Thomas W. White, editor of the Southern Literary Messenger in Richmond. Poe became assistant editor of the periodical in August 1835, but White discharged him within a few weeks for being drunk on the job. 
Poe returned to Baltimore where he obtained a license to marry his cousin Virginia on September 22, 1835, though it is unknown if they were married at that time. He was 26 and she was 13. Poe was reinstated by White after promising good behavior, and he went back to Richmond with Virginia and her mother. He remained at the Messenger until January 1837. During this period, Poe claimed that its circulation increased from 700 to 3,500. He published several poems, book reviews, critiques, and stories in the paper. On May 16, 1836, he and Virginia held a Presbyterian wedding ceremony performed by Amasa Converse at their Richmond boarding house, with a witness falsely attesting Clemm's age as 21. Poe's novel The Narrative of Arthur Gordon Pym of Nantucket was published and widely reviewed in 1838. In the summer of 1839, Poe became assistant editor of Burton's Gentleman's Magazine. He published numerous articles, stories, and reviews, enhancing his reputation as a trenchant critic which he had established at the Messenger. Also in 1839, the collection Tales of the Grotesque and Arabesque was published in two volumes, though he made little money from it and it received mixed reviews. In June 1840, Poe published a prospectus announcing his intentions to start his own journal called The Stylus, although he originally intended to call it The Penn, as it would have been based in Philadelphia. He bought advertising space for his prospectus in the June 6, 1840 issue of Philadelphia's Saturday Evening Post: "Prospectus of the Penn Magazine, a Monthly Literary journal to be edited and published in the city of Philadelphia by Edgar A. Poe." The journal was never produced before Poe's death. Poe left Burton's after about a year and found a position as writer and co-editor at the then-very-successful monthly Graham's Magazine. In the last number of Graham's for 1841, Poe was among the co-signatories to an editorial note of celebration of the tremendous success that magazine had achieved in the past year: "Perhaps the editors of no magazine, either in America or in Europe, ever sat down, at the close of a year, to contemplate the progress of their work with more satisfaction than we do now. Our success has been unexampled, almost incredible. We may assert without fear of contradiction that no periodical ever witnessed the same increase during so short a period." Around this time, Poe attempted to secure a position within the administration of President John Tyler, claiming that he was a member of the Whig Party. He hoped to be appointed to the United States Custom House in Philadelphia with help from President Tyler's son Robert, an acquaintance of Poe's friend Frederick Thomas. Poe failed to show up for a meeting with Thomas to discuss the appointment in mid-September 1842, claiming to have been sick, though Thomas believed that he had been drunk. Poe was promised an appointment, but all positions were filled by others. One evening in January 1842, Virginia showed the first signs of consumption, now known as tuberculosis, while singing and playing the piano, which Poe described as breaking a blood vessel in her throat. She only partially recovered, and Poe began to drink more heavily under the stress of her illness. He left Graham's and attempted to find a new position, for a time angling for a government post. He returned to New York where he worked briefly at the Evening Mirror before becoming editor of the Broadway Journal, and later its owner. 
There Poe alienated himself from other writers by publicly accusing Henry Wadsworth Longfellow of plagiarism, though Longfellow never responded. On January 29, 1845, his poem "The Raven" appeared in the Evening Mirror and became a popular sensation. It made Poe a household name almost instantly, though he was paid only $9 for its publication. It was concurrently published in The American Review: A Whig Journal under the pseudonym "Quarles". The Broadway Journal failed in 1846, and Poe moved to a cottage in Fordham, New York, in what is now the Bronx. That home is now known as the Edgar Allan Poe Cottage, relocated to a park near the southeast corner of the Grand Concourse and Kingsbridge Road. Nearby, Poe befriended the Jesuits at St. John's College, now Fordham University. Virginia died at the cottage on January 30, 1847. Biographers and critics often suggest that Poe's frequent theme of the "death of a beautiful woman" stems from the repeated loss of women throughout his life, including his wife. Poe was increasingly unstable after his wife's death. He attempted to court poet Sarah Helen Whitman who lived in Providence, Rhode Island. Their engagement failed, purportedly because of Poe's drinking and erratic behavior. There is also strong evidence that Whitman's mother intervened and did much to derail their relationship. Poe then returned to Richmond and resumed a relationship with his childhood sweetheart Sarah Elmira Royster. Death On October 3, 1849, Poe was found delirious on the streets of Baltimore, "in great distress, and... in need of immediate assistance", according to Joseph W. Walker, who found him. He was taken to the Washington Medical College, where he died on Sunday, October 7, 1849, at 5:00 in the morning. Poe was not coherent long enough to explain how he came to be in his dire condition and was wearing clothes that were not his own. He is said to have repeatedly called out the name "Reynolds" on the night before his death, though it is unclear to whom he was referring. Some sources say that Poe's final words were, "Lord help my poor soul". All medical records have been lost, including Poe's death certificate. Newspapers at the time reported Poe's death as "congestion of the brain" or "cerebral inflammation", common euphemisms for death from disreputable causes such as alcoholism. The actual cause of death remains a mystery. Speculation has included delirium tremens, heart disease, epilepsy, syphilis, meningeal inflammation, cholera, carbon monoxide poisoning, and rabies. One theory dating from 1872 suggests that cooping was the cause of Poe's death, a form of electoral fraud in which citizens were forced to vote for a particular candidate, sometimes leading to violence and even murder. Griswold's "Memoir" Immediately after Poe's death, his literary rival Rufus Wilmot Griswold wrote a slanted high-profile obituary under a pseudonym, filled with falsehoods that cast him as a lunatic and a madman, and which described him as a person who "walked the streets, in madness or melancholy, with lips moving in indistinct curses, or with eyes upturned in passionate prayers, (never for himself, for he felt, or professed to feel, that he was already damned)". The long obituary appeared in the New York Tribune signed "Ludwig" on the day that Poe was buried. It was soon further published throughout the country. The piece began, "Edgar Allan Poe is dead. He died in Baltimore the day before yesterday. This announcement will startle many, but few will be grieved by it." 
"Ludwig" was soon identified as Griswold, an editor, critic, and anthologist who had borne a grudge against Poe since 1842. Griswold somehow became Poe's literary executor and attempted to destroy his enemy's reputation after his death. Griswold wrote a biographical article of Poe called "Memoir of the Author", which he included in an 1850 volume of the collected works. There he depicted Poe as a depraved, drunken, drug-addled madman and included Poe's letters as evidence. Many of his claims were either lies or distortions; for example, it is seriously disputed that Poe was a drug addict. Griswold's book was denounced by those who knew Poe well, including John Neal, who published an article defending Poe and attacking Griswold as a "Rhadamanthus, who is not to be bilked of his fee, a thimble-full of newspaper notoriety". Griswold's book nevertheless became a popularly accepted biographical source. This was in part because it was the only full biography available and was widely reprinted, and in part because readers thrilled at the thought of reading works by an "evil" man. Letters that Griswold presented as proof were later revealed as forgeries. Literary style and themes Genres Poe's best known fiction works are Gothic, adhering to the genre's conventions to appeal to the public taste. His most recurring themes deal with questions of death, including its physical signs, the effects of decomposition, concerns of premature burial, the reanimation of the dead, and mourning. Many of his works are generally considered part of the dark romanticism genre, a literary reaction to transcendentalism which Poe strongly disliked. He referred to followers of the transcendental movement as "Frog-Pondians", after the pond on Boston Common, and ridiculed their writings as "metaphor—run mad," lapsing into "obscurity for obscurity's sake" or "mysticism for mysticism's sake". Poe once wrote in a letter to Thomas Holley Chivers that he did not dislike transcendentalists, "only the pretenders and sophists among them". Beyond horror, Poe also wrote satires, humor tales, and hoaxes. For comic effect, he used irony and ludicrous extravagance, often in an attempt to liberate the reader from cultural conformity. "Metzengerstein" is the first story that Poe is known to have published and his first foray into horror, but it was originally intended as a burlesque satirizing the popular genre. Poe also reinvented science fiction, responding in his writing to emerging technologies such as hot air balloons in "The Balloon-Hoax". Poe wrote much of his work using themes aimed specifically at mass-market tastes. To that end, his fiction often included elements of popular pseudosciences, such as phrenology and physiognomy. Literary theory Poe's writing reflects his literary theories, which he presented in his criticism and also in essays such as "The Poetic Principle". He disliked didacticism and allegory, though he believed that meaning in literature should be an undercurrent just beneath the surface. Works with obvious meanings, he wrote, cease to be art. He believed that work of quality should be brief and focus on a specific single effect. To that end, he believed that the writer should carefully calculate every sentiment and idea. Poe describes his method in writing "The Raven" in the essay "The Philosophy of Composition", and he claims to have strictly followed this method. It has been questioned whether he really followed this system, however. T. S. 
Eliot said: "It is difficult for us to read that essay without reflecting that if Poe plotted out his poem with such calculation, he might have taken a little more pains over it: the result hardly does credit to the method." Biographer Joseph Wood Krutch described the essay as "a rather highly ingenious exercise in the art of rationalization". Legacy Influence During his lifetime, Poe was mostly recognized as a literary
deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes and integrated circuits, and associated passive interconnection technologies. Electrical phenomena have been studied since antiquity, though progress in theoretical understanding remained slow until the seventeenth and eighteenth centuries. The theory of electromagnetism was developed in the 19th century, and by the end of that century electricity was being put to industrial and residential use by electrical engineers. The rapid expansion in electrical technology at this time transformed industry and society, becoming a driving force for the Second Industrial Revolution. Electricity's extraordinary versatility means it can be put to an almost limitless set of applications which include transport, heating, lighting, communications, and computation. Electrical power is now the backbone of modern industrial society. History Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE referred to these fish as the "Thunderer of the Nile", and described them as the "protectors" of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by electric catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients suffering from ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them. Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing. Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artifact was electrical in nature. Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote De Magnete, in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the New Latin word electricus ("of amber" or "like amber", from ἤλεκτρον, elektron, the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646. Further work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay. Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky. 
A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature. He also explained the apparently paradoxical behavior of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges. In 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles. Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used. The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819–1820. Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827. Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his "On Physical Lines of Force" in 1861 and 1862. While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life. In 1887, Heinrich Hertz discovered that electrodes illuminated with ultraviolet light create electric sparks more easily. In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution. Einstein was awarded the Nobel Prize in Physics in 1921 for "his discovery of the law of the photoelectric effect". The photoelectric effect is also employed in photocells such as can be found in solar panels and this is frequently used to make electricity commercially. The first solid-state device was the "cat's-whisker detector" first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect. In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor. Solid-state electronics came into its own with the emergence of transistor technology. The first working transistor, a germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947, followed by the bipolar junction transistor in 1948. These early transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis. They were followed by the silicon-based MOSFET (metal-oxide-semiconductor field-effect transistor, or MOS transistor), invented by Mohamed M. 
Atalla and Dawon Kahng at Bell Labs in 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses, leading to the silicon revolution. Solid-state devices started becoming prevalent from the 1960s, with the transition from vacuum tubes to semiconductor diodes, transistors, integrated circuit (IC) chips, MOSFETs, and light-emitting diode (LED) technology. The most common electronic device is the MOSFET, which has become the most widely manufactured device in history. Common solid-state MOS devices include microprocessor chips and semiconductor memory. A special type of semiconductor memory is flash memory, which is used in USB flash drives and mobile devices, as well as solid-state drive (SSD) technology to replace mechanically rotating magnetic disc hard disk drive (HDD) technology. Concepts Electric charge The presence of charge gives rise to an electrostatic force: charges exert a force on each other, an effect that was known, though not understood, in antiquity. A lightweight ball suspended from a string can be charged by touching it with a glass rod that has itself been charged by rubbing with a cloth. If a similar ball is charged by the same glass rod, it is found to repel the first: the charge acts to force the two balls apart. Two balls that are charged with a rubbed amber rod also repel each other. However, if one ball is charged by the glass rod, and the other by an amber rod, the two balls are found to attract each other. These phenomena were investigated in the late eighteenth century by Charles-Augustin de Coulomb, who deduced that charge manifests itself in two opposing forms. This discovery led to the well-known axiom: like-charged objects repel and opposite-charged objects attract. The force acts on the charged particles themselves, hence charge has a tendency to spread itself as evenly as possible over a conducting surface. The magnitude of the electromagnetic force, whether attractive or repulsive, is given by Coulomb's law, which relates the force to the product of the charges and has an inverse-square relation to the distance between them. The electromagnetic force is very strong, second only in strength to the strong interaction, but unlike that force it operates over all distances. In comparison with the much weaker gravitational force, the electromagnetic force pushing two electrons apart is 10^42 times that of the gravitational attraction pulling them together. Charge originates from certain types of subatomic particles, the most familiar carriers of which are the electron and proton. Electric charge gives rise to and interacts with the electromagnetic force, one of the four fundamental forces of nature. Experiment has shown charge to be a conserved quantity, that is, the net charge within an electrically isolated system will always remain constant regardless of any changes taking place within that system. Within the system, charge may be transferred between bodies, either by direct contact, or by passing along a conducting material, such as a wire. The informal term static electricity refers to the net presence (or 'imbalance') of charge on a body, usually caused when dissimilar materials are rubbed together, transferring charge from one to the other. The charge on electrons and protons is opposite in sign, hence an amount of charge may be expressed as being either negative or positive. 
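The scale of that disparity can be checked with a few lines of arithmetic. The sketch below is an added illustration, not part of the original text; it uses commonly quoted values for the Coulomb constant, the gravitational constant, and the electron's charge and mass, and because both laws are inverse-square the chosen separation cancels out of the ratio.

```python
# Rough numerical check of the electromagnetic-to-gravitational force ratio
# for two electrons, using commonly quoted constants (approximate values).

K_E = 8.9875e9          # Coulomb constant, N*m^2/C^2
G = 6.6743e-11          # gravitational constant, N*m^2/kg^2
E_CHARGE = 1.6022e-19   # elementary charge, C
M_ELECTRON = 9.1094e-31 # electron mass, kg

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force between two point charges."""
    return K_E * abs(q1 * q2) / r**2

def gravitational_force(m1, m2, r):
    """Magnitude of the Newtonian gravitational force between two masses."""
    return G * m1 * m2 / r**2

r = 1e-10  # any separation works; it cancels in the ratio (both laws are inverse-square)
f_electric = coulomb_force(-E_CHARGE, -E_CHARGE, r)
f_gravity = gravitational_force(M_ELECTRON, M_ELECTRON, r)

print(f"electric/gravitational force ratio: {f_electric / f_gravity:.2e}")
# Prints roughly 4e42, i.e. on the order of 10^42 as stated above.
```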
By convention, the charge carried by electrons is deemed negative, and that by protons positive, a custom that originated with the work of Benjamin Franklin. The amount of charge is usually given the symbol Q and expressed in coulombs; each electron carries the same charge of approximately −1.6022×10^−19 coulomb. The proton has a charge that is equal and opposite, and thus +1.6022×10^−19 coulomb. Charge is possessed not just by matter, but also by antimatter, each antiparticle bearing an equal and opposite charge to its corresponding particle. Charge can be measured by a number of means, an early instrument being the gold-leaf electroscope, which although still in use for classroom demonstrations, has been superseded by the electronic electrometer. Electric current The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some things, electrical conductors, but will not flow through an electrical insulator. By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation. The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second, the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires. Current causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840. One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass. He had discovered electromagnetism, a fundamental interaction between electricity and magnetism. The level of electromagnetic emissions generated by electric arcing is high enough to produce electromagnetic interference, which can be detrimental to the workings of adjacent equipment. 
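To give a feel for these magnitudes, the following sketch is an added illustration; the wire cross-section and free-electron density are assumed, order-of-magnitude figures rather than values from the text. It counts the electrons passing a point each second in a one-ampere current and estimates their drift velocity in a copper wire.

```python
# Added illustration: carriers per second in a 1 A current, and an estimated
# drift velocity in a copper wire (wire size and carrier density are assumed).

E_CHARGE = 1.6022e-19   # elementary charge, in coulombs
CURRENT = 1.0           # current, in amperes (coulombs per second)

electrons_per_second = CURRENT / E_CHARGE
print(f"electrons passing per second at 1 A: {electrons_per_second:.2e}")  # ~6.2e18

# Drift velocity v = I / (n * q * A), with assumed figures for copper:
n_copper = 8.5e28       # free electrons per cubic metre (approximate)
area = 1e-6             # cross-sectional area in square metres (1 mm^2)
drift_velocity = CURRENT / (n_copper * E_CHARGE * area)
print(f"estimated drift velocity: {drift_velocity * 1000:.3f} mm/s")  # ~0.073 mm/s
```

The second figure illustrates the point made above: the charge carriers themselves creep along at a fraction of a millimetre per second, even though the signal travels near the speed of light.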
In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced by example from a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative. If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave. Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady state direct current, such as inductance and capacitance. These properties however can become important when circuitry is subjected to transients, such as when first energised. Electric field The concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance. However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker. An electric field generally varies in space, and its strength at any one point is defined as the force (per unit charge) that would be felt by a stationary, negligible charge if placed at that point. The conceptual charge, termed a 'test charge', must be vanishingly small to prevent its own electric field disturbing the main field and must also be stationary to prevent the effect of magnetic fields. As the electric field is defined in terms of force, and force is a vector, having both magnitude and direction, so it follows that an electric field is a vector field. The study of electric fields created by stationary charges is called electrostatics. The field may be visualised by a set of imaginary lines whose direction at any point is the same as that of the field. This concept was introduced by Faraday, whose term 'lines of force' still sometimes sees use. The field lines are the paths that a point positive charge would seek to make as it was forced to move within the field; they are however an imaginary concept with no physical existence, and the field permeates all the intervening space between the lines. Field lines emanating from stationary charges have several key properties: first, that they originate at positive charges and terminate at negative charges; second, that they must enter any good conductor at right angles, and third, that they may never cross nor close in on themselves. A hollow conducting body carries all its charge on its outer surface. The field is therefore zero at all places inside the body. 
This is the operating principle of the Faraday cage, a conducting metal shell which isolates its interior from outside electrical effects. The principles of electrostatics are important when designing
items of high-voltage equipment. There is a finite limit to the electric field strength that may be withstood by any medium. Beyond this point, electrical breakdown occurs and an electric arc causes flashover between the charged parts. Air, for example, tends to arc across small gaps at electric field strengths which exceed 30 kV per centimetre. 
Over larger gaps, its breakdown strength is weaker, perhaps 1 kV per centimetre. The most visible natural occurrence of this is lightning, caused when charge becomes separated in the clouds by rising columns of air, and raises the electric field in the air to greater than it can withstand. The voltage of a large lightning cloud may be as high as 100 MV and have discharge energies as great as 250 kWh. The field strength is greatly affected by nearby conducting objects, and it is particularly intense when it is forced to curve around sharply pointed objects. This principle is exploited in the lightning conductor, the sharp spike of which acts to encourage the lightning stroke to develop there, rather than to the building it serves to protect. Electric potential The concept of electric potential is closely linked to that of the electric field. A small charge placed within an electric field experiences a force, and to have brought that charge to that point against the force requires work. The electric potential at any point is defined as the energy required to bring a unit test charge from an infinite distance slowly to that point. It is usually measured in volts, and one volt is the potential for which one joule of work must be expended to bring a charge of one coulomb from infinity. This definition of potential, while formal, has little practical application, and a more useful concept is that of electric potential difference, which is the energy required to move a unit charge between two specified points. An electric field has the special property that it is conservative, which means that the path taken by the test charge is irrelevant: all paths between two specified points expend the same energy, and thus a unique value for potential difference may be stated. The volt is so strongly identified as the unit of choice for measurement and description of electric potential difference that the term voltage sees greater everyday usage. For practical purposes, it is useful to define a common reference point to which potentials may be expressed and compared. While this could be at infinity, a much more useful reference is the Earth itself, which is assumed to be at the same potential everywhere. This reference point naturally takes the name earth or ground. Earth is assumed to be an infinite source of equal amounts of positive and negative charge, and is therefore electrically uncharged—and unchargeable. Electric potential is a scalar quantity, that is, it has only magnitude and not direction. It may be viewed as analogous to height: just as a released object will fall through a difference in heights caused by a gravitational field, so a charge will 'fall' across the voltage caused by an electric field. As relief maps show contour lines marking points of equal height, a set of lines marking points of equal potential (known as equipotentials) may be drawn around an electrostatically charged object. The equipotentials cross all lines of force at right angles. They must also lie parallel to a conductor's surface, otherwise this would produce a force that will move the charge carriers to even the potential of the surface. The electric field was formally defined as the force exerted per unit charge, but the concept of potential allows for a more useful and equivalent definition: the electric field is the local gradient of the electric potential. 
Usually expressed in volts per metre, the vector direction of the field is the line of greatest slope of potential, and where the equipotentials lie closest together. Electromagnets Ørsted's discovery in 1820 that a magnetic field existed around all sides of a wire carrying an electric current indicated that there was a direct relationship between electricity and magnetism. Moreover, the interaction seemed different from gravitational and electrostatic forces, the two forces of nature then known. The force on the compass needle did not direct it to or away from the current-carrying wire, but acted at right angles to it. Ørsted's words were that "the electric conflict acts in a revolving manner." The force also depended on the direction of the current, for if the flow was reversed, then the force did too. Ørsted did not fully understand his discovery, but he observed the effect was reciprocal: a current exerts a force on a magnet, and a magnetic field exerts a force on a current. The phenomenon was further investigated by Ampère, who discovered that two parallel current-carrying wires exerted a force upon each other: two wires conducting currents in the same direction are attracted to each other, while wires containing currents in opposite directions are forced apart. The interaction is mediated by the magnetic field each current produces and forms the basis for the international definition of the ampere. This relationship between magnetic fields and currents is extremely important, for it led to Michael Faraday's invention of the electric motor in 1821. Faraday's homopolar motor consisted of a permanent magnet sitting in a pool of mercury. A current was allowed through a wire suspended from a pivot above the magnet and dipped into the mercury. The magnet exerted a tangential force on the wire, making it circle around the magnet for as long as the current was maintained. Experimentation by Faraday in 1831 revealed that a wire moving perpendicular to a magnetic field developed a potential difference between its ends. Further analysis of this process, known as electromagnetic induction, enabled him to state the principle, now known as Faraday's law of induction, that the potential difference induced in a closed circuit is proportional to the rate of change of magnetic flux through the loop. Exploitation of this discovery enabled him to invent the first electrical generator in 1831, in which he converted the mechanical energy of a rotating copper disc to electrical energy. Faraday's disc was inefficient and of no use as a practical generator, but it showed the possibility of generating electric power using magnetism, a possibility that would be taken up by those that followed on from his work. Electrochemistry The ability of chemical reactions to produce electricity, and conversely the ability of electricity to drive chemical reactions has a wide array of uses. Electrochemistry has always been an important part of electricity. From the initial invention of the Voltaic pile, electrochemical cells have evolved into the many different types of batteries, electroplating and electrolysis cells. Aluminium is produced in vast quantities this way, and many portable devices are electrically powered using rechargeable cells. Electric circuits An electric circuit is an interconnection of electric components such that electric charge is made to flow along a closed path (a circuit), usually to perform some useful task. 
The components in an electric circuit can take many forms, which can include elements such as resistors, capacitors, switches, transformers and electronics. Electronic circuits contain active components, usually semiconductors, and typically exhibit non-linear behaviour, requiring complex analysis. The simplest electric components are those that are termed passive and linear: while they may temporarily store energy, they contain no sources of it, and exhibit linear responses to stimuli. The resistor is perhaps the simplest of passive circuit elements: as its name suggests, it resists the current through it, dissipating its energy as heat. The resistance is a consequence of the motion of charge through a conductor: in metals, for example, resistance is primarily due to collisions between electrons and ions. Ohm's law is a basic law of circuit theory, stating that the current passing through a resistance is directly proportional to the potential difference across it. The resistance of most materials is relatively constant over a range of temperatures and currents; materials under these conditions are known as 'ohmic'. The ohm, the unit of resistance, was named in honour of Georg Ohm, and is symbolised by the Greek letter Ω. 1 Ω is the resistance that will produce a potential difference of one volt in response to a current of one amp. The capacitor is a development of the Leyden jar and is a device that can store charge, thereby storing electrical energy in the resulting field. It consists of two conducting plates separated by a thin insulating dielectric layer; in practice, thin metal foils are coiled together, increasing the surface area per unit volume and therefore the capacitance. The unit of capacitance is the farad, named after Michael Faraday, and given
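As a concrete illustration of these definitions, the short sketch below is an added example with arbitrary component values; it applies Ohm's law to a resistor and the elementary capacitor relations Q = CV and E = CV²/2.

```python
# Ohm's law and basic capacitor relations, using arbitrary example values.

def ohms_law_voltage(current_a, resistance_ohm):
    """V = I * R: potential difference across a resistor."""
    return current_a * resistance_ohm

def capacitor_charge(capacitance_f, voltage_v):
    """Q = C * V: charge stored on a capacitor."""
    return capacitance_f * voltage_v

def capacitor_energy(capacitance_f, voltage_v):
    """E = 0.5 * C * V^2: energy stored in the capacitor's electric field."""
    return 0.5 * capacitance_f * voltage_v**2

# A 2 A current through a 10 ohm resistor drops 20 V across it.
print(ohms_law_voltage(2.0, 10.0))       # 20.0 volts

# A 100 microfarad capacitor charged to 12 V.
print(capacitor_charge(100e-6, 12.0))    # 1.2e-3 coulombs
print(capacitor_energy(100e-6, 12.0))    # 7.2e-3 joules
```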
old editions of Empedocles, only about 100 lines were typically ascribed to his "Purifications", which was taken to be a poem about ritual purification, or the poem that contained all his religious and ethical thought. Early editors supposed that it was a poem that offered a mythical account of the world which may, nevertheless, have been part of Empedocles' philosophical system. According to Diogenes Laërtius it began with the following verses: Friends who inhabit the mighty town by tawny Acragas which crowns the citadel, caring for good deeds, greetings; I, an immortal God, no longer mortal, wander among you, honoured by all, adorned with holy diadems and blooming garlands. To whatever illustrious towns I go, I am praised by men and women, and accompanied by thousands, who thirst for deliverance, some ask for prophecies, and some entreat, for remedies against all kinds of disease. In the older editions, it is to this work that editors attributed the story about souls, where we are told that there were once spirits who lived in a state of bliss, but having committed a crime (the nature of which is unknown) they were punished by being forced to become mortal beings, reincarnated from body to body. Humans, animals, and even plants are such spirits. The moral conduct recommended in the poem may allow us to become like gods again. If, as is now widely held, this title "Purifications" refers to the poem "On Nature", or to a part of that poem, this story will have been at the beginning of the main work on nature and the cosmic cycle. The relevant verses are also sometimes attributed to the poem of "On Nature", even by those who think that there was a separate poem called "Purifications". On Nature There are about 450 lines of his poem "On Nature" extant, including 70 lines which have been reconstructed from some papyrus scraps known as the Strasbourg papyrus. The poem originally consisted of 2000 lines of hexameter verse, and was addressed to Pausanias. It was this poem which outlined his philosophical system. In it, Empedocles explains not only the nature and history of the universe, including his theory of the four classical elements, but he describes theories on causation, perception, and thought, as well as explanations of terrestrial phenomena and biological processes. Philosophy Although acquainted with the theories of the Eleatics and the Pythagoreans, Empedocles did not belong to any one definite school. An eclectic in his thinking, he combined much that had been suggested by Parmenides, Pythagoras and the Ionian schools. He was a firm believer in Orphic mysteries, as well as a scientific thinker and a precursor of physics. Aristotle mentions Empedocles among the Ionic philosophers, and he places him in very close relation to the atomist philosophers and to Anaxagoras. According to House (1956) Another of the fragments of the dialogue On the Poets (Aristotle) treats more fully what is said in Poetics ch. i about Empedocles, for though clearly implying that he was not a poet, Aristotle there says he is Homeric, and an artist in language, skilled in metaphor and in the other devices of poetry. Empedocles, like the Ionian philosophers and the atomists, continued the tradition of tragic thought which tried to find the basis of the relationship of the One and the Many. Each of the various philosophers, following Parmenides, derived from the Eleatics, the conviction that an existence could not pass into non-existence, and vice versa. 
Yet, each one had his peculiar way of describing this relation of Divine and mortal thought and thus of the relation of the One and the Many. In order to account for change in the world, in accordance with the ontological requirements of the Eleatics, they viewed changes as the result of mixture and separation of unalterable fundamental realities. Empedocles held that the four elements (Water, Air, Earth, and Fire) were those unchangeable fundamental realities, which were themselves transfigured into successive worlds by the powers of Love and Strife (Heraclitus had explicated the Logos or the "unity of opposites"). The four elements Empedocles established four ultimate elements which make all the structures in the world—fire, air, water, earth. Empedocles called these four elements "roots", which he also identified with the mythical names of Zeus, Hera, Nestis, and Aidoneus (e.g., "Now hear the fourfold roots of everything: enlivening Hera, Hades, shining Zeus. And Nestis, moistening mortal springs with tears"). Empedocles never used the term "element" (στοιχεῖον, stoicheion), which seems to have been first used by Plato. According to the different proportions in which these four indestructible and unchangeable elements are combined with each other the difference of the structure is produced. It is in the aggregation and segregation of elements thus arising, that Empedocles, like the atomists, found the real process which corresponds to what is popularly termed growth, increase or decrease. Nothing new comes or can come into being; the only change that can occur is a change in the juxtaposition of element with element. This theory of the four elements became the standard dogma for the next two thousand years. Love and Strife The four elements, however, are simple, eternal, and unalterable, and as change is the consequence of their mixture and separation, it was also necessary to suppose the existence of moving powers that bring about mixture and separation. The four elements are both eternally brought into union and parted from one another by two divine powers, Love and Strife (Philotes and Neikos). Love (Φιλότης) is responsible for the attraction of different forms of what we now call matter, and Strife (Νεῖκος) is the cause of their separation. If the four elements make up the universe, then Love and Strife explain their variation and harmony. Love and Strife are attractive and repulsive forces, respectively, which are plainly observable in human behavior, but also pervade the universe. The two forces wax and wane in their dominance, but neither force ever wholly escapes the imposition of the other. According to Burnet: "Empedokles sometimes gave an efficient power to Love and Strife, and sometimes put them on a level with the other four. The fragments leave no room for doubt that they were thought of as spatial and corporeal. ... Love is said to be "equal in length and breadth" to the others, and Strife is described as equal to each of them in weight (fr. 17). These physical speculations were part of a history of the universe which also dealt with the origin and development of life." The sphere of Empedocles As the best and original state, there was a time when the pure elements and the two powers co-existed in a condition of rest and inertness in the form of a sphere. The elements existed together in their purity, without mixture and separation, and the uniting power of Love predominated in the sphere: the separating power of Strife guarded the extreme edges of the sphere. 
Since that time, strife gained more sway and the bond which kept the pure elementary substances together in the sphere was dissolved. The elements became the world of phenomena we see today, full of contrasts and oppositions, operated on by both Love and Strife. The sphere of Empedocles being the embodiment of pure existence is the embodiment or representative of God. Empedocles assumed a cyclical universe whereby the elements return and prepare the formation of the sphere for the next period of the universe. Cosmogony Empedocles attempted to explain the separation of elements, the formation of earth and sea, of Sun and Moon, of atmosphere. He also dealt with the first origin of plants and animals, and with the physiology of humans. As the elements entered into combinations, there appeared strange results—heads without necks, arms without shoulders. Then as these fragmentary structures met, there were seen horned heads on human bodies, bodies of oxen with human heads, and figures of double sex. But most of these products of natural forces disappeared as suddenly as they arose; only in those rare cases where the parts were found to be adapted to each other did the complex structures last. Thus the organic universe sprang from spontaneous aggregations that suited each other as if this had been intended. Soon various influences reduced creatures of double sex to a male and a female, and the world was replenished with organic life. It is possible to see this theory as an anticipation of Charles Darwin's theory of natural selection, although Empedocles was not trying to explain evolution. Perception and knowledge Empedocles is credited with the first comprehensive theory of light and vision. Historian Will Durant noted that "Empedocles suggested that light takes time to pass from one point to another.". He put forward the idea that we see objects because light streams out of our eyes and touches them. While flawed, this became the fundamental basis on which later Greek philosophers and mathematicians like Euclid would construct some of the
most important theories of light, vision, and optics. 
Knowledge is explained by the principle that elements in the things outside us are perceived by the corresponding elements in ourselves. Like is known by like. The whole body is full of pores and hence respiration takes place over the whole frame. In the organs of sense these pores are specially adapted to receive the effluences which are continually rising from bodies around us; thus perception occurs. In vision, certain particles go forth from the eye to meet similar particles given forth from the object, and the resultant contact constitutes vision. Perception is not merely a passive reflection of external objects. Empedocles noted the limitation and narrowness of human perceptions. We see only a part but fancy that we have grasped the whole. But the senses cannot lead to truth; thought and reflection must look at the thing from every side. It is the business of a philosopher, while laying bare the fundamental difference of elements, to show the identity that exists between what seem unconnected parts of the universe. Respiration In a famous fragment, Empedocles attempted to explain the phenomenon of respiration by means of an elaborate analogy with the clepsydra, an ancient device for conveying liquids from one vessel to another. This fragment has sometimes been connected to a passage in Aristotle's Physics where Aristotle refers to people who twisted wineskins and captured air in clepsydras to demonstrate that void does not exist. There is, however, no evidence that Empedocles performed any experiment with clepsydras. The fragment certainly implies that Empedocles knew about the corporeality of air, but he says nothing whatever about the void. The clepsydra was a common utensil and everyone who used it must have known, in some sense, that the invisible air could resist liquid. Reincarnation Like Pythagoras, Empedocles believed in the transmigration of the soul or metempsychosis, that souls can be reincarnated between humans, animals and even plants. According to him, all humans, or maybe only a selected few among them, were originally long-lived daimons who dwelt in a state of bliss until committing an unspecified crime, possibly bloodshed or perjury. As a consequence, they fell to Earth, where they would be forced to spend 30,000 cycles of metempsychosis through different bodies before being able to return to the sphere of divinity. One's behavior during his lifetime would also determine his next incarnation. Wise people, who have learned the secret of life, are closer to the divine, and their souls are similarly closer to freedom from the cycle of reincarnations, after which they are able to rest in happiness for eternity. This cycle of mortal incarnation seems to have been inspired by the god Apollo's punishment as a servant to Admetus. Empedocles was a vegetarian and advocated vegetarianism, since the bodies of animals are also dwelling places
before Linnaean times, and simply been formalised when Linnaeus described Erica in 1753, and then again when Jussieu described the Ericaceae in 1789. Historically, the Ericaceae included both subfamilies and tribes. In 1971, Stevens, who outlined the history from 1876 and in some instances 1839, recognised six subfamilies (Rhododendroideae, Ericoideae, Vaccinioideae, Pyroloideae, Monotropoideae, and Wittsteinioideae), and further subdivided four of the subfamilies into tribes, the Rhododendroideae having seven tribes (Bejarieae, Rhodoreae, Cladothamneae, Epigaeae, Phyllodoceae, and Diplarcheae). Within tribe Rhodoreae, five genera were described, Rhododendron L. (including Azalea L. pro parte), Therorhodion Small, Ledum L., Tsusiophyllum Max., Menziesia J. E. Smith, that were eventually transferred into Rhododendron, along with Diplarche from the monogeneric tribe Diplarcheae. In 2002, systematic research resulted in the inclusion of the formerly recognised families Empetraceae, Epacridaceae, Monotropaceae, Prionotaceae, and Pyrolaceae into the Ericaceae based on a combination of molecular, morphological, anatomical, and embryological data, analysed within a phylogenetic framework. The move significantly increased the morphological and geographical range found within the group. One possible classification of the resulting family includes 9 subfamilies, 126 genera, and about 4000 species: Enkianthoideae Kron, Judd & Anderberg (one genus, 16 species) Pyroloideae Kosteltsky (4 genera, 40 species) Monotropoideae Arnott (10 genera, 15 species) Arbutoideae Niedenzu (up to six genera, about 80 species) Cassiopoideae Kron & Judd (one genus, 12 species) Ericoideae Link (19 genera, 1790 species) Harrimanelloideae Kron & Judd (one species) Styphelioideae Sweet (35 genera, 545 species) Vaccinioideae Arnott (50 genera, 1580 species) Genera See the full list at List of Ericaceae genera. Distribution and ecology The Ericaceae have a nearly worldwide distribution. They are absent from continental Antarctica, parts of the high Arctic, central Greenland, northern and central Australia, and much of the lowland tropics and neotropics. The family is largely composed of plants that can tolerate acidic, infertile conditions. Like other stress-tolerant plants, many Ericaceae have mycorrhizal fungi to assist with extracting nutrients from infertile soils, as well as evergreen foliage to conserve absorbed nutrients. This trait is not found in the Clethraceae and Cyrillaceae, the two families most closely related to the Ericaceae. Most Ericaceae (excluding the Monotropoideae, and some Styphelioideae) form a distinctive accumulation of mycorrhizae, in which fungi grow in and around the roots and provide the plant with nutrients. The Pyroloideae are mixotrophic
and gain sugars from the mycorrhizae, as well as nutrients. In many parts of the world, a "heath" or "heathland" is an environment characterised by an open dwarf-shrub community found on low-quality acidic soils, generally dominated by plants in the Ericaceae. A common example is Erica tetralix. This plant family is also typical of peat bogs and blanket bogs; examples include Rhododendron groenlandicum and Kalmia polifolia. In eastern North America, members of this family often grow in association with an oak canopy, in a habitat known as an oak-heath forest. In heathland, plants in the family Ericaceae serve as hostplants to the butterfly Plebejus argus. Some evidence suggests eutrophic rainwater can convert ericoid heaths with species such as Erica tetralix to grasslands. Nitrogen is particularly suspect in this regard, and may be causing measurable changes to the distribution and abundance of some ericaceous species.
response, AC response, and transient response. A resistive circuit is a circuit containing only resistors and ideal current and voltage sources. Analysis of resistive circuits is less complicated than analysis of circuits containing capacitors and inductors. If the sources are constant (DC) sources, the result is a DC circuit. The effective resistance and current distribution properties of arbitrary resistor networks can be modeled in terms of their graph measures and geometrical properties. A network that contains active electronic components is known as an electronic circuit. Such networks are generally nonlinear and require more complex design and analysis tools. Classification By passivity An active network contains at least one voltage source or current source that can supply energy to the network indefinitely. A passive network does not contain an active source. An active network contains one or more sources of electromotive force. Practical examples of such sources include a battery or a generator. Active elements can inject power to the circuit, provide power gain, and control the current flow within the circuit. Passive networks do not contain any sources of electromotive force. They consist of passive elements like resistors and capacitors. By linearity A network is linear if its signals obey the principle of superposition; otherwise it is non-linear. Passive networks are generally taken to be linear, but there are exceptions. For instance, an inductor with an iron core can be driven into saturation if driven with a large enough current. In this region, the behaviour of the inductor is very non-linear. By lumpiness Discrete passive components (resistors, capacitors and inductors) are called lumped elements because all of their, respectively, resistance, capacitance and inductance is assumed to be located ("lumped") at one place. This design philosophy is called the lumped-element model and networks so designed are called lumped-element circuits. This is the conventional approach to circuit design. At high enough frequencies, or for long enough circuits (such as power transmission lines), the lumped assumption no longer holds because there is a significant fraction of a wavelength across the component dimensions. A new design model is needed for such cases called the distributed-element model. Networks designed to this model are called distributed-element circuits. A distributed-element circuit that includes some lumped components is called a semi-lumped design. An example of a semi-lumped circuit is the combline filter. Classification of sources Sources can be classified as independent sources and dependent sources. Independent An ideal independent source maintains the same voltage or current regardless of the other elements present in the circuit. Its value is either constant (DC) or sinusoidal (AC). The strength of voltage or current is not changed by any variation in the connected network. Dependent Dependent sources depend upon a particular element of the circuit for delivering the power or voltage or current depending upon the type of source it is. Applying electrical laws A number of electrical laws apply to all linear resistive networks. These include: Kirchhoff's current law: The sum of all currents entering a node is equal to the sum of all currents leaving the node. Kirchhoff's voltage law: The directed sum of the electrical potential differences around a loop must be zero. Ohm's law: The voltage across a
resistor is equal to the product of the resistance and the current flowing through it. Norton's theorem: Any network of voltage or current sources and resistors is electrically equivalent to an ideal current source in parallel with a single resistor. Thévenin's theorem: Any network of voltage or current sources and resistors is electrically equivalent to a single voltage source in series with a single resistor. Superposition theorem: In a linear network with several independent sources, the response in a particular branch when all the sources are acting simultaneously is equal to the linear sum of individual responses calculated by taking one independent source at a time. Applying these laws results in a set of simultaneous equations that can be solved either algebraically or numerically. The laws can generally be extended to networks containing reactances. They cannot be used in networks that contain nonlinear or time-varying components. Design methods To design any electrical circuit, either analog or digital, electrical engineers need to be able to predict the voltages and currents at all places within the circuit. Simple linear circuits can be analyzed by hand using complex number theory. In more complex cases the circuit may be analyzed with specialized computer programs or estimation techniques such as the piecewise-linear model. Circuit simulation software, such as HSPICE (an analog circuit simulator), and languages such as VHDL-AMS and Verilog-AMS allow engineers to design circuits without the time, cost and risk of error involved in building circuit prototypes.
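To make the procedure concrete, the following minimal sketch applies Kirchhoff's current law together with Ohm's law (nodal analysis) to a small resistive network and solves the resulting simultaneous equations numerically. The circuit, its component values, and the node labels are invented purely for illustration and are not taken from any particular source.

```python
import numpy as np

# Hypothetical circuit: a 10 V source feeds node 1 through R1;
# R2 joins node 1 to node 2; R3 joins node 2 to ground.
V_SRC, R1, R2, R3 = 10.0, 1000.0, 2000.0, 3000.0  # volts and ohms

# Kirchhoff's current law at each node, with Ohm's law for the branch
# currents, gives the linear system G @ v = i, where G is the conductance
# matrix and i is the vector of source currents injected into each node.
G = np.array([
    [1/R1 + 1/R2, -1/R2],
    [-1/R2,        1/R2 + 1/R3],
])
i = np.array([V_SRC / R1, 0.0])  # Norton equivalent of the source at node 1

v = np.linalg.solve(G, i)        # node voltages relative to ground
print(v)                         # approximately [8.33, 5.0] volts
```

Superposition and the Thévenin or Norton equivalents can be checked against the same matrix solution; large-scale versions of this nodal formulation are what circuit simulators carry out internally.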
of challenging mathematical/computer programming problems Other uses Euler (surname) Euler Hermes, a global credit insurance company EULAR, European rheumatology organization Euler jump, an edge jump in figure skating Euler (crater), a lunar impact crater in the southern half of the Mare Imbrium See also List of things named after
Leonhard Euler Oiler
be considered a derangement of itself, because it has only one permutation (the empty permutation), and it is vacuously true that no element (of the empty set) can be found that retains its original position. In other areas of mathematics Extended real numbers Since the empty set has no member when it is considered as a subset of any ordered set, every member of that set will be an upper bound and lower bound for the empty set. For example, when considered as a subset of the real numbers, with its usual ordering, represented by the real number line, every real number is both an upper and lower bound for the empty set. When considered as a subset of the extended reals formed by adding two "numbers" or "points" to the real numbers (namely negative infinity, denoted −∞, which is defined to be less than every other extended real number, and positive infinity, denoted +∞, which is defined to be greater than every other extended real number), we have that sup ∅ = −∞ and inf ∅ = +∞. That is, the least upper bound (sup or supremum) of the empty set is negative infinity, while the greatest lower bound (inf or infimum) is positive infinity. By analogy with the above, in the domain of the extended reals, negative infinity is the identity element for the maximum and supremum operators, while positive infinity is the identity element for the minimum and infimum operators. Topology In any topological space X, the empty set is open by definition, as is X. Since the complement of an open set is closed and the empty set and X are complements of each other, the empty set is also closed, making it a clopen set. Moreover, the empty set is compact by the fact that every finite set is compact. The closure of the empty set is empty. This is known as "preservation of nullary unions." Category theory If A is a set, then there exists precisely one function from the empty set to A, the empty function. As a result, the empty set is the unique initial object of the category of sets and functions. The empty set can be turned into a topological space, called the empty space, in just one way: by defining the empty set to be open. This empty topological space is the unique initial object in the category of topological spaces with continuous maps. In fact, it is a strict initial object: only the empty set has a function to the empty set. Set theory In the von Neumann construction of the ordinals, 0 is defined as the empty set, and the successor of an ordinal α is defined as α ∪ {α}. Thus, we have 0 = ∅, 1 = {∅}, 2 = {∅, {∅}}, and so on. The von Neumann construction, along with the axiom of infinity, which guarantees the existence of at least one infinite set, can be used to construct the set of natural numbers, ℕ, such that the Peano axioms of arithmetic are satisfied. Questioned existence Axiomatic set theory In Zermelo set theory, the existence of the empty set is assured by the axiom of empty set, and its uniqueness follows from the axiom of extensionality. However, the axiom of empty set can be shown redundant in at least two ways: Standard first-order logic implies, merely from the logical axioms, that something exists, and in the language of set theory, that thing must be a set. Now the existence of the empty set follows easily from the axiom of separation. Even using free logic (which does not logically imply that something exists), there is already an axiom implying the existence of at least one set, namely the axiom of infinity.
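As a small illustration of the identity-element remark above, the following Python snippet (not part of the original article) uses negative infinity as the starting value of a running maximum and positive infinity as the starting value of a running minimum; folding an empty collection then returns exactly the supremum and infimum stated for the empty set.

```python
from functools import reduce
import math

data = []  # the "empty set" of real numbers, represented as an empty list

# -inf is the identity for max and +inf the identity for min, so folding an
# empty collection returns them directly: sup(empty) = -inf, inf(empty) = +inf.
sup = reduce(max, data, -math.inf)
inf = reduce(min, data, math.inf)
print(sup, inf)  # -inf inf

# Python's built-ins expose the same convention via the `default` argument.
assert max(data, default=-math.inf) == -math.inf
assert min(data, default=math.inf) == math.inf
```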
Philosophical issues While the empty set is a standard and widely accepted mathematical concept, it remains an ontological curiosity, whose meaning and usefulness are debated by philosophers and logicians. The empty set is not the
same thing as nothing; rather, it is a set with nothing inside it, and a set is always something. This issue can be overcome by viewing a set as a bag: an empty bag undoubtedly still exists.
Darling (2004) explains that the empty set is not nothing, but rather "the set of all triangles with four sides, the set of all numbers that are bigger than nine but smaller than eight, and the set of all opening moves in chess that involve a king." The popular syllogism Nothing is better than eternal happiness; a ham sandwich is better than nothing; therefore, a ham sandwich is better than eternal happiness is often used to demonstrate the philosophical relation between the concept of nothing and the empty set. Darling writes that the contrast can be seen by rewriting the statements "Nothing is better than eternal happiness" and "[A] ham sandwich is better than nothing" in a mathematical tone. According to Darling, the former is equivalent to "The set of all things that are better than eternal happiness is ∅" and the latter to "The set {ham sandwich} is better than the set ∅". The first compares elements of sets, while the second compares the sets themselves. Jonathan Lowe argues that while the empty set "was undoubtedly an important landmark in the history of mathematics, … we should not assume that its utility in calculation is dependent upon its actually denoting some object", it is also the case that: "All that we are ever informed about the empty set is that it (1) is a set, (2) has no members, and (3) is unique amongst sets in having no members. However, there are very many things that 'have no members', in the set-theoretical sense—namely, all non-sets. It is perfectly clear why these things have no members, for they are
posit that this is a truer sense of egoism. The New Catholic Encyclopedia states of egoism that it "incorporates in itself certain basic truths: it is natural for man to love himself; he should moreover do so, since each one is ultimately responsible for himself; pleasure, the development of one's potentialities, and the acquisition of power are normally desirable." The moral censure of self-interest is a common subject of critique in egoist philosophy, with such judgments being examined as means of control and the result of power relations. Egoism may also reject that insight into one's internal motivation can arrive extrinsically, such as from psychology or sociology, though, for example, this is not present in the philosophy of Friedrich Nietzsche. Etymology The term egoism is derived from the French égoïsme, from the Latin ego (the first-person singular personal pronoun, "I") with the French suffix -isme ("-ism"). Descriptive theories The descriptive variants of egoism are concerned with self-regard as a factual description of human motivation and, in its furthest application, that all human motivation stems from the desires and interest of the ego. In these theories, action which is self-regarding may be simply termed egoistic. The position that people tend to act in their own self-interest is called default egoism, whereas psychological egoism is the position that all motivations are rooted in an ultimately self-serving psyche. That is, in its strong form, that even seemingly altruistic actions are only disguised as such and are always self-serving. Its weaker form instead holds that, even if altruistic motivation is possible, the willed action necessarily becomes egoistic in serving one's own will. In contrast to this and philosophical egoism, biological egoism (also called evolutionary egoism) describes motivations rooted solely in reproductive self-interest (i.e. reproductive fitness). Furthermore, selfish gene theory holds that it is the self-interest of genetic information that conditions human behaviour. In moral psychology In his On the Genealogy of Morals, Friedrich Nietzsche traces the origins of master–slave morality to fundamentally egoistic value judgments. In the aristocratic valuation, excellence and virtue come as a form of superiority over the common masses, which the priestly valuation, in ressentiment of power, seeks to invert—where the powerless and pitiable become the moral ideal. This upholding of unegoistic actions is therefore seen as stemming from a desire to reject the superiority or excellency of others. He holds that all normative systems which operate in the role often associated with morality favor the interests of some people, often, though not necessarily, at the expense of others. Normative theories Theories which hold egoism to be normative stipulate that the ego ought to promote its own interests above other values. Where this ought is held to be a pragmatic judgment it is termed rational egoism and where it is held to be a moral judgment it is termed ethical egoism. The Stanford Encyclopedia of Philosophy states that "ethical egoism might also apply to things other than acts, such as rules or character traits" but that such variants are uncommon. Furthermore, conditional egoism is a consequentialist form of ethical egoism which holds that egoism is morally right if it leads to morally acceptable ends. John F.
Welsh, in his work Max Stirner's Dialectical Egoism: A New Interpretation, coins the term dialectical egoism to describe an interpretation of the egoist philosophy of Max Stirner as being fundamentally dialectical. Normative egoism, as in the case of Stirner, need not reject that some modes of behavior are to be valued above others—such as Stirner's affirmation that non-restriction and autonomy are to be most highly valued. Contrary theories, however, may just as easily favour egoistic domination of others. Relations with altruism In 1851, French philosopher Auguste Comte coined the term altruism as an antonym for egoism. In this sense, altruism defined Comte's position that all self-regard must be replaced with only the regard for others. While Friedrich Nietzsche does not view altruism as a suitable antonym for egoism, Comte instead states that only two human
own reproductive fitness. While biological egoism does grant that an organism may act to the benefit of others, it describes only such when in accordance with reproductive self-interest. Kin altruism and selfish gene theory are examples of this division. On biological altruism, the Stanford Encyclopedia of Philosophy states: "Contrary to what is often thought, an evolutionary approach to human behaviour does not imply that humans are likely to be motivated by self-interest alone. One strategy by which ‘selfish genes’ may increase their future representation is by causing humans to be non-selfish, in the psychological sense." This is a central topic within contemporary discourse of psychological egoism. Relations with nihilism The history of egoist thought has often overlapped with that of nihilism. For example, Max Stirner's rejection of absolutes and abstract concepts often places him among the first philosophical nihilists. The popular description of Stirner as a moral nihilist, however, may fail to encapsulate certain subtleties of his ethical thought. The Stanford Encyclopedia of Philosophy states, "Stirner is clearly committed to the non-nihilistic view that certain kinds of character and modes of behaviour (namely autonomous individuals and actions) are to be valued above all others. His conception of morality is, in this respect, a narrow one, and his rejection of the legitimacy of moral claims is not to be confused with a denial of the propriety of all normative or ethical judgement." Stirner's nihilism may instead be understood as cosmic nihilism. Likewise, both normative and descriptive theories of egoism further developed under Russian nihilism, shortly giving birth to rational egoism. Nihilist philosophers Dmitry Pisarev and Nikolay Chernyshevsky were influential in this regard, compounding such forms of egoism with hard determinism. Nietzsche and egoism The terms nihilism and anti-nihilism have both been used to categorise the philosophy of Friedrich Nietzsche. His thought has similarly been linked to forms of both descriptive and normative egoism. Nietzsche, in attacking the widely held moral abhorrence for egoistic action, seeks to free higher human beings from their belief that this morality is good for them. He rejects Christian and Kantian ethics as merely the disguised egoism of slave morality. Postmodernity and egoism Max Stirner's philosophy strongly rejects modernity and is highly critical of the increasing dogmatism and oppressive social institutions that embody it. In order that it might be surpassed, egoist principles are upheld as a necessary advancement beyond the modern world. The Stanford Encyclopedia states that Stirner's historical analyses serve to "undermine historical narratives which portray the modern development of humankind as the progressive realisation of freedom, but also to support an account of individuals in the modern world as increasingly oppressed". This critique of humanist discourses especially has linked Stirner to more contemporary poststructuralist thought. Relations with political theory Since normative egoism rejects the moral obligation to subordinate the ego to a ruling class, it is predisposed to certain political implications. The Internet Encyclopedia of Philosophy states: In contrast with this however, such an ethic may not morally obligate against the egoistic exercise of power over others. On these grounds, Friedrich Nietzsche criticizes egalitarian morality and political projects as unconducive to the development of
structure known as a near-ring. Every ring with one is the endomorphism ring of its regular module, and so is a subring of an endomorphism ring of an abelian group; however there are rings that are not the endomorphism ring of any abelian group. Operator theory In any concrete category, especially for vector spaces, endomorphisms are maps from a set into itself, and may be interpreted as unary operators on that set, acting on the elements, and allowing to define the notion of orbits of elements, etc. Depending on the additional structure defined for the category at hand (topology, metric, ...), such operators can have properties like continuity, boundedness, and so on. More details should be found in the article about operator theory. Endofunctions An endofunction is a function
whose domain is equal to its codomain. A homomorphic endofunction is an endomorphism. Let S be an arbitrary set. Among endofunctions on S one finds permutations of S and constant functions associating to every x in S the same element c in S. Every permutation of S has the codomain equal to its domain and is bijective and invertible. If S has more than one element, a constant function on S has an image that is a proper subset of its codomain, and thus is not bijective (and hence not invertible). The function associating to each natural number n the floor of n/2 has its image equal to its codomain and is not invertible. Finite endofunctions are equivalent to directed pseudoforests. For sets of size n there are nⁿ endofunctions on the set.
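The short Python sketch below (an illustration added here, using an arbitrarily chosen four-element set) mirrors these examples: a permutation of S is bijective and therefore invertible, a constant function is not, and the floor-halving function fails to be injective; listing the pairs x → f(x) gives the functional-graph (directed pseudoforest) view of a finite endofunction.

```python
S = {0, 1, 2, 3}  # an arbitrary finite set, chosen only for illustration

identity = {x: x for x in S}       # a permutation of S: bijective, invertible
constant = {x: 1 for x in S}       # constant endofunction: image {1} is a proper subset of S
halve    = {x: x // 2 for x in S}  # x -> floor(x/2): not injective, hence not invertible

def is_permutation(f):
    # A finite endofunction is invertible exactly when its image is all of S.
    return set(f.values()) == S

for name, f in [("identity", identity), ("constant", constant), ("halve", halve)]:
    edges = [(x, f[x]) for x in sorted(S)]  # functional graph: a directed pseudoforest
    print(f"{name}: permutation={is_permutation(f)}, edges={edges}")
```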
about forty, when his name appeared in a census. Books and opinions The True Believer Hoffer came to public attention with the 1951 publication of his first book, The True Believer: Thoughts on the Nature of Mass Movements, which consists of a preface and 125 sections, which are divided into 18 chapters. Hoffer analyzes the phenomenon of "mass movements," a general term that he applies to revolutionary parties, nationalistic movements, and religious movements. He summarizes his thesis in §113: "A movement is pioneered by men of words, materialized by fanatics and consolidated by men of actions." Hoffer argues that fanatical and extremist cultural movements, whether religious, social, or national, arise when large numbers of frustrated people, believing their own individual lives to be worthless or spoiled, join a movement demanding radical change. But the real attraction for this population is an escape from the self, not a realization of individual hopes: "A mass movement attracts and holds a following not because it can satisfy the desire for self-advancement, but because it can satisfy the passion for self-renunciation." Hoffer consequently argues that the appeal of mass movements is interchangeable: in the Germany of the 1920s and the 1930s, for example, the Communists and National Socialists were ostensibly enemies, but sometimes enlisted each other's members, since they competed for the same kind of marginalized, angry, frustrated people. For the "true believer," Hoffer argues that particular beliefs are less important than escaping from the burden of the autonomous self. Harvard historian Arthur M. Schlesinger Jr. said of The True Believer: "This brilliant and original inquiry into the nature of mass movements is a genuine contribution to our social thought." Later works Subsequent to the publication of The True Believer (1951), Eric Hoffer touched upon Asia and American interventionism in several of his essays. In "The Awakening of Asia" (1954), published in The Reporter and later his book The Ordeal of Change (1963), Hoffer discusses the reasons for unrest on the continent. In particular, he argues that the root cause of social discontent in Asia was not government corruption, "communist agitation," or the legacy of European colonial "oppression and exploitation," but rather that a "craving for pride" was the central problem in Asia, suggesting a problem that could not be relieved through typical American intervention. For centuries, Hoffer notes, Asia had "submitted to one conqueror after another." Throughout these centuries, Asia had "been misruled, looted, and bled by both foreign and native oppressors without" so much as "a peep" from the general population. Though not without negative effect, corrupt governments and the legacy of European imperialism represented nothing new under the sun. Indeed, the European colonial authorities had been "fairly beneficent" in Asia. To be sure, Communism exerted an appeal of sorts. For the Asian "pseudo-intellectual," it promised elite status and the phony complexities of "doctrinaire double talk." For the ordinary Asian, it promised partnership with the seemingly emergent Soviet Union in a "tremendous, unprecedented undertaking" to build a better tomorrow. According to Hoffer, however, Communism in Asia was dwarfed by the desire for pride. To satisfy such desire, Asians would willingly and irrationally sacrifice their economic well-being and their lives as well. 
Unintentionally, the West had created this appetite, causing "revolutionary unrest" in Asia. The West had done so by eroding the traditional communal bonds that once had woven the individual to the patriarchal family, clan, tribe, "cohesive rural or urban unit," and "religious or political body." Without the security and spiritual meaning produced by such bonds, Asians had been liberated from tradition only to find themselves now atomized, isolated, exposed, and abandoned, "left orphaned and empty in a cold world." Certainly, Europe had undergone a similar destruction of tradition, but it had occurred centuries earlier at the end of the medieval period and produced better results thanks to different circumstances. For the Asians of the 1950s, the circumstances differed markedly. Most were illiterate and impoverished, living in a world that included no expansive physical or intellectual vistas. Dangerously, the "articulate minority" of the Asian population inevitably disconnected themselves from the ordinary people, thereby failing to acquire "a sense of usefulness and of worth" that came by "taking part in the world's work." As a result, they were "condemned to the life of chattering posturing pseudo-intellectuals" and coveted "the illusion of weight and importance." Most significantly, Hoffer asserts that the disruptive awakening of Asia came about as a result of an unbearable sense of weakness. Indeed, Hoffer discusses the problem of weakness, asserting that while "power corrupts the few... weakness corrupts the many." Hoffer notes that "the resentment of the weak does not spring from any injustice done them but from the sense of their inadequacy and impotence." In short, the weak "hate not wickedness" but themselves for being weak. Consequently, self-loathing produces explosive effects that cannot be mitigated through social engineering schemes, such as programs of wealth redistribution. In fact, American "generosity" is counterproductive, perceived in Asia simply as an example of Western "oppression." In the wake of the Korean War, Hoffer does not recommend exporting at gunpoint either American political institutions or mass democracy. In fact, Hoffer advances the possibility that winning over the multitudes of Asia may not even be desirable. If on the other hand, necessity truly dictates that for "survival" the United States must persuade the "weak" of Asia to "our side," Hoffer suggests the wisest course of action would be to master "the art or technique of sharing hope, pride, and as a last resort, hatred with others." During the Vietnam War, despite his objections to the antiwar movement and acceptance of the notion that the war was somehow necessary to prevent a third world war, Hoffer remained skeptical concerning American interventionism, specifically the intelligence with which the war was being conducted in Southeast Asia. After the United States became involved in the war, Hoffer wished to avoid defeat in Vietnam because of his fear that such a defeat would transform American society for ill, opening the door to those who would preach a stab-in-the-back myth and allow for the rise of an American version of Hitler. In The Temper of Our Time (1967), Hoffer implies that the United States as a rule should avoid interventions in the first place: "the better part of statesmanship might be to know clearly and precisely what not to do, and leave action to the improvisation of chance." 
In fact, Hoffer indicates that "it might be wise to wait for enemies to defeat themselves," as they might fall upon each other with the United States out of the picture. The view was somewhat borne out with the Cambodian-Vietnamese War and Chinese-Vietnamese War of the late 1970s. In May 1968, about a year after the Six-Day War, he wrote an article for the Los Angeles Times titled "Israel's Peculiar Position:" Hoffer asks why "everyone expects the Jews to be the only real Christians in this world" and why Israel should sue for peace after its victory. Hoffer believed that rapid change is not necessarily a positive thing for a society and that too rapid
odd jobs. In 1931, he considered suicide by drinking a solution of oxalic acid, but he could not bring himself to do it. He left Skid Row and became a migrant worker, following the harvests in California. He acquired a library card where he worked, dividing his time "between the books and the brothels." He also prospected for gold in the mountains. Snowed in for the winter, he read the Essays by Michel de Montaigne. Montaigne impressed Hoffer deeply, and Hoffer often made reference to him. He also developed a respect for America's underclass, which he said was "lumpy with talent." Career He wrote a novel, Four Years in Young Hank's Life, and a novella, Chance and Mr. Kunze, both partly autobiographical. He also penned a long article based on his experiences in a federal work camp, "Tramps and Pioneers." It was never published, but a truncated version appeared in Harper's Magazine after he became well known. Hoffer tried to enlist in the US Army at age 40 during World War II, but he was rejected due to a hernia. Instead, he began work as a longshoreman on the docks of San Francisco in 1943. At the same time, he began to write seriously. Hoffer left the docks in 1964, and shortly after became an adjunct professor at the University of California, Berkeley. He later retired from public life in 1970. “I'm going to crawl back into my hole where I started,” he said. “I don't want to be a public person or anybody's spokesman... Any man can ride a train. Only a wise man knows when to get off.” In 1970, he endowed the Lili Fabilli and Eric Hoffer Laconic Essay Prize for students, faculty, and staff at the University of California, Berkeley. Hoffer called himself an atheist but had sympathetic views of religion and described it as a positive force. He died at his home in San Francisco in 1983 at the age of 80. Working-class roots Hoffer was influenced by his modest roots and working-class surroundings, seeing in it vast human potential. In a letter to Margaret Anderson in 1941, he wrote: He once remarked, "my writing grows out of my life just as a branch from a tree." When he was called an intellectual, he insisted that he simply was a longshoreman. Hoffer has been dubbed by some authors a "longshoreman philosopher." Personal life Hoffer, who was an only child, never married. He fathered a child with Lili Fabilli Osborne, named Eric Osborne, who was born in 1955 and raised by Lili Osborne and her husband, Selden Osborne. Lili Fabilli Osborne had become acquainted with Hoffer through her husband, a fellow longshoreman and acquaintance of Hoffer's. Despite the affair and Lili Osborne later co-habitating with Hoffer, Selden Osborne and Hoffer remained on good terms. Hoffer referred to Eric Osborne as his son or godson. Lili Fabilli Osborne died in 2010 at the age of 93. Prior to her death, Osborne was the executor of Hoffer's estate, and vigorously controlled the rights to his intellectual property. In his 2012 book Eric Hoffer: The Longshoreman Philosopher, journalist Tom Bethell revealed doubts about Hoffer's account of his early life. Although Hoffer claimed his parents were from Alsace-Lorraine, Hoffer himself spoke with a pronounced Bavarian accent. He claimed to have been born and raised in the Bronx but had no Bronx accent. His lover and executor Lili Fabilli stated that she always thought Hoffer was an immigrant. Her son, Eric Fabilli, said that Hoffer's life might have been comparable to that of B. 
Traven and considered hiring a genealogist to investigate Hoffer's early life, to which Hoffer reportedly replied, "Are you sure you want to know?" Pescadero land-owner Joe Gladstone, a family friend of the Fabilli's who also knew Hoffer, said of Hoffer's account of his early life: "I don't believe a word of it." To this day, no one ever has claimed to have known Hoffer in his youth, and no records apparently exist of his parents, nor indeed of Hoffer himself until he was about forty, when his name appeared in a census.
Minister, Schuman was instrumental in turning French policy away from the Gaullist objective of permanent occupation or control of parts of German territory such as the Ruhr or the Saar. Despite stiff ultra-nationalist, Gaullist and communist opposition, the French Assembly voted a number of resolutions in favour of his new policy of integrating Germany into a community. The International Authority for the Ruhr changed in consequence. Schuman declaration The Schuman Declaration was intended to prevent further war between France and Germany and other states by tackling the root cause of war. The ECSC was primarily conceived with France and Germany in mind: "The coming together of the nations of Europe requires the elimination of the age-old opposition of France and Germany. Any action taken must in the first place concern these two countries." The coal and steel industries being essential for the production of munitions, Schuman believed that by uniting these two industries across France and Germany under an innovative supranational system that also included a European anti-cartel agency, he could "make war not only unthinkable but materially impossible". Negotiations Following the Schuman Declaration in May 1950, negotiations on what became the Treaty of Paris (1951) began on 20 June 1950. The objective of the treaty was to create a single market in the coal and steel industries of the member states. Customs duties, subsidies, discriminatory and restrictive practices were all to be abolished. The single market was to be supervised by a High Authority, with powers to handle extreme shortages of supply or demand, to tax, and to prepare production forecasts as guidelines for investment. A key issue in the negotiations for the treaty was the break-up of the excessive concentrations in the coal and steel industries of the Ruhr, where the Konzerne, or trusts, had underlain the military power of the former Reich. The Germans regarded the concentration of coal and steel as one of the bases of their economic efficiency, and a right. The steel barons were a formidable lobby because they embodied a national tradition. The US was not officially part of the treaty negotiations, but it was a major force behind the scenes. The US High Commissioner for Occupied Germany, John McCloy, was an advocate of decartelization and his chief advisor in Germany was a Harvard anti-trust lawyer, Robert Bowie. Bowie was asked to draft anti-trust articles, and texts of the two articles he prepared (on cartels and the abuse of monopoly power) became the basis of the treaty's competition policy regime. Also, Raymond Vernon (of later fame for his studies on industrial policy at Harvard university) was passing every clause of successive drafts of the treaty under his microscope down in the bowels of the State Department. He stressed the importance of the freedom of the projected common market from restrictive practices. The Americans insisted that the German coal sales monopoly, the Deutscher Kohlenverkauf (DKV), should lose its monopoly, and that the steel industries should no longer own the coalmines. It was agreed that the DKV would be broken up into four independent sales agencies. The steel firm Vereinigte Stahlwerke was to be divided into thirteen firms, and Krupp into two. 
Ten years after the Schuman negotiations, a US State Department official noted that while the articles as finally agreed were more qualified than American officials in touch with the negotiations would have wished, they were "almost revolutionary" in terms of the traditional European approach to these basic industries. Political pressures and treaty ratification In West Germany, Karl Arnold, the Minister President of North Rhine-Westphalia, the state that included the coal and steel producing Ruhr, was initially spokesman for German foreign affairs. He gave a number of speeches and broadcasts on a supranational coal and steel community at the same time as Robert Schuman began to propose this Community in 1948 and 1949. The Social Democratic Party of Germany (, SPD), in spite of support from unions and other socialists in Europe, decided it would oppose the Schuman plan. Kurt Schumacher's personal distrust of France, capitalism, and Konrad Adenauer aside, he claimed that a focus on integrating with a "Little Europe of the Six" would override the SPD's prime objective of German reunification and thus empower ultra-nationalist and Communist movements in democratic countries. He also thought the ECSC would end any hopes of nationalising the steel industry and lock in a Europe of "cartels, clerics and conservatives". Younger members of the party like Carlo Schmid, were, however, in favor of the Community and pointed to the long socialist support for the supranational idea. In France, Schuman had gained strong political and intellectual support from all sections of the nation and many non-communist parties. Notable amongst these were ministerial colleague Andre Philip, president of the Foreign Relations Committee Edouard Bonnefous, and former prime minister, Paul Reynaud. Projects for a coal and steel authority and other supranational communities were formulated in specialist subcommittees of the Council of Europe in the period before it became French government policy. Charles de Gaulle, who was then out of power, had been an early supporter of "linkages" between economies, on French terms, and had spoken in 1945
of a "European confederation" that would exploit the resources of the Ruhr. However, he opposed the ECSC as a faux (false) pooling ("le pool, ce faux semblant") because he considered it an unsatisfactory "piecemeal approach" to European unity and because he considered the French government "too weak" to dominate the ECSC as he thought proper. De Gaulle also felt that the ECSC had insufficient supranational authority because the Assembly was not ratified by a European referendum and he did not accept Raymond Aron's contention that the ECSC was intended as a movement away from United States domination. Consequently, de Gaulle and his followers in the RPF voted against ratification in the lower house of the French Parliament. Despite these attacks and those from the extreme left, the ECSC found substantial public support. It gained strong majority votes in all eleven chambers of the parliaments of the Six, as well as approval among associations and European public opinion. In 1950, many had thought another war was inevitable. The steel and coal interests, however, were quite vocal in their opposition. The Council of Europe, created by a proposal of Schuman's first government in May 1948, helped articulate European public opinion and gave the Community idea positive support. The UK Prime Minister Clement Attlee opposed Britain joining the proposed European Coal and Steel Community, saying that he 'would not accept the [UK] economy being handed over to an authority that is utterly undemocratic and is responsible to nobody.'
Treaty The 100-article Treaty of Paris, which established the ECSC, was signed on 18 April 1951 by "the inner six": France, West Germany, Italy, Belgium, the Netherlands and Luxembourg. The ECSC was based on supranational principles and was, through the establishment of a common market for coal and steel, intended to expand the economy, increase employment, and raise the standard of living within the Community. The market was also intended to progressively rationalise the distribution of production whilst ensuring stability and employment. The common market for coal was opened on 10 February 1953, and for steel on 1 May 1953. Upon taking effect, the ECSC replaced the International Authority for the Ruhr. On 11 August 1952, the United States was the first non-ECSC member to recognise the Community and stated it would now deal with the ECSC on coal and steel matters, establishing its delegation in Brussels. Monnet responded by choosing Washington, D.C. as the site of the ECSC's first external presence. The headline of the delegation's first bulletin read "Towards a Federal Government of Europe". Six years after the Treaty of Paris, the Treaties of Rome were signed by the six ECSC members, creating the European Economic Community (EEC) and the European Atomic Energy Community (EAEC or Euratom). These Communities were based, with some adjustments, on the ECSC. The Treaties of Rome were to be in force indefinitely, unlike the Treaty of Paris, which was to expire after fifty years. These two new Communities worked on the creation of a customs union and nuclear power community respectively. Merger and expiry Despite being separate legal entities, the ECSC, EEC and Euratom initially shared the Common Assembly and the European Court
of the first important accomplishments of the EEC was the establishment (1962) of common price levels for agricultural products. In 1968, internal tariffs (tariffs on trade between member nations) were removed on certain products. Another crisis was triggered in regard to proposals for the financing of the Common Agricultural Policy, which came into force in 1962. The transitional period whereby decisions were made by unanimity had come to an end, and majority-voting in the council had taken effect. Then-French President Charles de Gaulle's opposition to supranationalism and fear of the other members challenging the CAP led to an "empty chair policy" whereby French representatives were withdrawn from the European institutions until the French veto was reinstated. Eventually, a compromise was reached with the Luxembourg compromise on 29 January 1966 whereby a gentlemen's agreement permitted members to use a veto on areas of national interest. On 1 July 1967 when the Merger Treaty came into operation, combining the institutions of the ECSC and Euratom into that of the EEC, they already shared a Parliamentary Assembly and Courts. Collectively they were known as the European Communities. The Communities still had independent personalities although were increasingly integrated. Future treaties granted the community new powers beyond simple economic matters which had achieved a high level of integration. As it got closer to the goal of political integration and a peaceful and united Europe, what Mikhail Gorbachev described as a Common European Home. Enlargement and elections The 1960s saw the first attempts at enlargement. In 1961, Denmark, Ireland, the United Kingdom and Norway (in 1962), applied to join the three Communities. However, President Charles de Gaulle saw British membership as a Trojan horse for U.S. influence and vetoed membership, and the applications of all four countries were suspended. Greece became the first country to join the EC in 1961 as an associate member, however its membership was suspended in 1967 after the Colonels' coup d'état. A year later, in February 1962, Spain attempted to join the European Communities. However, because Francoist Spain was not a democracy, all members rejected the request in 1964. The four countries resubmitted their applications on 11 May 1967 and with Georges Pompidou succeeding Charles de Gaulle as French president in 1969, the veto was lifted. Negotiations began in 1970 under the pro-European UK government of Edward Heath, who had to deal with disagreements relating to the Common Agricultural Policy and the UK's relationship with the Commonwealth of Nations. Nevertheless, two years later the accession treaties were signed so that Denmark, Ireland and the UK joined the Community effective 1 January 1973. The Norwegian people had finally rejected membership in a referendum on 25 September 1972. The Treaties of Rome had stated that the European Parliament must be directly elected, however this required the Council to agree on a common voting system first. The Council procrastinated on the issue and the Parliament remained appointed, French President Charles de Gaulle was particularly active in blocking the development of the Parliament, with it only being granted Budgetary powers following his resignation. Parliament pressured for agreement and on 20 September 1976 the Council agreed part of the necessary instruments for election, deferring details on electoral systems which remain varied to this day. 
During the tenure of President Jenkins, in June 1979, the elections were held in all the then-members (see 1979 European Parliament election). The new Parliament, galvanised by direct election and new powers, started working full-time and became more active than the previous assemblies. Shortly after its election, the Parliament proposed that the Community adopt the flag of Europe design used by the Council of Europe. The European Council in 1984 appointed an ad hoc committee for this purpose. The European Council in 1985 largely followed the Committee's recommendations, but as the adoption of a flag was strongly reminiscent of a national flag representing statehood, was controversial, the "flag of Europe" design was adopted only with the status of a "logo" or "emblem". The European Council, or European summit, had developed since the 1960s as an informal meeting of the Council at the level of heads of state. It had originated from then-French President Charles de Gaulle's resentment at the domination of supranational institutions (e.g. the Commission) over the integration process. It was mentioned in the treaties for the first time in the Single European Act (see below). Toward Maastricht Greece re-applied to join the community on 12 June 1975, following the restoration of democracy, and joined on 1 January 1981. Following on from Greece, and after their own democratic restoration, Spain and Portugal applied to the communities in 1977 and joined together on 1 January 1986. In 1987 Turkey formally applied to join the Community and began the longest application process for any country. With the prospect of further enlargement, and a desire to increase areas of co-operation, the Single European Act was signed by the foreign ministers on 17 and 28 February 1986 in Luxembourg and The Hague respectively. In a single document it dealt with reform of institutions, extension of powers, foreign policy cooperation and the single market. It came into force on 1 July 1987. The act was followed by work on what would be the Maastricht Treaty, which was agreed on 10 December 1991, signed the following year and coming into force on 1 November 1993 establishing the European Union, and paving the way for the European Monetary Union. European Community The EU absorbed the European Communities as one of its three pillars. The EEC's areas of activities were enlarged and were renamed the European Community, continuing to follow the supranational structure of the EEC. The EEC institutions became those of the EU, however the Court, Parliament and Commission had only limited input in the new pillars, as they worked on a more intergovernmental system than the European Communities. This was reflected in the names of the institutions, the Council was formally the "Council of the European Union" while the Commission was formally the "Commission of the European Communities". However, after the Treaty of Maastricht, Parliament gained a much bigger role. Maastricht brought in the codecision procedure, which gave it equal
Merger Treaty (Treaty of Brussels). In 1993 a complete single market was achieved, known as the internal market, which allowed for the free movement of goods, capital, services, and people within the EEC. In 1994 the internal market was formalised by the EEA agreement. This agreement also extended the internal market to include most of the member states of the European Free Trade Association, forming the European Economic Area, which encompasses 15 countries. Upon the entry into force of the Maastricht Treaty in 1993, the EEC was renamed the European Community to reflect that it covered a wider range than economic policy. This was also when the three European Communities, including the EC, were collectively made to constitute the first of the three pillars of the European Union, which the treaty also founded. The EC existed in this form until it was abolished by the 2009 Treaty of Lisbon, which incorporated the EC's institutions into the EU's wider framework and provided that the EU would "replace and succeed the European Community". The EEC was also known as the European Common Market in the English-speaking countries and sometimes referred to as the European Community even before it was officially renamed as such in 1993. History Background In 1951, the Treaty of Paris was signed, creating the European Coal and Steel Community (ECSC). This was an international community based on supranationalism and international law, designed to help the economy of Europe and prevent future war by integrating its members. With the aim of creating a federal Europe two further communities were proposed: a European Defence Community and a European Political Community. While the treaty for the latter was being drawn up by the Common Assembly, the ECSC parliamentary chamber, the proposed defense community was rejected by the French Parliament. ECSC President Jean Monnet, a leading figure behind the communities, resigned from the High Authority in protest and began work on alternative communities, based on economic integration rather than political integration. After the Messina Conference in 1955, Paul Henri Spaak was given the task to prepare a report on the idea of a customs union. The so-called Spaak Report of the Spaak Committee formed the cornerstone of the intergovernmental negotiations at Val Duchesse conference centre in 1956. Together with the Ohlin Report the Spaak Report would provide the basis for the Treaty of Rome. In 1956, Paul Henri Spaak led the Intergovernmental Conference on the Common Market and Euratom at the Val Duchesse conference centre, which prepared for the Treaty of Rome in 1957. The conference led to the signature, on 25 March 1957, of the Treaty of Rome establishing a European Economic Community. Creation and early years The resulting communities were the European Economic Community (EEC) and the European Atomic Energy Community (EURATOM or sometimes EAEC). These were markedly less supranational than the previous communities, due to protests from some countries that their sovereignty was being infringed (however there would still be concerns with the behaviour of the Hallstein Commission). Germany became a founding member of the EEC, and Konrad Adenauer was made leader in a very short time. The first formal meeting of the Hallstein Commission was held on 16 January 1958 at the Chateau de Val-Duchesse. The EEC (direct ancestor of the modern Community) was to create a customs union while Euratom would promote co-operation in the nuclear power sphere. 
The EEC rapidly became the most important of these and expanded its activities. One of the first important accomplishments of the EEC was the establishment (1962) of common price levels for agricultural products. In 1968, internal tariffs (tariffs on trade between member nations) were removed on certain products. Another crisis was triggered in regard to proposals for the financing of the Common Agricultural Policy, which came into force in 1962. The transitional period whereby decisions were made by unanimity had come to an end, and majority voting in the Council had taken effect. Then-French President Charles de Gaulle's opposition to supranationalism, and his fear of the other members challenging the CAP, led to an "empty chair policy" whereby French representatives were withdrawn from the European institutions until the French veto was reinstated. Eventually, a compromise was reached with the Luxembourg compromise on 29 January 1966, whereby a gentlemen's agreement permitted members to use a veto on areas of national interest. When the Merger Treaty came into operation on 1 July 1967, combining the institutions of the ECSC and Euratom into those of the EEC, they already shared a Parliamentary Assembly and Courts. Collectively they were known as the European Communities. The Communities still had independent personalities, although they were increasingly integrated. Future treaties granted the Community new powers beyond simple economic matters, which had already achieved a high level of integration, bringing it closer to the goal of political integration and a peaceful and united Europe: what Mikhail Gorbachev described as a Common European Home. Enlargement and elections The 1960s saw the first attempts at enlargement. In 1961, Denmark, Ireland and the United Kingdom, followed by Norway in 1962, applied to join the three Communities. However, President Charles de Gaulle saw British membership as a Trojan horse for U.S. influence and vetoed membership, and the applications of all four countries were suspended. Greece became the first country to join the EC in 1961 as an associate member; however, its membership was suspended in 1967 after the Colonels' coup d'état. A year later, in February 1962, Spain attempted to join the European Communities; because Francoist Spain was not a democracy, all members rejected the request in 1964. The four countries resubmitted their applications on 11 May 1967, and with Georges Pompidou succeeding Charles de Gaulle as French president in 1969, the veto was lifted. Negotiations began in 1970 under the pro-European UK government of Edward Heath, who had to deal with disagreements relating to the Common Agricultural Policy and the UK's relationship with the Commonwealth of Nations. Nevertheless, two years later the accession treaties were signed, and Denmark, Ireland and the UK joined the Community with effect from 1 January 1973. The Norwegian people, however, rejected membership in a referendum on 25 September 1972. The Treaties of Rome had stated that the European Parliament must be directly elected; however, this required the Council to agree on a common voting system first. The Council procrastinated on the issue and the Parliament remained appointed; French President Charles de Gaulle was particularly active in blocking the development of the Parliament, which was granted budgetary powers only after his resignation.
Parliament pressured for agreement, and on 20 September 1976 the Council agreed part of the necessary instruments for election, deferring details on electoral systems, which remain varied to this day. During the tenure of President Jenkins, in June 1979, the elections were held in all the then-member states (see 1979 European Parliament election). The new Parliament, galvanised by direct election and new powers, started working full-time and became more active than the previous assemblies. Shortly after its election, the Parliament proposed that the Community adopt the flag of Europe design used by the Council of Europe. The European Council in 1984 appointed an ad hoc committee for this purpose. The European Council in 1985 largely followed the Committee's recommendations, but because the adoption of a flag was strongly reminiscent of a national flag representing statehood, and was therefore controversial, the "flag of Europe" design was adopted only with the status of a "logo" or "emblem". The European Council, or European summit, had developed since the 1960s as an informal meeting of the Council at the level of heads of state. It had originated from then-French President Charles de Gaulle's resentment at the domination of supranational institutions (e.g. the Commission) over the integration process. It was mentioned in the treaties for the first time in the Single European Act (see below). Toward Maastricht Greece re-applied to join the Community on 12 June 1975, following the restoration of democracy, and joined on 1 January 1981. Following on from Greece, and after their own democratic restorations, Spain and Portugal applied to the Communities in 1977 and joined together on 1 January 1986. In 1987 Turkey formally applied to join the Community and began the longest application process for any country. With the prospect of further enlargement, and a desire to increase areas of co-operation, the Single European Act was signed by the foreign ministers on 17 and 28 February 1986 in Luxembourg and The Hague respectively. In a single document it dealt with reform of institutions, extension of powers, foreign policy cooperation and the single market. It came into force on 1 July 1987. The act was followed by work on what would become the Maastricht Treaty, which was agreed on 10 December 1991, signed the following year and came into force on 1 November 1993, establishing the European Union and paving the way for European Monetary Union. European Community The EU absorbed the European Communities as one of its three pillars. The EEC's areas of activity were enlarged and it was renamed the European Community, continuing to follow the supranational structure of the EEC. The EEC institutions became those of the EU; however, the Court, Parliament and Commission had only limited input in the new pillars, as these worked on a more intergovernmental system than the European Communities. This was reflected in the names of the institutions: the Council was formally the "Council of the European Union" while the Commission was formally the "Commission of the European Communities". However, after the Treaty of Maastricht, Parliament gained a much bigger role. Maastricht brought in the codecision procedure, which gave it equal legislative power with the Council on Community matters. Hence, with the greater powers of the supranational institutions and the operation of Qualified Majority Voting in the Council, the Community pillar could be described as a far more federal method of decision-making.
The Treaty of Amsterdam transferred responsibility for free movement of persons (e.g., visas, illegal immigration, asylum) from the Justice and Home Affairs (JHA) pillar to the European Community (JHA was renamed Police and Judicial Co-operation in Criminal Matters (PJCC) as a result). Both Amsterdam and the Treaty of Nice also extended the codecision procedure to nearly all policy areas, giving Parliament equal power to the Council in the Community. In 2002, the Treaty of Paris, which established the ECSC, expired, having reached its 50-year limit (as the first treaty, it was the only one with an expiry date).
The Crown and Tok Pisin In Tok Pisin, the Queen is referred to as Missis Kwin and as Mama belong big family. The Queen's eldest son, Charles is known in Tok Pisin as Nambawan Pikinini Bilong Misis Kwin (first born child of Missis Kwin). The late Prince Philip, Duke of Edinburgh was addressed as "Oldfella Pili-Pili Him Bilong Misis Kwin". In August 1984, the Prince of Wales visited Manus island and in a lavish ceremony was crowned the "10th Lapan of Manus". A feast was organised for this occasion and all the local chiefs were invited. Charles—draped with dogs' teeth necklaces—accepted the title by saying, "Wuroh, wuroh, wuroh, all man meri bilong Manus. Mi hammamas tru" (Tok Pisin: Thank you all men and women of Manus. I am truly filled with happiness). In 1996, the people of Papua New Guinea presented the Queen with a portrait, titled Missis Kwin. Painted by artist Mathias Kauage, the Queen is shown wearing a Gerua, an important ceremonial headdress traditionally worn by Chieftains in the Highlands of Papua New Guinea. A Gerua is generally made of wood that is carved and then painted in bright colours to resemble the feathers of birds of paradise, and other species. According to the artist, the portrait represents the Queen as Head of the Commonwealth.
Royal visits Prince Philip, Duke of Edinburgh, visited during an extended Commonwealth tour which lasted from October 1956 until February 1957. Prince Edward and Katherine, the Duke and Duchess of Kent, visited in 1969 to open the 3rd South Pacific Games in Port Moresby. The Queen visited Papua New Guinea for the first time, along with Prince Philip and Princess Anne, in February 1974. The Queen returned in 1977 during her Silver Jubilee tour, when she toured the capital Port Moresby, Popondetta and Alotau. The Queen and the Duke visited again in October 1982. Charles, Prince of Wales, toured in 1966, while he was a student in Australia. For the independence celebrations in 1975, the Queen of Papua New Guinea was represented by the Prince of Wales. Charles visited again in 1984 to open the new parliament building in Port Moresby. Prince Andrew, Duke of York visited in 1991 to open the 9th South Pacific Games. Anne, Princess Royal visited in 2005 for the 30th anniversary of independence celebrations. Among other places, the Princess visited the Bomana War Cemetery, Anglicare Stop Aids centre at Waigani, Cheshire Homes at Hohola, and the Violence Against
has been dismissed from office, although in 1991, Sir Vincent Serei Eri resigned from office after Prime Minister Sir Rabbie Namaliu advised the queen to dismiss him. All executive powers of Papua New Guinea rest with the sovereign. All laws in Papua New Guinea are enacted only with the granting of Royal Assent, done by the Governor-General on behalf of the sovereign. The Governor-General is also responsible for proroguing, and dissolving Parliament. The opening of a session of Parliament is accompanied by the Speech from the Throne by the Governor-General. The Crown and the Courts The Papua New Guinean monarch, on the advice of the National Executive Council, can also grant immunity from prosecution, exercise the royal prerogative of mercy, and pardon offences against the Crown, either before, during, or after a trial. The exercise of the 'Power of Mercy' to grant a pardon and the commutation of prison sentences is described in section 151 of the Constitution. Title The monarch holds a unique Papua New Guinean title, granted by the constitution—Elizabeth the Second, Queen of Papua New Guinea and of Her other Realms and Territories, Head of the Commonwealth—though, the monarch is typically styled Queen of Papua New Guinea and is addressed as such when in Papua New Guinea or performing duties on behalf of Papua New Guinea abroad. Colloquially, the Queen is referred to as "Missis Kwin" and as "Mama belong big family" in the creole language of Tok Pisin. Oath of allegiance The oath of allegiance in Papua New Guinea is: Succession The constitution provides that the Queen's heirs shall succeed her as head of state. Like some realms, Papua New Guinea defers to United Kingdom law to determine the line of succession. Succession is by absolute primogeniture governed by the provisions of the Succession to the Crown Act 2013, as well as the Act of Settlement, 1701, and the Bill of Rights, 1689. This legislation limits the succession to the natural (i.e. non-adopted), legitimate descendants of Sophia, Electress of Hanover, and stipulates that the monarch cannot be a Roman Catholic, nor married to one, and must be in communion with the Church of England upon ascending the throne. Though these constitutional laws, as they apply to Papua New Guinea, still lie within the control of the British parliament, via adopting the Statute of Westminster both the United Kingdom and Papua New Guinea agreed not to change the rules of succession without the unanimous consent of the other realms, unless explicitly leaving the shared monarchy relationship; a situation that applies identically in all the other realms, and which has been likened to a treaty amongst these countries. Cultural role The Queen's Official Birthday is a public holiday in Papua New Guinea. In Papua New Guinea, it is usually celebrated on the second Monday of June every year. Official celebrations occur at hotels in Port Moresby, and much of the day is filled with sports matches, fireworks displays, and other celebrations and events. Honours and medals are given for public service to Papua New Guineans, who are mentioned in the Queen's Birthday Honours List. The national police force of Papua New Guinea is known as "The Royal Papua New Guinea Constabulary". The Crown and Honours Within the Commonwealth realms, the monarch is deemed the fount of honour. Similarly, the monarch, as Sovereign of Papua New Guinea, confers awards and honours in Papua New Guinea in her name. 
Most of them are often awarded on the advice of "Her Majesty's Papua New Guinea Ministers". Papua New Guinea's own national honours and awards system, known as "The Orders of Papua New Guinea", was formally established on 23 August 2005 by authority of the Queen of Papua New Guinea, Elizabeth II. The Queen is the Sovereign and Head of the Orders of Papua New Guinea. Her vice-regal representative, the Governor-General, is the Chancellor of the Orders of Papua New Guinea and Principal Grand Companion of the Order of Logohu. The Crown and the Defence Force The Crown sits at the pinnacle of the Papua New Guinea Defence Force. It is reflected in Papua New Guinea's maritime vessels, which bear the prefix HMPNGS, i.e., Her Majesty's Papua New Guinea Ship. St Edward's Crown appears on Papua New Guinea's Defence Force rank insignia, which illustrates the monarchy as the locus of authority. Members of the royal family also act as colonels-in-chief of various regiments, reflecting the Crown's relationship with the Defence Force through participation in military ceremonies both at home and abroad. Charles, Prince of Wales is the Colonel-in-Chief of Papua New Guinea's Royal Pacific Islands Regiment. In 2012, Charles, dressed in the forest green uniform of the regiment, presented troops with new colours at the Sir John Guise Stadium in
EFTA is the European Free Trade Association, a trade organisation and free trade area. EFTA may also refer to: European Fair Trade Association,
Islands from 1968. Since then, the Faroe Islands have examined the possibility of membership of EFTA. In Greenland there has been a political debate about whether the Government of Greenland consider filing for membership of the EFTA. However, membership of the EFTA is not possible without the Kingdom of Denmark as a state becoming a member of the organization on behalf of the Faroe Islands and/or Greenland. EFTA assumes that membership is reserved for states. Special procedures for the accession of states are laid down in accordance with Article 56 of the EFTA Convention. The Kingdom of Denmark's
membership of EFTA is reserved under the Kingdom of Denmark under international law. As parts of the Kingdom of Denmark, the Faroe Islands and Greenland cannot, with the current treaty basis, become independent members of the EFTA. In the event of regaining membership of EFTA for the Kingdom of Denmark it can be arranged to take effect for only the Faroe Islands and/or Greenland. EFTA membership would be geographically separated from EU membership(which is limited to Denmark). It is possible to assume that membership of the EU with effect for Denmark does not preclude membership of the EFTA with effect for the Faroe Islands and/or Greenland. This form of membership of the EFTA appears to be possible in accordance with the EFTA treaty. In mid-2005, representatives of the Faroe Islands raised the possibility of their territory joining the EFTA. According to Article 56 of the EFTA Convention, only states may become members of the EFTA. The Faroes are a constituent country of the Kingdom of Denmark, and not a sovereign state in their own right. Consequently, they considered the possibility that the "Kingdom of Denmark in respect of the Faroes" could join the EFTA, though the Danish Government has stated that this mechanism would not allow the Faroes to become a separate member of the EEA because Denmark was already a party to the EEA Agreement. The Government of Denmark officially supports membership of the EFTA with effect for the Faroe Islands. The Faroes already have an extensive bilateral free trade agreement with Iceland, known as the Hoyvík Agreement. United Kingdom The United Kingdom was a co-founder of EFTA in 1960, but ceased to be a member upon joining the European Economic Community. The country held a referendum in 2016 on withdrawing from the EU (popularly referred to as "Brexit"), resulting in a 51.9% vote in favour of withdrawing. A 2013 research paper presented to the Parliament of the United Kingdom proposed a number of alternatives to EU membership which would continue to allow it access to the EU's internal market, including continuing EEA membership as an EFTA member state, or the Swiss model of a number of bilateral treaties covering the provisions of the single market. In the first meeting since the Brexit vote, EFTA reacted by saying both that they were open to a UK return, and that Britain has many issues to work through. The president of Switzerland Johann Schneider-Ammann stated that its return would strengthen the association. However, in August 2016 the Norwegian Government expressed reservations. Norway's European affairs minister, Elisabeth Vik Aspaker, told the Aftenposten newspaper: "It’s not certain that it would be a good idea to let a big country into this organization. It would shift the balance, which is not necessarily in Norway’s interests." In late 2016, the Scottish First Minister said that her priority was to keep the whole of the UK in the European single market but that taking Scotland alone into the EEA was an option being "looked at". However, other EFTA states have stated that only sovereign states are eligible for membership, so it could only join if it became independent from the UK, unless the solution scouted for the Faroes in 2005 were to be adopted (see above). In early 2018, British MPs Antoinette Sandbach, Stephen Kinnock and Stephen Hammond all called for the UK to rejoin EFTA. 
Relationship with the European Union: the European Economic Area In 1992, the EU, its member states, and the EFTA member states signed the Agreement on the European Economic Area in Oporto, Portugal. However, the proposal that Switzerland ratify its participation was rejected by referendum. (Nevertheless, Switzerland has multiple bilateral treaties with the EU that allow it to participate in the European Single Market, the Schengen Agreement and other programmes). Thus, except for Switzerland, the EFTA members are also members of the European Economic Area (EEA). The EEA comprises three member states of the European Free Trade Association (EFTA) and 27 member states of the European Union (EU), including Croatia which the agreement is provisionally applied to, pending its ratification by all contracting parties. It was established on 1 January 1994 following an agreement with the European Community (which had become the EU two months earlier). It allows the EFTA-EEA states to participate in the EU's Internal Market without being members of the EU. They adopt almost all EU legislation related to the single market, except laws on agriculture and fisheries. However, they also contribute to and influence the formation of new EEA relevant policies and legislation at an early stage as part of a formal decision-shaping process. One EFTA member, Switzerland, has not joined the EEA but has a series of bilateral agreements, including a free trade agreement, with the EU. The following table summarises the various components of EU laws applied in the EFTA countries and their sovereign territories. Some territories of EU member states also have a special status in regard to EU laws applied as is the case with some European microstates. EEA institutions A Joint Committee consisting of the EEA-EFTA States plus the European Commission (representing the EU) has the function of extending relevant EU law to the non EU members. An EEA Council meets twice yearly to govern the overall relationship between the EEA members. Rather than setting up pan-EEA institutions, the activities of the EEA are regulated by the EFTA Surveillance Authority and the EFTA Court. The EFTA Surveillance Authority and the EFTA Court regulate the activities of the EFTA members in respect of their obligations in the European Economic Area (EEA). Since Switzerland is not an EEA member, it does not participate in these institutions. The EFTA Surveillance Authority performs a role for EFTA members that is equivalent to that of the European Commission for the EU, as "guardian of the treaties" and the EFTA Court performs the European Court of Justice-equivalent role. The original plan for the EEA lacked the EFTA Court or the EFTA Surveillance Authority: the European Court of Justice and the European Commission were to exercise those roles. However, during the negotiations for the EEA agreement, the European Court of Justice informed the Council of the European Union by way of letter that it considered that it would be a violation of the treaties to give to the EU institutions these powers with respect to non-EU member states. Therefore, the current arrangement was developed instead. EEA and Norway Grants The EEA and Norway Grants are the financial contributions of Iceland, Liechtenstein and Norway to reduce social and economic disparities in Europe. They were established in conjunction with the 2004 enlargement of the European Economic Area (EEA), which brought together the EU, Iceland, Liechtenstein and Norway in the Internal Market. 
In the period from 2004 to 2009, €1.3 billion of project funding was made available for project funding in the 15 beneficiary states in Central and Southern Europe. The EEA and Norway Grants are administered by the Financial Mechanism Office, which is affiliated to the EFTA Secretariat in Brussels. International conventions EFTA also originated the Hallmarking Convention and the Pharmaceutical Inspection Convention, both of which are open to non-EFTA states. International trade relations EFTA has several free trade agreements with non-EU countries as well as declarations on cooperation and joint workgroups to improve trade. Currently, the EFTA States have established preferential trade relations with 24 states and territories, in addition to the 27 member states of the European Union. EFTA's interactive Free Trade Map gives an overview of the partners worldwide. Free trade agreements Albania Bosnia and Herzegovina Canada (Canada-European Free Trade Association Free Trade Agreement) Central American States (Costa Rica, Guatemala, Panama) Chile Colombia Ecuador Egypt Georgia Gulf Co-operation Council (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, United Arab Emirates) Hong Kong Indonesia ( The ratification procedures are currently ongoing and the entry into force is pending) Israel Japan Jordan South Korea Lebanon Mexico Montenegro Morocco (excluding Western Sahara) North Macedonia Palestinian National Authority Peru Philippines Serbia Singapore Southern African Customs Union (Botswana, Eswatini, Lesotho, Namibia, South Africa) Tunisia Turkey Ukraine Ongoing free trade negotiations Algeria (Negotiations currently on hold) Central American States (Honduras) (Negotiations currently on hold) India Malaysia MERCOSUR (Argentina, Brazil, Paraguay Uruguay and Venezuela) (Negotiations currently on hold) Thailand (Negotiations currently on hold) Vietnam Declarations on cooperation or dialogue on closer trade relations Mauritius MERCOSUR (Argentina, Brazil, Paraguay and Uruguay) Moldova Mongolia Myanmar Pakistan Travel policies Free movement of people within EFTA and the EU/EEA EFTA member states' citizens enjoy freedom of movement in each other's territories in accordance with the EFTA convention. EFTA nationals also enjoy freedom of movement in the European Union (EU). EFTA nationals and EU citizens are not only visa-exempt but are legally entitled to enter and reside in each other's countries. The Citizens' Rights Directive (also sometimes called the "Free Movement Directive") defines the right of free movement for citizens of the European Economic Area (EEA), which includes the three EFTA members Iceland, Norway and Liechtenstein plus the member states of the EU. Switzerland, which is a member of EFTA but not of the EEA, is not bound by the Directive but rather has a separate bilateral agreement on free movement with the EU. As a result, a citizen of an EFTA country can live and work in all the other EFTA countries and in all the EU countries, and a citizen of an EU country can live and work in all the EFTA countries (but for voting and working in sensitive fields, such as government / police / military, citizenship is often required, and non-citizens may not have the same rights to welfare and unemployment benefits as
than citizens of the six largest countries. Germany (80.9 million inhabitants) has 96 seats (previously 99 seats), i.e. one seat for 843,000 inhabitants. Malta (0.4 million inhabitants) has 6 seats, i.e. one seat for 70,000 inhabitants. The new system implemented under the Lisbon Treaty, including revising seat allocations well before elections, was intended to avoid political horse-trading when the allocations have to be revised to reflect demographic changes. Pursuant to this apportionment, the constituencies are formed. In four EU member states (Belgium, Ireland, Italy and Poland), the national territory is divided into a number of constituencies. In the remaining member states, the whole country forms a single constituency. All member states hold elections to the European Parliament using various forms of proportional representation. Transitional arrangements Due to the delay in ratifying the Lisbon Treaty, the seventh parliament was elected under the lower Nice Treaty cap. A small-scale treaty amendment was ratified on 29 November 2011. This amendment brought in transitional provisions to allow the 18 additional MEPs created under the Lisbon Treaty to be elected or appointed before the 2014 election. Under the Lisbon Treaty reforms, Germany was the only state to lose members, from 99 to 96. However, these seats were not removed until the 2014 election. Salaries and expenses Before 2009, members received the same salary as members of their national parliament. However, in 2009 a new Members' Statute came into force, after years of attempts, giving all members equal monthly pay (€8,484.05 each in 2016), subject to a European Union tax; the salary can also be taxed nationally. MEPs are entitled to a pension, paid by Parliament, from the age of 63. Members are also entitled to allowances for office costs and subsistence, and travelling expenses, based on actual cost. Besides their pay, members are granted a number of privileges and immunities. To ensure their free movement to and from the Parliament, they are accorded by their own states the facilities accorded to senior officials travelling abroad and, by other state governments, the status of visiting foreign representatives. When in their own state, they have all the immunities accorded to national parliamentarians, and, in other states, they have immunity from detention and legal proceedings. However, immunity cannot be claimed when a member is found committing a criminal offence, and the Parliament also has the right to strip a member of their immunity. Political groups MEPs in Parliament are organised into eight different parliamentary groups; a further thirty non-attached members are known as non-inscrits. The two largest groups are the European People's Party (EPP) and the Socialists & Democrats (S&D). These two groups have dominated the Parliament for much of its life, continuously holding between 50 and 70 percent of the seats between them. No single group has ever held a majority in Parliament. As a result of being broad alliances of national parties, European group parties are very decentralised and hence have more in common with parties in federal states like Germany or the United States than with those in unitary states like the majority of the EU states. Nevertheless, the European groups were actually more cohesive than their US counterparts between 2004 and 2009. Groups are often based on a single European political party such as the European People's Party.
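The seat figures quoted above are simple ratio arithmetic, and the contrast between Germany and Malta is easy to verify. The following short Python sketch is illustrative only: it uses the approximate population and seat numbers given in this article (not current official data) to reproduce the inhabitants-per-seat comparison behind degressive proportionality.

# Illustrative check of the inhabitants-per-seat figures quoted above.
# Populations (80.9 million and 0.4 million) and seat counts (96 and 6)
# are the approximate numbers given in the text, not live official data.
seats = {
    "Germany": (80_900_000, 96),
    "Malta": (400_000, 6),
}
for state, (population, seat_count) in seats.items():
    print(f"{state}: one seat per {population / seat_count:,.0f} inhabitants")
# Prints roughly 842,708 for Germany and 66,667 for Malta, i.e. a Maltese
# seat represents far fewer people than a German one, which is what
# degressive proportionality means in practice.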
However, they can, like the liberal group, include more than one European party as well as national parties and independents. For a group to be recognised, it needs 23 MEPs from seven different countries. Groups receive funding from the parliament. Grand coalition Given that the Parliament does not form the government in the traditional sense of a parliamentary system, its politics have developed along more consensual lines than the majority rule of competing parties and coalitions. Indeed, for much of its life it has been dominated by a grand coalition of the European People's Party and the Party of European Socialists. The two major parties tend to co-operate to find a compromise between their two groups, leading to proposals endorsed by huge majorities. However, this does not always produce agreement, and each may instead try to build other alliances, the EPP normally with other centre-right or right-wing groups and the PES with centre-left or left-wing groups. Sometimes, the Liberal Group is then in the pivotal position. There are also occasions where very sharp party-political divisions have emerged, for example over the resignation of the Santer Commission. When the initial allegations against the Commission emerged, they were directed primarily against Édith Cresson and Manuel Marín, both socialist members. When the parliament was considering refusing to discharge the Community budget, President Jacques Santer stated that a no vote would be tantamount to a vote of no confidence. The Socialist group supported the Commission and saw the issue as an attempt by the EPP to discredit their party ahead of the 1999 elections. The Socialist leader, Pauline Green MEP, attempted a vote of confidence and the EPP put forward counter-motions. During this period the two parties took on roles similar to a government-opposition dynamic, with the Socialists supporting the executive and the EPP renouncing its previous coalition support and voting it down. Politicisation such as this has been increasing, as Simon Hix of the London School of Economics noted in 2007. During the fifth term, 1999 to 2004, there was a break in the grand coalition, resulting in a centre-right coalition between the Liberal and People's parties. This was reflected in the Presidency of the Parliament, with the terms being shared between the EPP and the ELDR, rather than the EPP and the Socialists. In the following term the liberal group grew to hold 88 seats, the largest number of seats held by any third party in Parliament. Elections Elections have taken place directly in every member state every five years since 1979; there have been nine elections. When a nation joins mid-term, a by-election is held to elect its representatives. This has happened six times, most recently when Croatia joined in 2013. Elections take place across four days according to local custom and, apart from having to be proportional, the electoral system is chosen by the member state. This includes allocation of sub-national constituencies; while most members have a national list, some, like the UK and Poland, divide their allocation between regions. Seats are allocated to member states according to their population; since 2014 no state has had more than 96 seats, and none fewer than 6, to maintain proportionality. The most recent Union-wide elections to the European Parliament were the European elections of 2019, held from 23 to 26 May 2019. They were the largest simultaneous transnational elections ever held anywhere in the world.
The first session of the ninth parliament started 2 July 2019. European political parties have the exclusive right to campaign during the European elections (as opposed to their corresponding EP groups). There have been a number of proposals designed to attract greater public attention to the elections. One such innovation in the 2014 elections was that the pan-European political parties fielded "candidates" for president of the Commission, the so-called Spitzenkandidaten (German, "leading candidates" or "top candidates"). However, European Union governance is based on a mixture of intergovernmental and supranational features: the President of the European Commission is nominated by the European Council, representing the governments of the member states, and there is no obligation for them to nominate the successful "candidate". The Lisbon Treaty merely states that they should take account of the results of the elections when choosing whom to nominate. The so-called Spitzenkandidaten were Jean-Claude Juncker for the European People's Party, Martin Schulz for the Party of European Socialists, Guy Verhofstadt for the Alliance of Liberals and Democrats for Europe Party, Ska Keller and José Bové jointly for the European Green Party and Alexis Tsipras for the Party of the European Left. Turnout dropped consistently every year since the first election, and from 1999 until 2019 was below 50%. In 2007 both Bulgaria and Romania elected their MEPs in by-elections, having joined at the beginning of 2007. The Bulgarian and Romanian elections saw two of the lowest turnouts for European elections, just 28.6% and 28.3% respectively. This trend was interrupted in the 2019 election, when turnout increased by 8% EU-wide, rising to 50.6%, the highest since 1994. In England, Scotland and Wales, EP elections were originally held for a constituency MEP on a first-past-the-post basis. In 1999 the system was changed to a form of proportional representation where a large group of candidates stand for a post within a very large regional constituency. One can vote for a party, but not a candidate (unless that party has a single candidate). Proceedings Each year the activities of the Parliament cycle between committee weeks where reports are discussed in committees and interparliamentary delegations meet, political group weeks for members to discuss work within their political groups and session weeks where members spend 3½ days in Strasbourg for part-sessions. In addition six 2-day part-sessions are organised in Brussels throughout the year. Four weeks are allocated as constituency week to allow members to do exclusively constituency work. Finally there are no meetings planned during the summer weeks. The Parliament has the power to meet without being convened by another authority. Its meetings are partly controlled by the treaties but are otherwise up to Parliament according to its own "Rules of Procedure" (the regulations governing the parliament). During sessions, members may speak after being called on by the President. Members of the Council or Commission may also attend and speak in debates. Partly due to the need for interpretation, and the politics of consensus in the chamber, debates tend to be calmer and more polite than, say, the Westminster system. Voting is conducted primarily by a show of hands, that may be checked on request by electronic voting. Votes of MEPs are not recorded in either case, however; that only occurs when there is a roll-call ballot. 
This is required for the final votes on legislation and also whenever a political group or 30 MEPs request it. The number of roll-call votes has increased with time. Votes can also be a completely secret ballot (for example, when the president is elected). All recorded votes, along with minutes and legislation, are recorded in the Official Journal of the European Union and can be accessed online. Votes usually do not follow a debate, but rather they are grouped with other due votes on specific occasions, usually at noon on Tuesdays, Wednesdays or Thursdays. This is because the length of the vote is unpredictable and if it continues for longer than allocated it can disrupt other debates and meetings later in the day. Members are arranged in a hemicycle according to their political groups (in the Common Assembly, prior to 1958, members sat alphabetically) who are ordered mainly by left to right, but some smaller groups are placed towards the outer ring of the Parliament. All desks are equipped with microphones, headphones for translation and electronic voting equipment. The leaders of the groups sit on the front benches at the centre, and in the very centre is a podium for guest speakers. The remaining half of the circular chamber is primarily composed of the raised area where the President and staff sit. Further benches are provided between the sides of this area and the MEPs, these are taken up by the Council on the far left and the Commission on the far right. Both the Brussels and Strasbourg hemicycle roughly follow this layout with only minor differences. The hemicycle design is a compromise between the different Parliamentary systems. The British-based system has the different groups directly facing each other while the French-based system is a semicircle (and the traditional German system had all members in rows facing a rostrum for speeches). Although the design is mainly based on a semicircle, the opposite ends of the spectrum do still face each other. With access to the chamber limited, entrance is controlled by ushers who aid MEPs in the chamber (for example in delivering documents). The ushers can also occasionally act as a form of police in enforcing the President, for example in ejecting an MEP who is disrupting the session (although this is rare). The first head of protocol in the Parliament was French, so many of the duties in the Parliament are based on the French model first developed following the French Revolution. The 180 ushers are highly visible in the Parliament, dressed in black tails and wearing a silver chain, and are recruited in the same manner as the European civil service. The President is allocated a personal usher. President and organisation The President is essentially the speaker of the Parliament and presides over the plenary when it is in session. The President's signature is required for all acts adopted by co-decision, including the EU budget. The President is also responsible for representing the Parliament externally, including in legal matters, and for the application of the rules of procedure. The President is elected for two-and-a-half-year terms, meaning two elections per parliamentary term. The current President of the European Parliament is Roberta Metsola, who was elected in January 2022. In most countries, the protocol of the head of state comes before all others; however, in the EU the Parliament is listed as the first institution, and hence the protocol of its president comes before any other European, or national, protocol. 
The gifts given to numerous visiting dignitaries depend upon the President. President Josep Borrell MEP of Spain gave his counterparts a crystal cup created by an artist from Barcelona who had engraved upon it parts of the Charter of Fundamental Rights, among other things. A number of notable figures have been President of the Parliament and its predecessors. The first President was Paul-Henri Spaak MEP, one of the founding fathers of the Union. Other founding fathers include Alcide de Gasperi MEP and Robert Schuman MEP. The first two female Presidents were Simone Veil MEP in 1979 (first President of the elected Parliament) and Nicole Fontaine MEP in 1999, both Frenchwomen. A former president, Jerzy Buzek, was the first East-Central European to lead an EU institution; a former Prime Minister of Poland, he rose out of the Solidarity movement in Poland that helped overthrow communism in the Eastern Bloc. During the election of a President, the previous President (or, if unable to, one of the previous Vice-Presidents) presides over the chamber. Prior to 2009, the oldest member fulfilled this role, but the rule was changed to prevent far-right French MEP Jean-Marie Le Pen from taking the chair. Below the President, there are 14 Vice-Presidents who chair debates when the President is not in the chamber. There are a number of other bodies and posts responsible for the running of parliament besides these speakers. The two main bodies are the Bureau, which is responsible for budgetary and administration issues, and the Conference of Presidents, which is a governing body composed of the presidents of each of the parliament's political groups. Looking after the financial and administrative interests of members are five Quaestors. The annual budget of the European Parliament was EUR 1.756 billion. A 2008 report on the Parliament's finances highlighted certain overspending and mis-payments. Despite some MEPs calling for the report to be published, Parliamentary authorities refused until an MEP broke confidentiality and leaked it. Committees and delegations The Parliament has 20 Standing Committees consisting of 25 to 73 MEPs each (reflecting the political make-up of the whole Parliament), each with a chair, a bureau and a secretariat. They meet twice a month in public to draw up, amend and adopt legislative proposals and reports to be presented to the plenary. The rapporteurs for a committee are supposed to present the view of the committee, although notably this has not always been the case. In the events leading to the resignation of the Santer Commission, the rapporteur went against the Budgetary Control Committee's narrow vote to discharge the budget, and urged the Parliament to reject it. Committees can also set up sub-committees (e.g. the Subcommittee on Human Rights) and temporary committees to deal with a specific topic (e.g. on extraordinary rendition). The chairs of the Committees co-ordinate their work through the "Conference of Committee Chairmen". When co-decision was introduced it increased the Parliament's powers in a number of areas, but most notably those covered by the Committee on the Environment, Public Health and Food Safety. Previously this committee was considered by MEPs as a "Cinderella committee"; however, as it gained a new importance, it became more professional and rigorous, attracting increasing attention to its work.
The nature of the committees differs from that of their national counterparts: although smaller than those of the United States Congress, the European Parliament's committees are unusually large by European standards, with between eight and twelve dedicated members of staff and three to four support staff. Considerable administration, archives and research resources are also at the disposal of the whole Parliament when needed. Delegations of the Parliament are formed in a similar manner and are responsible for relations with parliaments outside the EU. There are 34 delegations, made up of around 15 MEPs each; the chairpersons of the delegations also cooperate in a conference, as the committee chairs do. They include "interparliamentary delegations" (which maintain relations with parliaments outside the EU), "joint parliamentary committees" (which maintain relations with the parliaments of states that are candidates or associates of the EU), the delegation to the ACP-EU Joint Parliamentary Assembly and the delegation to the Euro-Mediterranean Parliamentary Assembly. MEPs also participate in other international activities, such as the Euro-Latin American Parliamentary Assembly and the Transatlantic Legislators' Dialogue, and through election observation in third countries. Intergroups The Intergroups in the European Parliament are informal fora which gather MEPs from various political groups around any topic. They do not express the view of the European Parliament. They serve a double purpose: to address topics which cut across several committees, and to do so in a less formal manner. Their daily secretariat can be run either through the offices of MEPs or through interest groups, be they corporate lobbies or NGOs. The favoured access to MEPs which the organisation running the secretariat enjoys may be one explanation for the multiplication of Intergroups in the 1990s. They are now strictly regulated, and financial support, direct or otherwise (via secretariat staff, for example), must be officially specified in a declaration of financial interests. Intergroups are also established or renewed at the beginning of each legislature through a specific process: the proposal for the constitution or renewal of an Intergroup must be supported by at least three political groups, whose support is limited to a number of proposals in proportion to their size (for example, for the 2014-2019 legislature, the EPP or S&D political groups could each support 22 proposals, whereas the Greens/EFA or EFDD political groups could support only 7). Translation and interpretation Speakers in the European Parliament are entitled to speak in any of the 24 official languages of the European Union, ranging from French and German to Maltese and Irish. Simultaneous interpreting is offered in all plenary sessions, and all final texts of legislation are translated. With twenty-four languages, the European Parliament is the most multilingual parliament in the world and the biggest employer of interpreters in the world (employing 350 full-time interpreters and up to 400 freelancers at times of higher demand). Citizens may also address the Parliament in Basque, Catalan/Valencian and Galician. Usually a language is interpreted from a foreign tongue into an interpreter's native tongue. Due to the large number of languages, some of them little used, interpreting has since 1995 sometimes been done the opposite way, out of an interpreter's native tongue (the "retour" system).
In addition, a speech in a lesser-used language may be interpreted via a third language when no direct interpreters are available ("relay" interpreting); this happens, for example, when interpreting from Estonian into Maltese. Due to the complexity of the issues, interpretation is not word for word. Instead, interpreters have to convey the political meaning of a speech, regardless of their own views. This requires detailed understanding of the politics and terminology of the Parliament, involving a great deal of preparation beforehand (e.g. reading the documents in question). Difficulty can often arise when MEPs use profanities, jokes and word play, or speak too fast. While some MEPs see speaking their native language as an important part of their identity and can speak more fluently in debates when doing so, interpretation and its cost have been criticised by others. A 2006 report by Alexander Stubb MEP highlighted that by using only English, French and German, costs could be reduced from €118,000 per day (for the 21 languages then in use, Romanian, Bulgarian and Croatian not yet having been included) to €8,900 per day. There has also been a small-scale campaign to make French the reference language for all legal texts, on the basis of an argument that it is clearer and more precise for legal purposes. Because the proceedings are translated into all of the official EU languages, they have been used to make a multilingual corpus known as Europarl. It is widely used to train statistical machine translation systems. Annual costs According to the European Parliament website, the annual parliament budget for 2016 was €1.838 billion. The main cost categories were:
34%: staff, interpretation and translation costs
24%: information policy, IT, telecommunications
23%: MEPs' salaries, expenses, travel, offices and staff
13%: buildings
6%: political group activities
According to a European Parliament study prepared in 2013, the Strasbourg seat costs an extra €103 million over maintaining a single location, and according to the Court of Auditors an additional €5 million is related to travel expenses caused by having two seats. As a comparison, the German lower house of parliament (Bundestag) is estimated to cost €517 million in total for 2018, for a parliament with 709 members. The British House of Commons reported total annual costs in 2016-2017 of £249 million (€279 million) for its 650 seats. According to The Economist, the European Parliament costs more than the British, French and German parliaments combined. A quarter of the costs are estimated to relate to translation and interpretation (c. €460 million), and the double seats are estimated to add a further €180 million a year; for a like-for-like comparison, these two cost blocks can be excluded. On 2 July 2018, MEPs rejected proposals to tighten the rules around the General Expenditure Allowance (GEA), which "is a controversial €4,416 per month payment that MEPs are given to cover office and other expenses, but they are not required to provide any evidence of how the money is spent". Seat The Parliament is based in three different cities with numerous buildings. A protocol attached to the Treaty of Amsterdam requires that 12 plenary sessions be held in Strasbourg (none in August but two in September), which is the Parliament's official seat, while extra part sessions as well as committee meetings are held in Brussels. Luxembourg City hosts the Secretariat of the European Parliament.
The European Parliament is one of at least two assemblies in the world with more than one meeting place (another being the parliament of the Isle of Man, Tynwald) and one of the few that does not have the power to decide its own location. The Strasbourg seat is seen as a symbol of reconciliation between France and Germany, the Strasbourg region having been fought over by the two countries in the past. However, the cost and inconvenience of having two seats are questioned. While Strasbourg is the official seat, and sits alongside the Council of Europe, Brussels is home to nearly all other major EU institutions, with the majority of Parliament's work being carried out there. Critics have described the two-seat arrangement as a "travelling circus", and there is a strong movement to establish Brussels as the sole seat. This is because the other political institutions (the Commission, Council and European Council) are located there, and hence Brussels is treated as the 'capital' of the EU. This movement has received strong backing from numerous figures, including Margot Wallström, First Vice-President of the Commission from 2004 to 2010, who stated that "something that was once a very positive symbol of the EU reuniting France and Germany has now become a negative symbol of wasting money, bureaucracy and the insanity of the Brussels institutions". The Green Party has also
or it may adopt further amendments, also by an absolute majority. If the Council does not approve these, then a "Conciliation Committee" is formed. The Committee is composed of the Council members plus an equal number of MEPs, who seek to agree on a compromise. Once a position is agreed, it has to be approved by Parliament by a simple majority. Parliament's position is also aided by its mandate as the only directly elected institution, which has given it leeway to exert greater control over legislation than other institutions, as shown for example by its changes to the Bolkestein directive in 2006. The few other areas that operate the special legislative procedures are justice and home affairs, budget and taxation, and certain aspects of other policy areas, such as the fiscal aspects of environmental policy. In these areas, the Council or the Parliament decides legislation alone. The procedure also depends upon which type of institutional act is being used. The strongest act is a regulation, an act or law which is directly applicable in its entirety. Then there are directives, which bind member states to certain goals which they must achieve; they do this through their own laws and hence have room to manoeuvre in deciding upon them. A decision is an instrument which is directed at a particular person or group and is directly applicable. Institutions may also issue recommendations and opinions, which are merely non-binding declarations. There is a further document which does not follow normal procedures: the "written declaration", which is similar to an early day motion used in the Westminster system. It is a document proposed by up to five MEPs on a matter within the EU's activities and is used to launch a debate on that subject. Once posted outside the entrance to the hemicycle, the declaration can be signed by members, and if a majority do so it is forwarded to the President and announced to the plenary before being forwarded to the other institutions and formally noted in the minutes. Budget The legislative branch officially holds the Union's budgetary authority, with powers gained through the Budgetary Treaties of the 1970s and the Lisbon Treaty. The EU budget is subject to a form of the ordinary legislative procedure with a single reading, giving Parliament power over the entire budget (before 2009, its influence was limited to certain areas) on an equal footing with the Council. If there is a disagreement between them, it is taken to a conciliation committee, as it is for legislative proposals. If the joint conciliation text is not approved, the Parliament may adopt the budget definitively. The Parliament is also responsible for discharging the implementation of previous budgets, based on the annual report of the European Court of Auditors. It has refused to approve the budget only twice, in 1984 and in 1998. On the latter occasion the refusal led to the resignation of the Santer Commission, highlighting how the budgetary power gives Parliament a great deal of leverage over the Commission. Parliament also makes extensive use of its budgetary and other powers elsewhere; for example, in the setting up of the European External Action Service, Parliament has a de facto veto over its design, as it has to approve the budgetary and staffing changes. Control of the executive The President of the European Commission is proposed by the European Council on the basis of the European elections to Parliament. That proposal has to be approved by the Parliament (by a simple majority), which "elects" the President according to the treaties.
Following the approval of the Commission President, the members of the Commission are proposed by the President in accord with the member states. Each Commissioner comes before a relevant parliamentary committee hearing covering the proposed portfolio. They are then, as a body, approved or rejected by the Parliament. In practice, the Parliament has never voted against a President or his Commission, but it did seem likely when the Barroso Commission was put forward. The resulting pressure forced the proposal to be withdrawn and changed to be more acceptable to parliament. That pressure was seen by some as an important sign of the evolving nature of the Parliament and of its ability to hold the Commission accountable, rather than acting as a rubber stamp for candidates. Furthermore, in voting on the Commission, MEPs also voted along party lines, rather than national lines, despite frequent pressure from national governments on their MEPs. This cohesion, and the willingness to use the Parliament's power, ensured greater attention from national leaders, other institutions and the public, even though turnout in the preceding Parliament elections had been the lowest ever. The Parliament also has the power to censure the Commission: a two-thirds majority forces the resignation of the entire Commission from office. As with approval, this power has never been used, but it was threatened against the Santer Commission, which subsequently resigned of its own accord. There are a few other controls, such as: the requirement of the Commission to submit reports to the Parliament and answer questions from MEPs; the requirement of the President-in-Office of the Council to present its programme at the start of its presidency; the obligation on the President of the European Council to report to Parliament after each of its meetings; the right of MEPs to make requests to the Commission for legislation and policy; and the right to question members of those institutions (e.g. "Commission Question Time" every Tuesday). At present, MEPs may ask a question on any topic whatsoever, but in July 2008 MEPs voted to limit questions to those within the EU's mandate and to ban offensive or personal questions. Supervisory powers The Parliament also has other powers of general supervision, mainly granted by the Maastricht Treaty. The Parliament has the power to set up a Committee of Inquiry, for example over mad cow disease or CIA detention flights; the former led to the creation of the European veterinary agency. The Parliament can call on other institutions to answer questions and, if necessary, can take them to court if they break EU law or the treaties. Furthermore, it has powers over the appointment of the members of the Court of Auditors and the president and executive board of the European Central Bank. The ECB president is also obliged to present an annual report to the parliament. The European Ombudsman, who deals with public complaints against all the institutions, is elected by the Parliament. Petitions can also be brought forward by any EU citizen on a matter within the EU's sphere of activities. The Committee on Petitions hears cases, some 1,500 each year, sometimes presented by the citizens themselves at the Parliament. While the Parliament attempts to resolve the issue as a mediator, it does resort to legal proceedings if this is necessary to resolve the citizen's dispute. Members The parliamentarians are known in English as Members of the European Parliament (MEPs).
They are elected every five years by universal adult suffrage and sit according to political allegiance; about one third are women. Before the first direct elections, in 1979, they were appointed by their national parliaments. The Parliament has been criticised for the underrepresentation of minority groups. In 2017, an estimated 17 MEPs were nonwhite, and of these, three were black, a disproportionately low number. According to the activist organisation European Network Against Racism, while an estimated 10% of Europe is composed of racial and ethnic minorities, only 5% of MEPs were members of such groups following the 2019 European Parliament election. Under the Lisbon Treaty, seats are allocated to each state according to population, and the maximum number of members is set at 751 (however, as the President cannot vote while in the chair, there will be only 750 voting members at any one time). Since 1 February 2020, 705 MEPs (including the president of the Parliament) sit in the European Parliament, the reduction in size being due to the United Kingdom leaving the EU. Representation is currently limited to a maximum of 96 seats and a minimum of 6 seats per state, and the seats are distributed according to "degressive proportionality", i.e., the larger the state, the more citizens are represented per MEP. As a result, Maltese and Luxembourgish voters have roughly ten times more influence per voter than citizens of the six largest countries. Germany (80.9 million inhabitants) has 96 seats (previously 99 seats), i.e. one seat for roughly 843,000 inhabitants, while Malta (0.4 million inhabitants) has 6 seats, i.e. one seat for roughly 70,000 inhabitants. The new system implemented under the Lisbon Treaty, including revising the seat allocation well before elections, was intended to avoid political horse-trading when the allocations have to be revised to reflect demographic changes. Pursuant to this apportionment, the constituencies are formed. In four EU member states (Belgium, Ireland, Italy and Poland), the national territory is divided into a number of constituencies. In the remaining member states, the whole country forms a single constituency. All member states hold elections to the European Parliament using various forms of proportional representation. Transitional arrangements Due to the delay in ratifying the Lisbon Treaty, the seventh parliament was elected under the lower Nice Treaty cap. A small-scale treaty amendment was ratified on 29 November 2011. This amendment brought in transitional provisions to allow the 18 additional MEPs created under the Lisbon Treaty to be elected or appointed before the 2014 election. Under the Lisbon Treaty reforms, Germany was the only state to lose members, from 99 to 96; however, these seats were not removed until the 2014 election. Salaries and expenses Before 2009, members received the same salary as members of their national parliament. However, from 2009, after years of attempts, a new Members' Statute came into force, giving all members an equal monthly salary (€8,484.05 each in 2016) that is subject to a European Union tax and can also be taxed nationally. MEPs are entitled to a pension, paid by Parliament, from the age of 63. Members are also entitled to allowances for office costs and subsistence, and travelling expenses, based on actual cost. Besides their pay, members are granted a number of privileges and immunities.
To ensure their free movement to and from the Parliament, they are accorded by their own states the facilities accorded to senior officials travelling abroad and, by other state governments, the status of visiting foreign representatives. When in their own state, they have all the immunities accorded to national parliamentarians, and, in other states, they have immunity from detention and legal proceedings. However, immunity cannot be claimed when a member is found committing a criminal offence, and the Parliament also has the right to strip a member of their immunity. Political groups MEPs in Parliament are organised into eight different parliamentary groups, alongside some thirty non-attached members known as non-inscrits. The two largest groups are the European People's Party (EPP) and the Socialists & Democrats (S&D). These two groups have dominated the Parliament for much of its life, continuously holding between 50 and 70 percent of the seats between them. No single group has ever held a majority in Parliament. As broad alliances of national parties, the European groups are very decentralised and hence have more in common with parties in federal states like Germany or the United States than with parties in unitary states like the majority of EU states. Nevertheless, the European groups were actually more cohesive than their US counterparts between 2004 and 2009. Groups are often based on a single European political party, such as the European People's Party. However, they can, like the liberal group, include more than one European party as well as national parties and independents. For a group to be recognised, it needs 23 MEPs from seven different countries. Groups receive funding from the parliament. Grand coalition Given that the Parliament does not form the government in the traditional sense of a parliamentary system, its politics have developed along more consensual lines, rather than through the majority rule of competing parties and coalitions. Indeed, for much of its life it has been dominated by a grand coalition of the European People's Party and the Party of European Socialists. The two major parties tend to co-operate to find a compromise between their two groups, leading to proposals endorsed by huge majorities. However, this does not always produce agreement, and each may instead try to build other alliances, the EPP normally with other centre-right or right-wing groups and the PES with centre-left or left-wing groups. Sometimes the Liberal group is then in the pivotal position. There are also occasions where very sharp party-political divisions have emerged, for example over the resignation of the Santer Commission. When the initial allegations against the Commission emerged, they were directed primarily against Édith Cresson and Manuel Marín, both socialist members. When the parliament was considering refusing to discharge the Community budget, President Jacques Santer stated that a no vote would be tantamount to a vote of no confidence. The Socialist group supported the Commission and saw the issue as an attempt by the EPP to discredit their party ahead of the 1999 elections. The Socialist leader, Pauline Green MEP, attempted a vote of confidence, and the EPP put forward counter-motions. During this period the two groups took on roles similar to a government-opposition dynamic, with the Socialists supporting the executive and the EPP renouncing its previous coalition support and voting it down.
Politicisation such as this has been increasing; in 2007 Simon Hix of the London School of Economics noted as much. During the fifth term, 1999 to 2004, there was a break in the grand coalition, resulting in a centre-right coalition between the Liberal and People's parties. This was reflected in the Presidency of the Parliament, with the terms being shared between the EPP and the ELDR, rather than the EPP and the Socialists. In the following term the Liberal group grew to hold 88 seats, the largest number of seats held by any third party in Parliament. Elections Elections have taken place, directly in every member state, every five years since 1979; there have been nine elections. When a nation joins mid-term, a by-election is held to elect its representatives. This has happened six times, most recently when Croatia joined in 2013. Elections take place across four days according to local custom and, apart from having to be proportional, the electoral system is chosen by the member state. This includes the allocation of sub-national constituencies; while most member states use a single national list, some, like the UK and Poland, divide their allocation between regions. Seats are allocated to member states according to their population, since 2014 with no state having more than 96, but no fewer than 6, to maintain proportionality. The most recent Union-wide elections to the European
policy matters. Extraordinary meetings also end with official Council conclusions, but differ from the scheduled meetings in that they are not planned more than a year in advance; in 2001, for example, the European Council gathered to lead the European Union's response to the 11 September attacks. Some meetings of the European Council (and, before the European Council was formalised, meetings of the heads of government) are seen by some as turning points in the history of the European Union. For example:
1969, The Hague: foreign policy and enlargement.
1974, Paris: creation of the Council.
1985, Milan: initiation of the IGC leading to the Single European Act.
1991, Maastricht: agreement on the Maastricht Treaty.
1992, Edinburgh: agreement (by treaty provision) to retain at Strasbourg the plenary seat of the European Parliament.
1993, Copenhagen: leading to the definition of the Copenhagen Criteria.
1997, Amsterdam: agreement on the Amsterdam Treaty.
1998, Brussels: selection of the member states to adopt the euro.
1999, Cologne: declaration on military forces.
1999, Tampere: institutional reform.
2000, Lisbon: the Lisbon Strategy.
2002, Copenhagen: agreement for the May 2004 enlargement.
2007, Lisbon: agreement on the Lisbon Treaty.
2009, Brussels: appointment of the first president and of the merged High Representative.
2010: the European Financial Stability Facility.
As such, the European Council had already existed before it gained status as an institution of the European Union with the entry into force of the Treaty of Lisbon, but even after it had been mentioned in the treaties (since the Single European Act) it could only take political decisions, not formal legal acts. However, when necessary, the Heads of State or Government could also meet as the Council of Ministers and take formal decisions in that role. Sometimes, this was even compulsory: Article 214(2) of the Treaty establishing the European Community, for example, provided (before it was amended by the Treaty of Lisbon) that ‘the Council, meeting in the composition of Heads of State or Government and acting by a qualified majority, shall nominate the person it intends to appoint as President of the Commission’; the same rule applied in some monetary policy provisions introduced by the Maastricht Treaty (e.g. Article 109j TEC). In that case, what was politically part of a European Council meeting was legally a meeting of the Council of Ministers. When the European Council, already introduced into the treaties by the Single European Act, became an institution by virtue of the Treaty of Lisbon, this was no longer necessary, and the "Council [of the European Union] meeting in the composition of the Heads of State or Government" was replaced in these instances by the European Council, which now takes formal, legally binding decisions in these cases (Article 15 of the Treaty on European Union). The Treaty of Lisbon made the European Council a formal institution distinct from the (ordinary) Council of the EU, and created the present longer-term and full-time presidency. As an outgrowth of the Council of the EU, the European Council had previously followed the same Presidency, rotating between each member state. While the Council of the EU retains that system, the European Council established, with no change in its powers, a system of appointing an individual (who must not simultaneously be a national leader) for a two-and-a-half-year term, which can be renewed for the same person only once.
Following the ratification of the treaty in December 2009, the European Council elected the then Prime Minister of Belgium, Herman Van Rompuy, as its first permanent president; he resigned as Belgian Prime Minister to take up the post. Powers and functions The European Council is an official institution of the EU, described in the Lisbon Treaty as a body which "shall provide the Union with the necessary impetus for its development". Essentially it defines the EU's policy agenda and has thus been considered to be the motor of European integration. Beyond the need to provide "impetus", the Council has developed further roles: to "settle issues outstanding from discussions at a lower level", to lead in foreign policy (acting externally as a "collective Head of State"), "formal ratification of important documents" and "involvement in the negotiation of the treaty changes". Since the institution is composed of national leaders, it gathers the executive power of the member states and thus has great influence in high-profile policy areas, for example foreign policy. It also exercises powers of appointment, such as the appointment of its own President, the High Representative of the Union for Foreign Affairs and Security Policy, and the President of the European Central Bank. It proposes, to the European Parliament, a candidate for President of the European Commission. Moreover, the European Council influences police and justice planning, the composition of the Commission, matters relating to the organisation of the rotating Council presidency, the suspension of membership rights, and changing the voting systems through the Passerelle Clause. Although the European Council has no direct legislative power, under the "emergency brake" procedure, a state outvoted in the Council of Ministers may refer contentious legislation to the European Council. However, the state may still be outvoted in the European Council. Hence, with powers over the supranational executive of the EU, in addition to its other powers, the European Council has been described by some as the Union's "supreme political authority". Composition The European Council consists of the heads of state or government of the member states, alongside its own President and the Commission President (both non-voting). The meetings used to be regularly attended
by the national foreign minister as well, and the Commission President was likewise accompanied by another member of the Commission. However, since the Treaty of Lisbon, this has been discontinued, as the size of the body had become somewhat large following successive accessions of new Member States to the Union. Meetings can also include other invitees, such as the President of the European Central Bank, as required. The Secretary-General of the Council attends, and is responsible for organisational matters, including minutes. The President of the European Parliament also attends to give an opening speech outlining the European Parliament's position before talks begin. Additionally, the negotiations involve a large number of other people working behind the scenes. Most of those people, however, are not allowed into the conference room, except for two delegates per state to relay messages. At the push of a button, members can also call for advice from a Permanent Representative via the "Antici Group" in an adjacent room. The group is composed of diplomats and assistants who convey information and requests. Interpreters are also required for meetings, as members are permitted to speak in their own languages.
As the composition is not precisely defined, some states which have a considerable division of executive power can find it difficult to decide who should attend the meetings. While an MEP, Alexander Stubb argued that there was no need for the President of Finland to attend Council meetings with or instead of the Prime Minister of Finland (who was responsible for European policy). In 2008, having become Finnish Foreign Minister, Stubb was forced out of the Finnish delegation to the emergency council meeting on the Georgian crisis because the President wanted to attend the high-profile summit as well as the Prime Minister (only two people from each country could attend the meetings). This was despite Stubb being Chair-in-Office of the Organisation for Security and Co-operation in Europe at the time, which was heavily involved in the crisis. Problems also occurred in Poland, where the President of Poland and the Prime Minister of Poland were of different parties and had different foreign policy responses to the crisis. A similar situation arose in Romania between President Traian Băsescu and Prime Minister Călin Popescu-Tăriceanu in 2007–2008, and again in 2012 with Prime Minister Victor Ponta, both of whom opposed the president. Eurozone summits A number of ad hoc meetings of Heads of State or Government of the euro area countries were held in 2010 and 2011 to discuss the sovereign debt crisis. It was agreed in October 2011 that they should meet regularly twice a year (with extra meetings if needed). These summits normally take place at the end of a European Council meeting and follow the same format (chaired by the President of the European Council and including the President of the Commission), but are usually restricted to the (currently 19) Heads of State or Government of countries whose currency is the euro. President The President of the European Council is elected by the European Council by a qualified majority for a once-renewable term of two and a half years. The President must report to the European Parliament after each European Council meeting. The post was created by the Treaty of Lisbon and was subject to a debate over its exact role. Prior to Lisbon, the Presidency rotated in accordance with the Presidency of the Council of the European Union. The role of that President-in-Office was in no sense (other than protocol) equivalent to an office of a head of state, merely a primus inter pares (first among equals) role among other European heads of government. The President-in-Office was primarily responsible for preparing and chairing the Council meetings, and had no executive powers other than the task of representing the Union externally. Now the leader of the country holding the rotating Council Presidency can still act as president when the permanent president is absent. Political alliances Almost all members of the European Council are members of a political party at national level, and most of these are members of a European-level political party or of other alliances such as Renew Europe. These parties frequently hold pre-meetings of their European Council members prior to European Council meetings. However, the European Council is composed to represent the EU's member states rather than political alliances, and decisions are generally made along these lines, though ideological alignment can colour
suicide and termination of life on request". Euthanasia is categorized in different ways, including voluntary, non-voluntary and involuntary. Voluntary euthanasia occurs when a person wills to have their life ended; it is legal in a growing number of countries. Non-voluntary euthanasia occurs when a patient's consent is unavailable; it is legal in some countries under certain limited conditions, in both active and passive forms. Involuntary euthanasia, which is carried out without asking for consent or against the patient's will, is illegal in all countries and is usually considered murder. Euthanasia has become the most active area of research in bioethics. In some countries divisive public controversy occurs over the moral, ethical, and legal issues associated with euthanasia. Passive euthanasia (known as "pulling the plug") is legal under some circumstances in many countries. Active euthanasia, however, is legal or de facto legal in only a handful of countries (for example, Belgium, Canada and Switzerland), which limit it to specific circumstances and require the approval of counselors and doctors or other specialists. In some countries, such as Nigeria, Saudi Arabia and Pakistan, support for active euthanasia is almost non-existent. Definition Like other terms borrowed from history, "euthanasia" has had different meanings depending on usage. The first apparent usage of the term "euthanasia" belongs to the historian Suetonius, who described how the Emperor Augustus, "dying quickly and without suffering in the arms of his wife, Livia, experienced the 'euthanasia' he had wished for." The word "euthanasia" was first used in a medical context by Francis Bacon in the 17th century, to refer to an easy, painless, happy death, during which it was a "physician's responsibility to alleviate the 'physical sufferings' of the body." Bacon referred to an "outward euthanasia" (he used the term "outward" to distinguish it from a spiritual concept, the euthanasia "which regards the preparation of the soul"). In current usage, euthanasia has been defined as the "painless inducement of a quick death". However, it is argued that this approach fails to properly define euthanasia, as it leaves open a number of possible actions which would meet the requirements of the definition but would not be seen as euthanasia. In particular, these include situations where a person kills another, painlessly, but for no reason beyond that of personal gain; or accidental deaths that are quick and painless, but not intentional. Another approach incorporates the notion of suffering into the definition. The definition offered by the Oxford English Dictionary incorporates suffering as a necessary condition, with "the painless killing of a patient suffering from an incurable and painful disease or in an irreversible coma". This approach is included in Marvin Kohl and Paul Kurtz's definition of it as "a mode or act of inducing or permitting death painlessly as a relief from suffering". Counterexamples can be given: such definitions may encompass killing a person suffering from an incurable disease for personal gain (such as to claim an inheritance), and commentators such as Tom Beauchamp and Arnold Davidson have argued that doing so would constitute "murder simpliciter" rather than euthanasia. The third element incorporated into many definitions is that of intentionality – the death must be intended, rather than being accidental, and the intent of the action must be a "merciful death".
Michael Wreen argued that "the principal thing that distinguishes euthanasia from intentional killing simpliciter is the agent's motive: it must be a good motive insofar as the good of the person killed is concerned." Likewise, James Field argued that euthanasia entails a sense of compassion towards the patient, in contrast to the diverse non-compassionate motives of serial killers who work in health care professions. Similarly, Heather Draper speaks to the importance of motive, arguing that "the motive forms a crucial part of arguments for euthanasia, because it must be in the best interests of the person on the receiving end." Definitions such as that offered by the House of Lords Select Committee on Medical Ethics take this path, where euthanasia is defined as "a deliberate intervention undertaken with the express intention of ending a life, to relieve intractable suffering." Beauchamp and Davidson also highlight Baruch Brody's definition: "an act of euthanasia is one in which one person ... (A) kills another person (B) for the benefit of the second person, who actually does benefit from being killed". Draper argued that any definition of euthanasia must incorporate four elements: an agent and a subject; an intention; a causal proximity, such that the actions of the agent lead to the outcome; and an outcome. Based on this, she offered a definition incorporating those elements, stating that euthanasia "must be defined as death that results from the intention of one person to kill another person, using the most gentle and painless means possible, that is motivated solely by the best interests of the person who dies." Prior to Draper, Beauchamp and Davidson had also offered a definition that includes these elements. Their definition specifically discounts fetuses to distinguish between abortions and euthanasia: Wreen, in part responding to Beauchamp and Davidson, offered a six-part definition: Wreen also considered a seventh requirement: "(7) The good specified in (6) is, or at least includes, the avoidance of evil", although, as Wreen noted in the paper, he was not convinced that the restriction was required. In discussing his definition, Wreen noted the difficulty of justifying euthanasia when faced with the notion of the subject's "right to life". In response, Wreen argued that euthanasia has to be voluntary, and that "involuntary euthanasia is, as such, a great wrong". Other commentators incorporate consent more directly into their definitions. For example, in a discussion of euthanasia presented in 2003 by the European Association for Palliative Care (EAPC) Ethics Task Force, the authors offered: "Medicalized killing of a person without the person's consent, whether nonvoluntary (where the person is unable to consent) or involuntary (against the person's will) is not euthanasia: it is murder. Hence, euthanasia can be voluntary only." Although the EAPC Ethics Task Force argued that both non-voluntary and involuntary euthanasia could not be included in the definition of euthanasia, there is discussion in the literature about excluding one but not the other. Classification Euthanasia may be classified into three types, according to whether a person gives informed consent: voluntary, non-voluntary and involuntary. There is a debate within the medical and bioethics literature about whether or not the non-voluntary (and by extension, involuntary) killing of patients can be regarded as euthanasia, irrespective of intent or the patient's circumstances.
In the definitions offered by Beauchamp and Davidson and, later, by Wreen, consent on the part of the patient was not considered one of their criteria, although it may have been required to justify euthanasia. However, others see consent as essential. Voluntary euthanasia Voluntary euthanasia is conducted with the consent of the patient. Active voluntary euthanasia is legal in Belgium, Luxembourg and the Netherlands. Passive voluntary euthanasia is legal throughout the US per Cruzan v. Director, Missouri Department of Health. When the patient brings about their own death with the assistance of a physician, the term assisted suicide is often used instead. Assisted suicide is legal in Switzerland and the U.S. states of California, Oregon, Washington, Montana and Vermont. Non-voluntary euthanasia Non-voluntary euthanasia is conducted when the consent of the patient is unavailable. Examples include child euthanasia, which is illegal worldwide but decriminalised under certain specific circumstances in the Netherlands under the Groningen Protocol. Passive forms of non-voluntary euthanasia (i.e. withholding treatment) are legal in a number of countries under specified conditions. Involuntary euthanasia Involuntary euthanasia is conducted against the will of the patient. Passive and active euthanasia Voluntary, non-voluntary and involuntary types can be further divided into passive or active variants. Passive euthanasia entails the withholding of treatment necessary for the continuance of life. Active euthanasia entails the use of lethal substances or forces (such as administering a lethal injection), and is more controversial. While some authors consider these terms to be misleading and unhelpful, they are nonetheless commonly used. In some cases, such as the administration of increasingly necessary but toxic doses of painkillers, there is a debate about whether to regard the practice as active or passive. History Euthanasia was practiced in Ancient Greece and Rome: for example, hemlock was employed as a means of hastening death on the island of Kea, a technique also employed in Marseilles. Euthanasia, in the sense of the deliberate hastening of a person's death, was supported by Socrates, Plato and Seneca the Elder in the ancient world, although Hippocrates appears to have spoken against the practice, writing "I will not prescribe a deadly drug to please someone, nor give advice that may cause his death" (though there is some debate in the literature about whether or not this was intended to encompass euthanasia). Early modern period The term euthanasia, in the earlier sense of supporting someone as they died, was used for the first time by Francis Bacon. In his work, Euthanasia medica, he chose this ancient Greek word and, in doing so, distinguished between euthanasia interior, the preparation of the soul for death, and euthanasia exterior, which was intended to make the end of life easier and painless, in exceptional circumstances by shortening life. That the ancient meaning of an easy death came to the fore again in the early modern period can be seen from its definition in the 18th-century Zedlers Universallexikon: Euthanasia: a very gentle and quiet death, which happens without painful convulsions. The word comes from ευ, bene, well, and θανατος, mors, death. The concept of euthanasia in the sense of alleviating the process of death goes back to the medical historian Karl Friedrich Heinrich Marx, who drew on Bacon's philosophical ideas.
According to Marx, a doctor had a moral duty to ease the suffering of death through encouragement, support and mitigation using medication. Such an "alleviation of death" reflected the contemporary zeitgeist, but was brought into the medical canon of responsibility for the first time by Marx. Marx also stressed the distinction between the theological care of the soul of sick people and their physical care and medical treatment by doctors. Euthanasia in its modern sense has always been strongly opposed in the Judeo-Christian tradition. Thomas Aquinas opposed both suicide and euthanasia, arguing that the practice of euthanasia contradicted our natural human instincts of survival, as did Francois Ranchin (1565–1641), a French physician and professor of medicine, and Michael Boudewijns (1601–1681), a physician and teacher. Other voices argued for euthanasia, such as John Donne in 1624, and euthanasia continued to be practised. In 1678, the publication of Caspar Questel's De pulvinari morientibus non subtrahendo ("On the pillow of which the dying should not be deprived") initiated debate on the topic. Questel described various customs which were employed at the time to hasten the death of the dying (including the sudden removal of a pillow, which was believed to accelerate death), and argued against their use, as doing so was "against the laws of God and Nature". This view was shared by others who followed, including Philipp Jakob Spener, Veit Riedlin and Johann Georg Krünitz. Despite opposition, euthanasia continued to be practised, involving techniques such as bleeding, suffocation, and removing people from their beds to be placed on the cold ground. Suicide and euthanasia became more accepted during the Age of Enlightenment. Thomas More wrote of euthanasia in Utopia, although it is not clear if More was intending to endorse the practice. Other cultures have taken different approaches: for example, in Japan suicide has not traditionally been viewed as a sin, as it is used in cases of honor, and accordingly, the perceptions of euthanasia are different from those in other parts of the world. Beginnings of the contemporary euthanasia debate In the mid-1800s, the use of morphine to treat "the pains of death" emerged, with John Warren recommending its use in 1848. A similar use of chloroform was revealed by Joseph Bullar in 1866. However, in neither case was it recommended that the use should be to hasten death. In 1870 Samuel Williams, a schoolteacher, initiated the contemporary euthanasia debate through a speech given at the Birmingham Speculative Club in England, which was subsequently published in a one-off publication entitled Essays of the Birmingham Speculative Club, the collected works of a number of members of an amateur philosophical society. Williams' proposal was to use chloroform to deliberately hasten the death of terminally ill patients: The essay was favourably reviewed in The Saturday Review, but an editorial against the essay appeared in The Spectator. From there it proved to be influential, and other writers came out in support of such views: Lionel Tollemache wrote in favour of euthanasia, as did Annie Besant, the essayist and reformer who later became involved with the National Secular Society, considering it a duty to society to "die voluntarily and painlessly" when one reaches the point of becoming a 'burden'. Popular Science analyzed the issue in May 1873, assessing both sides of the argument. Kemp notes that at the time, medical doctors did not participate in the discussion; it was
"essentially a philosophical enterprise ... tied inextricably to a number of objections to the Christian doctrine of the sanctity of human life". Early euthanasia movement in the United States The rise of the euthanasia movement in the United States coincided with the so-called Gilded Age, a time of social and technological change that encompassed an "individualistic conservatism that praised laissez-faire economics, scientific method, and rationalism", along with major depressions, industrialisation and conflict between corporations and labour unions. It was also the period in which the modern hospital system was developed, which has been seen as a factor in the emergence of the euthanasia debate. Robert Ingersoll argued for euthanasia, stating in 1894 that where someone is suffering from a terminal illness, such as terminal cancer, they should have a right to end their pain through suicide. Felix Adler offered a similar approach, although, unlike Ingersoll, Adler did not reject religion. In fact, he argued from an Ethical Culture framework. In 1891, Adler argued that those suffering from overwhelming pain should have the right to commit suicide, and, furthermore, that it should be permissible for a doctor to assist – thus making Adler the first "prominent American" to argue for suicide in cases where people were suffering from chronic illness. Both Ingersoll and Adler argued for voluntary euthanasia of adults suffering from terminal ailments. Dowbiggin argues that by breaking down prior moral objections to euthanasia and suicide, Ingersoll and Adler enabled others to stretch the definition of euthanasia. The first attempt to legalise euthanasia took place in the United States, when Henry Hunt introduced legislation into the General Assembly of Ohio in 1906. Hunt did so at the behest of Anna Sophina Hall, a wealthy heiress who was a major figure in the euthanasia movement during the early 20th century in the United States.
Hall had watched her mother die after an extended battle with liver cancer, and had dedicated herself to ensuring that others would not have to endure the same suffering. Towards this end she engaged in an extensive letter-writing campaign, recruited Lurana Sheldon and Maud Ballington Booth, and organised a debate on euthanasia at the annual meeting of the American Humane Association in 1905 – described by Jacob Appel as the first significant public debate on the topic in the 20th century. Hunt's bill called for the administration of an anesthetic to bring about a patient's death, so long as the person was of lawful age and sound mind and was suffering from a fatal injury, an irrevocable illness, or great physical pain. It also required that the case be heard by a physician, required informed consent in front of three witnesses, and required the attendance of three physicians who had to agree that the patient's recovery was impossible. A motion to reject the bill outright was voted down, but the bill failed to pass, 79 to 23. Along with the Ohio euthanasia proposal, in 1906 Assemblyman Ross Gregory introduced to the Iowa legislature a proposal to permit euthanasia. However, the Iowa legislation was broader in scope than that offered in Ohio. It allowed for the death of any person of at least ten years of age who suffered from an ailment that would prove fatal and cause extreme pain, should they be of sound mind and express a desire to artificially hasten their death. In addition, it allowed for infants to be euthanised if they were sufficiently deformed, and permitted guardians to request euthanasia on behalf of their wards. The proposed legislation also imposed penalties on physicians who refused to perform euthanasia when requested: a 6–12-month prison term and a fine of between $200 and $1,000. The proposal proved to be controversial. It engendered considerable debate and failed to pass, having been withdrawn from consideration after being passed to the Committee on Public Health. After 1906 the euthanasia debate reduced in intensity, resurfacing periodically, but not returning to the same level of debate until the 1930s in the United Kingdom. Euthanasia opponent Ian Dowbiggin argues that the early membership of the Euthanasia Society of America (ESA) reflected how many perceived euthanasia at the time, often seeing it as a eugenics matter rather than an issue concerning individual rights. Dowbiggin argues that not every eugenicist joined the ESA "solely for eugenic reasons", but he postulates that there were clear ideological connections between the eugenics and euthanasia movements. 1930s in Britain The Voluntary Euthanasia Legalisation Society (now called Dignity in Dying) was founded in 1935 by Charles Killick Millard. The movement campaigned for the legalisation of euthanasia in Great Britain. In January 1936, King George V was given a fatal dose of morphine and cocaine to hasten his death. At the time he was suffering from cardio-respiratory failure, and the decision to end his life was made by his physician, Lord Dawson. Although this event was kept a secret for over 50 years, the death of George V coincided with proposed legislation in the House of Lords to legalise euthanasia. Nazi Euthanasia Program A 24 July 1939 killing of a severely disabled infant in Nazi Germany was described in a BBC "Genocide Under the Nazis Timeline" as the first "state-sponsored euthanasia". 
Parties that consented to the killing included Hitler's office, the parents, and the Reich Committee for the Scientific Registration of Serious and Congenitally Based Illnesses. The Telegraph noted that the killing of the disabled infant—whose name was Gerhard Kretschmar, born blind, with missing limbs, subject to convulsions, and reportedly "an idiot"— provided "the rationale for a secret Nazi decree that led to 'mercy killings' of almost 300,000 mentally and physically handicapped people". While Kretchmar's killing received parental consent, most of the 5,000 to 8,000 children killed afterwards were forcibly taken from their parents. The "euthanasia campaign" of mass murder gathered momentum on 14 January 1940 when the "handicapped" were killed with gas vans and killing centres, eventually leading to the deaths of 70,000 adult Germans. Professor Robert Jay Lifton, author of The Nazi Doctors and a leading authority on the T4 program, contrasts this program with what he considers to be a genuine euthanasia. He explains that the Nazi version of "euthanasia" was based on the work of Adolf Jost, who published The Right to Death (Das Recht auf den Tod) in 1895. Lifton writes: Jost argued that control over the death of the individual must ultimately belong to the social organism, the state. This concept is in direct opposition to the Anglo-American concept of euthanasia, which emphasizes the individual's 'right to die' or 'right to death' or 'right to his or her own death,' as the ultimate human claim. In contrast, Jost was pointing to the state's right to kill. ... Ultimately the argument was biological: 'The rights to death [are] the key to the fitness of life.' The state must own death—must kill—in order to keep the social organism alive and healthy. In modern terms, the use of "euthanasia" in the context of Action T4 is seen to be a euphemism to disguise a program of genocide, in which people were killed on the grounds of "disabilities, religious beliefs, and discordant individual values". Compared to the discussions of euthanasia that emerged post-war, the Nazi program may have been worded in terms that appear similar to the modern use of "euthanasia", but there was no "mercy" and the patients were not necessarily terminally ill. Despite these differences, historian and euthanasia opponent Ian Dowbiggin writes that "the origins of Nazi euthanasia, like those of the American euthanasia movement, predate the Third Reich and were intertwined with the history of eugenics and Social Darwinism, and with efforts to discredit traditional morality and ethics." 1949 New York State Petition for Euthanasia and Catholic opposition On 6 January 1949, the Euthanasia Society of America presented to the New York State Legislature a petition to legalize euthanasia, signed by 379 leading Protestant and Jewish ministers, the largest group of religious leaders ever to have taken this stance. A similar petition had been sent to the New York Legislature in 1947, signed by approximately 1,000 New York
Dried-up riverbeds, polar ice caps, volcanoes, and minerals that form in the presence of water have all been found. Nevertheless, present conditions on the subsurface of Mars may support life. Evidence obtained by the Curiosity rover studying Aeolis Palus, Gale Crater in 2013 strongly suggests an ancient freshwater lake that could have been a hospitable environment for microbial life. Current studies on Mars by the Curiosity and Opportunity rovers are searching for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and/or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on Mars is now a primary NASA objective. Ceres Ceres, the only dwarf planet in the asteroid belt, has a thin water-vapor atmosphere. The vapor could have been produced by ice volcanoes or by ice near the surface sublimating (transforming from solid to gas). Nevertheless, the presence of water on Ceres had led to speculation that life may be possible there. It is one of the few places in the Solar System where scientists would like to search for possible signs of life. Although the dwarf planet might not have living things today, there could be signs it harbored life in the past. Jupiter system Jupiter Carl Sagan and others in the 1960s and 1970s computed conditions for hypothetical microorganisms living in the atmosphere of Jupiter. The intense radiation and other conditions, however, do not appear to permit encapsulation and molecular biochemistry, so life there is thought unlikely. In contrast, some of Jupiter's moons may have habitats capable of sustaining life. Scientists have indications that heated subsurface oceans of liquid water may exist deep under the crusts of the three outer Galilean moons—Europa, Ganymede, and Callisto. The EJSM/Laplace mission was planned to determine the habitability of these environments; however, due to lack of funding, the program was not continued. Similar missions, like ESA's JUICE and NASA's Europa Clipper are currently in development and are slated for launch in 2022 and 2024, respectively. Europa Jupiter's moon Europa has been the subject of speculation about the existence of life, due to the strong possibility of a liquid water ocean beneath its ice surface. Hydrothermal vents on the bottom of the ocean, if they exist, may warm the water and could be capable of supplying nutrients and energy to microorganisms. It is also possible that Europa could support aerobic macrofauna using oxygen created by cosmic rays impacting its surface ice. The case for life on Europa was greatly enhanced in 2011 when it was discovered that vast lakes exist within Europa's thick, icy shell. Scientists found that ice shelves surrounding the lakes appear to be collapsing into them, thereby providing a mechanism through which life-forming chemicals created in sunlit areas on Europa's surface could be transferred to its interior. On 11 December 2013, NASA reported the detection of "clay-like minerals" (specifically, phyllosilicates), often associated with organic materials, on the icy crust of Europa. The presence of the minerals may have been the result of a collision with an asteroid or comet, according to the scientists. The Europa Clipper, which would assess the habitability of Europa, is planned for launch in 2024. 
Europa's subsurface ocean is considered the best target for the discovery of life. Saturn system Like Jupiter, Saturn is not likely to host life. However, Titan and Enceladus have been speculated to have possible habitats supportive of life. Enceladus Enceladus, a moon of Saturn, has some of the conditions for life, including geothermal activity and water vapor, as well as possible under-ice oceans heated by tidal effects. The Cassini–Huygens probe detected carbon, hydrogen, nitrogen and oxygen—all key elements for supporting life—during its 2005 flyby through one of Enceladus's geysers spewing ice and gas. The temperature and density of the plumes indicate a warmer, watery source beneath the surface. Of the bodies on which life is thought possible, Enceladus is the one from which living organisms could most easily spread to other bodies of the Solar System. Titan Titan, the largest moon of Saturn, is the only known moon in the Solar System with a significant atmosphere. Data from the Cassini–Huygens mission refuted the hypothesis of a global hydrocarbon ocean, but later demonstrated the existence of liquid hydrocarbon lakes in the polar regions—the first stable bodies of surface liquid discovered outside Earth. Analysis of data from the mission has uncovered aspects of atmospheric chemistry near the surface that are consistent with—but do not prove—the hypothesis that organisms there, if present, could be consuming hydrogen, acetylene and ethane, and producing methane. NASA's Dragonfly mission, a VTOL-capable rotorcraft with a launch date set for 2027, is slated to land on Titan in the mid-2030s. Small Solar System bodies Small Solar System bodies have also been speculated to host habitats for extremophiles. Fred Hoyle and Chandra Wickramasinghe have proposed that microbial life might exist on comets and asteroids. Other bodies Models of heat retention and heating via radioactive decay in smaller icy Solar System bodies suggest that Rhea, Titania, Oberon, Triton, Pluto, Eris, Sedna, and Orcus may have oceans underneath solid icy crusts approximately 100 km thick. Of particular interest in these cases is the fact that the models indicate that the liquid layers are in direct contact with the rocky core, which allows efficient mixing of minerals and salts into the water. This is in contrast with the oceans that may be inside larger icy satellites like Ganymede, Callisto, or Titan, where layers of high-pressure phases of ice are thought to underlie the liquid water layer. Hydrogen sulfide has been proposed as a hypothetical solvent for life and is quite plentiful on Jupiter's moon Io, and may be in liquid form a short distance below the surface. Scientific search The scientific search for extraterrestrial life is being carried out both directly and indirectly. To date, 3,667 exoplanets in 2,747 systems have been identified, and other planets and moons in our own solar system hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. Direct search Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. 
Lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is a more likely hypothesis. In 1996, a controversial report stated that structures resembling nanobacteria were discovered in a meteorite, ALH84001, formed of rock ejected from Mars. In February 2005 NASA scientists reported they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced NASA from the scientists' claims, and Stoker herself backed off from her initial assertions. Though such methane findings are still debated, support among some scientists for the existence of life on Mars exists. In November 2011 NASA launched the Mars Science Laboratory that landed the Curiosity rover on Mars. It is designed to assess the past and present habitability on Mars using a variety of scientific instruments. The rover landed on Mars at Gale Crater in August 2012. The Gaia hypothesis stipulates that any planet with a robust population of life will have an atmosphere in chemical disequilibrium, which is relatively easy to determine from a distance by spectroscopy. However, significant advances in the ability to find and resolve light from smaller rocky worlds near their stars are necessary before such spectroscopic methods can be used to analyze extrasolar planets. To that effect, the Carl Sagan Institute was founded in 2014 and is dedicated to the atmospheric characterization of exoplanets in circumstellar habitable zones. Planetary spectroscopic data will be obtained from telescopes like WFIRST and ELT. In August 2011, findings by NASA, based on studies of meteorites found on Earth, suggest DNA and RNA components (adenine, guanine and related organic molecules), building blocks for life as we know it, may be formed extraterrestrially in outer space. In October 2011, scientists reported that cosmic dust contains complex organic matter ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. One of the scientists suggested that these compounds may have been related to the development of life on Earth and said that, "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. Indirect search Projects such as SETI are monitoring the galaxy for electromagnetic interstellar communications from civilizations on other worlds. If there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth or that this information could be interpreted as such by humans. 
The length of time required for a signal to travel across the vastness of space means that any signal detected would come from the distant past. The presence of heavy elements in a star's light spectrum is another potential biosignature; such elements would (in theory) be found if the star were being used as an incinerator/repository for nuclear waste products. Extrasolar planets Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992 over four thousand exoplanets have been discovered. The extrasolar planets so far discovered range in size from that of terrestrial planets similar to Earth's size to that of gas giants larger than Jupiter. The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located from Earth in the southern constellation of Centaurus. The least massive exoplanet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1-491201 b, about 29 times the mass of Jupiter, although according to most definitions of a planet, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. This replenishment occurs on Earth through photosynthetic organisms. One way to analyze the atmosphere of an exoplanet is through spectroscopy when it transits its star, though this might only be feasible with dim stars like white dwarfs. Terrestrial analysis The science of astrobiology considers life on Earth as well, and in the broader astronomical context. In 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia, when the young Earth was about 400 million years old. According to one of the researchers, "If life arose relatively quickly on Earth, then it could be common in the universe." Drake equation In 1961, University of California, Santa Cruz, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. 
The equation is best understood not as an equation in the strictly mathematical sense, but as a summary of all the various concepts which scientists must contemplate when considering the question of life elsewhere. The Drake equation is:

N = R* × fp × ne × fl × fi × fc × L

where:
N = the number of Milky Way galaxy civilizations already capable of communicating across interplanetary space
R* = the average rate of star formation in our galaxy
fp = the fraction of those stars that have planets
ne = the average number of planets that can potentially support life
fl = the fraction of planets that actually support life
fi = the fraction of planets with life that evolves to become intelligent life (civilizations)
fc = the fraction of civilizations that develop a technology to broadcast detectable signs of their existence into space
L = the length of time over which such civilizations broadcast detectable signals into space

Drake proposed estimates for each factor, but the numbers on the right side of the equation are agreed to be speculative and open to substitution. The Drake equation has proved controversial since several of its factors are uncertain and based on conjecture, not allowing conclusions to be made. This has led critics to label the equation a guesstimate, or even meaningless. Based on observations from the Hubble Space Telescope, there are between 125 and 250 billion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like
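To make the multiplication concrete, the short sketch below evaluates the equation for one set of purely hypothetical factor values; the function name and every number are placeholders chosen for illustration, not estimates from Drake or from this article.

# Illustrative sketch only: every factor value below is a placeholder.
def drake_n(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    # Multiply the seven Drake factors together to obtain N.
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Example: one star formed per year, half of stars with planets, two
# potentially habitable planets per such star, optimistic biological and
# technological fractions, and signals broadcast for 1,000 years.
n = drake_n(r_star=1.0, f_p=0.5, n_e=2.0, f_l=1.0, f_i=0.1, f_c=0.1, lifetime=1000)
print(n)  # prints 10.0

Because the relation is a simple product, changing any single factor rescales N proportionally, which is why the speculative factors noted above can swing estimates of N by many orders of magnitude.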
Earth to start life here, or sent from Earth to seed new stellar systems with life. The Nobel prize winner Francis Crick, along with Leslie Orgel, proposed that seeds of life may have been purposely spread by an advanced extraterrestrial civilization, but considering an early "RNA world" Crick noted later that life may have originated on Earth. Mercury The spacecraft MESSENGER found evidence of water ice on Mercury. There may be scientific support, based on studies reported in March 2020, for considering that parts of the planet Mercury may have been habitable, and perhaps that life forms, albeit likely primitive microorganisms, may have existed on the planet. Venus In the early 20th century, Venus was considered to be similar to Earth for habitability, but observations since the beginning of the Space Age revealed that the Venusian surface temperature is around , making it inhospitable for Earth-like life. Likewise, the atmosphere of Venus is almost completely carbon dioxide, which can be toxic to Earth-like life. Between the altitudes of 50 and 65 kilometers, the pressure and temperature are Earth-like, and it may accommodate thermoacidophilic extremophile microorganisms in the acidic upper layers of the Venusian atmosphere. Furthermore, Venus likely had liquid water on its surface for at least a few million years after its formation. In September 2020, a paper was published announcing the detection of phosphine in Venus's atmosphere in concentrations that, at the time of publication, could not be explained by known abiotic processes in the Venusian environment. Although lightning strikes and other geo-chemical sources are insufficient in explaining this phosphine detection, volcanic activity may still prove to be an adequate source of phosphine as phosphides found in the deep mantle could react with sulfuric acid in the atmosphere's aerosol layer. The Moon Humans have been speculating about life on the Moon since antiquity. One of the early scientific inquires into the topic appeared in an 1878 Scientific American article entitled "Is the Moon Inhabited?" Decades later a 1939 essay by Winston Churchill concluded that the Moon is unlikely to harbour life, due to the lack of an atmosphere. 3.5 to 4 billion years ago, the Moon could have had a magnetic field, an atmosphere, and liquid water sufficient to sustain life on its surface. Warm and pressurized regions in the Moon's interior might still contain liquid water. Several species of terrestrial life were briefly brought to the Moon, including humans, cotton plants, and tardigrades. As of 2021, no native lunar life has been found, including any signs of life in the samples of Moon rocks and soil. Mars Life on Mars has been long speculated. Liquid water is widely thought to have existed on Mars in the past, and now can occasionally be found as low-volume liquid brines in shallow Martian soil. The origin of the potential biosignature of methane observed in the atmosphere of Mars is unexplained, although hypotheses not involving life have been proposed. There is evidence that Mars had a warmer and wetter past: Dried-up riverbeds, polar ice caps, volcanoes, and minerals that form in the presence of water have all been found. Nevertheless, present conditions on the subsurface of Mars may support life. Evidence obtained by the Curiosity rover studying Aeolis Palus, Gale Crater in 2013 strongly suggests an ancient freshwater lake that could have been a hospitable environment for microbial life. 
event-driven approach for designing information systems, developed by Colette Rolland. This methodology integrates behavioral and temporal aspects with concepts for modelling the structural aspects of an information system. The ESPRIT I project TODOS led to the development of an integrated environment for the design of office information systems (OISs). SAMPA: The Speech Assessment Methods Phonetic Alphabet (SAMPA) is a computer-readable phonetic script originally developed in the late 1980s. SCOPES: The Systematic Concurrent design of Products, Equipments and Control Systems project was a 3-year project launched in July 1992, with the aim of specifying integrated computer-aided design (CAD) tools for the design and control of flexible assembly lines. SIP (Advanced Algorithms and Architectures for Speech and Image Processing): a partnership between Thomson-CSF, AEG, CSELT and ENSPS (ESPRIT P26) to develop the algorithmic and architectural techniques required for recognizing and understanding spoken or visual signals and to demonstrate these techniques in suitable applications. StatLog: "ESPRIT project 5170. Comparative testing and evaluation of statistical and logical learning algorithms on large-scale applications to classification, prediction and control". SUNDIAL (Speech UNderstanding DIALgue) started in September 1988 with Logica Ltd. as prime contractor, together with Erlangen University, CSELT, Daimler-Benz, Capgemini and Politecnico di Torino. It followed the ESPRIT P26 project to implement and evaluate dialogue systems to be used in the telephone industry. The final results were four prototypes in four languages, involving speech and understanding technologies, and
Directorate General for Industry (DG III) of the European Commission. Programmes Five ESPRIT programmes (ESPRIT 0 to ESPRIT 4) ran consecutively from 1983 to 1998. ESPRIT 4 was succeeded by the Information Society Technologies (IST) programme in 1999. Projects Some of the projects and products supported by ESPRIT were: BBC Domesday Project, a partnership between Acorn Computers Ltd, Philips, Logica and the BBC with some funding from the European Commission's ESPRIT programme, to mark the 900th anniversary of the original Domesday Book, an 11th-century census of England. It is frequently cited as an example of digital obsolescence on account of the physical medium used for data storage. CGAL, the Computational Geometry Algorithms Library (CGAL) is a software library that aims to provide easy access to efficient and reliable algorithms in computational geometry. While primarily written in C++, Python bindings are also available. The original funding for the project came from the ESPRIT project. Eurocoop & Eurocode: ESPRIT III projects to develop systems for supporting distributed collaborative working. Open Document Architecture, a free and open international standard document file format maintained by the ITU-T to replace all proprietary document file formats. In 1985 ESPRIT financed a pilot implementation of the ODA concept, involving, among others, Bull corporation, Olivetti, ICL and Siemens AG. Paradise: A sub-project of the ESPRIT I project, COSINE which established
the many other children who lived in his neighborhood. He grew up in the company of such family friends as the philosophers William James and Josiah Royce. Many of Cummings's summers were spent on Silver Lake in Madison, New Hampshire, where his father had built two houses along the eastern shore. The family ultimately purchased the nearby Joy Farm where Cummings had his primary summer residence. He expressed transcendental leanings his entire life. As he matured, Cummings moved to an "I, Thou" relationship with God. His journals are replete with references to "le bon Dieu", as well as prayers for inspiration in his poetry and artwork (such as "Bon Dieu! may i some day do something truly great. amen."). Cummings "also prayed for strength to be his essential self ('may I be I is the only prayer—not may I be great or good or beautiful or wise or strong'), and for relief of spirit in times of depression ('almighty God! I thank thee for my soul; & may I never die spiritually into a mere mind through disease of loneliness')". Cummings wanted to be a poet from childhood and wrote poetry daily from age 8 to 22, exploring assorted forms. He graduated from Harvard University with a Bachelor of Arts degree magna cum laude and Phi Beta Kappa in 1915 and received a Master of Arts degree from the university in 1916. In his studies at Harvard, he developed an interest in modern poetry, which ignored conventional grammar and syntax, while aiming for a dynamic use of language. Upon graduating, he worked for a book dealer. War years In 1917, with the First World War ongoing in Europe, Cummings enlisted in the Norton-Harjes Ambulance Corps. On the boat to France, he met William Slater Brown and they would become friends. Due to an administrative error, Cummings and Brown did not receive an assignment for five weeks, a period they spent exploring Paris. Cummings fell in love with the city, to which he would return throughout his life. During their service in the ambulance corps, the two young writers sent letters home that drew the attention of the military censors. They were known to prefer the company of French soldiers over fellow ambulance drivers. The two openly expressed anti-war views; Cummings spoke of his lack of hatred for the Germans. On September 21, 1917, five months after starting his belated assignment, Cummings and William Slater Brown were arrested by the French military on suspicion of espionage and undesirable activities. They were held for three and a half months in a military detention camp at the Dépôt de Triage, in La Ferté-Macé, Orne, Normandy. They were imprisoned with other detainees in a large room. Cummings's father failed to obtain his son's release through diplomatic channels, and in December 1917 he wrote a letter to President Woodrow Wilson. Cummings was released on December 19, 1917, and Brown was released two months later. Cummings used his prison experience as the basis for his novel, The Enormous Room (1922), about which F. Scott Fitzgerald said, "Of all the work by young men who have sprung up since 1920 one book survives—The Enormous Room by e e cummings ... Those few who cause books to live have not been able to endure the thought of its mortality." Cummings returned to the United States on New Year's Day 1918. Later in 1918 he was drafted into the army. He served a training deployment in the 12th Division at Camp Devens, Massachusetts, until November 1918. Post-war years Cummings returned to Paris in 1921 and lived there for two years before returning to New York. 
His collection Tulips and Chimneys was published in 1923, and his inventive use of grammar and syntax is evident. The book was heavily cut by his editor. XLI Poems was published in 1925. With these collections, Cummings made his reputation as an avant-garde poet. During the rest of the 1920s and 1930s, Cummings returned to Paris a number of times, and traveled throughout Europe, meeting, among others, artist Pablo Picasso. In 1931 Cummings traveled to the Soviet Union, recounting his experiences in Eimi, published two years later. During these years Cummings also traveled to Northern Africa and Mexico. He worked as an essayist and portrait artist for Vanity Fair magazine (1924–1927). In 1926, Cummings's parents were in a car crash; only his mother survived, although she was severely injured. Cummings later described the crash in the following passage from his i: six nonlectures series given at Harvard (as part of the Charles Eliot Norton Lectures) in 1952 and 1953: His father's death had a profound effect on Cummings, who entered a new period in his artistic life. He began to focus on more important aspects of life in his poetry. He started this new period by paying homage to his father in the poem "my father moved through dooms of love". In the 1930s Samuel Aiwaz Jacobs was Cummings's publisher; he had started the Golden Eagle Press after working as a typographer and publisher. Final years In 1952, his alma mater, Harvard University, awarded Cummings an honorary seat as a guest professor. The Charles Eliot Norton Lectures he gave in 1952 and 1953 were later collected as i: six nonlectures. Cummings spent the last decade of his life traveling, fulfilling speaking engagements, and spending time at his summer home, Joy Farm, in Silver Lake, New Hampshire. He died of a stroke on September 3, 1962, at the age of 67 at Memorial Hospital in North Conway, New Hampshire. Cummings was buried at Forest Hills Cemetery in Boston, Massachusetts. At the time of his death, Cummings was recognized as the "second most widely read poet in the United States, after Robert Frost". Cummings's papers are held at the Houghton Library at Harvard University and the Harry Ransom Center at the University of Texas at Austin. Personal life Marriages Cummings was married briefly twice, first to Elaine Thayer, then to Anne Minnerly Barton. His longest relationship lasted more than three decades with Marion Morehouse. In 2020, it was revealed that in 1917, before his first marriage, Cummings had shared several passionate love letters with a Parisian prostitute, Marie Louise Lallemand. Despite Cummings's efforts, he was unable to find Lallemand upon his return to Paris after the front. Cummings's first marriage, to Elaine Orr, began as a love affair in 1918 while she was still married to Scofield Thayer, one of Cummings's friends from Harvard. During this time he wrote a good deal of his erotic poetry. After divorcing Thayer, Orr married Cummings on March 19, 1924. The couple had a daughter together out of wedlock. However, the couple separated after two months of marriage and divorced less than nine months later. Cummings married his second wife Anne Minnerly Barton on May 1, 1929. They separated three years later in 1932. That same year, Minnerly obtained a Mexican divorce; it was not officially recognized in the United States until August 1934. Anne died in 1970 aged 72. In 1934, after his separation from his second wife, Cummings met Marion Morehouse, a fashion model and photographer. 
Although it is not clear whether the two were ever formally married, Morehouse lived with Cummings until his death in 1962. She died on May 18, 1969, while living at 4 Patchin Place, Greenwich Village, New York City, where Cummings had resided since September 1924. Political views According to his testimony in EIMI, Cummings had little interest in politics until his trip to the Soviet Union in 1931. He subsequently shifted rightward on many political and social issues. Despite his radical and bohemian public image, he was a Republican and later an ardent supporter of Joseph McCarthy. Work Poetry Despite Cummings's familiarity with avant-garde styles (likely affected by the Calligrammes of French poet Apollinaire, according to a contemporary observation), much of his work is quite traditional. Many of his poems are sonnets, albeit often with a modern twist. He occasionally used the blues form and acrostics. Cummings's poetry often deals with themes of love and nature, as well as the relationship of the individual to the masses and to the world. His poems are also often rife with satire. While his poetic forms and themes share an affinity with the Romantic tradition, Cummings's work universally shows a particular idiosyncrasy of syntax, or way of arranging individual words into larger phrases and sentences. Many of his most striking poems do not involve any typographical or punctuation innovations at all, but purely syntactic ones. As well as being influenced by notable modernists, including Gertrude Stein and Ezra Pound, Cummings in his early work drew upon the imagist experiments of Amy Lowell. Later, his visits to Paris exposed him to Dada and Surrealism, which he reflected in his work. He began to rely on symbolism and allegory, where he once had used simile and metaphor. In his later work, he rarely used comparisons that required objects that were not previously mentioned in the poem, choosing to use a symbol instead. Due to this, his later poetry is "frequently more lucid, more moving, and more profound than his earlier". Cummings also liked to incorporate imagery of nature and death into much of his poetry. While some of his poetry is free verse (with no concern for rhyme or meter), many have a recognizable sonnet structure of 14 lines, with an intricate rhyme scheme. A number of his poems feature a typographically exuberant style, with words, parts of words, or punctuation symbols scattered across the page, often making little sense until read aloud, at which point the meaning and emotion become clear. Cummings, who was also a painter, understood the importance of presentation, and used typography to "paint a picture" with some of his poems. The seeds of Cummings's unconventional style appear well established even in his earliest work. At age six, he wrote to his father: Following his autobiographical novel, The Enormous Room, Cummings's first published work was a collection of poems titled Tulips and Chimneys (1923). This work was the public's first encounter with his characteristically eccentric use of grammar and punctuation. Some of Cummings's most famous poems do not involve much, if any, idiosyncratic typography or punctuation, but they still carry his unmistakable style, particularly in unusual and impressionistic word order. Cummings's works often do not follow the conventional rules that generate typical English sentences (for example, "they sowed their isn't"). 
In addition, a number of Cummings's poems feature, in part or in whole, intentional misspellings, and several incorporate phonetic spellings intended to represent particular dialects. Cummings also made use of inventive formations of compound words, as in his poem "in Just", which features words such as "mud-luscious", "puddle-wonderful", and "eddieandbill". This poem is part of a sequence of poems titled Chansons Innocentes; it has many references comparing the "balloonman" to Pan, the mythical creature that is half-goat and half-man. Literary critic R.P. Blackmur has commented that this use of language is "frequently unintelligible because [Cummings] disregards the historical accumulation of meaning in words in favour of merely private and personal associations". Fellow poet Edna St. Vincent Millay, in her equivocal letter recommending Cummings for the Guggenheim Fellowship he was awarded in 1934,
expressed her frustration at his opaque symbolism. "[I]f he prints and offers for sale poetry which he is quite content should
Corps of Engineers to continue Maillefert's work, but the money was soon spent without appreciable change in the hazards of navigating the strait. An advisory council recommended in 1856 that the strait be cleared of all obstacles, but nothing was done, and the Civil War soon broke out. In the late 1860s, after the Civil War, Congress realized the military importance of having easily navigable waterways, and charged the Army Corps of Engineers with clearing Hell Gate of the rocks there that caused a danger to navigation. The Corps' Colonel John Newton estimated that the project would cost $1 million, as compared to the approximate annual loss in shipping of $2 million. Initial forays floundered, and Newton, by that time a general, took over direct control of the project. In 1868 Newton decided, with the support of both New York's mercantile class and local real estate interests, to focus on the Hallett's Point Reef off of Queens. The project would involve of tunnels equipped with trains to haul debris out as the reef was eviscerated, creating a reef structured like "Swiss cheese" which Newton would then blow up. After seven years of digging seven thousand holes, and filling four thousand of them with of dynamite, on September 24, 1876, in front of an audience of people including the inhabitants of the insane asylum on Wards Island, but not the prisoners of Roosevelt Island – then called Blackwell's Island – who remained in their cells, Newton's daughter set off the explosion. The effect was immediate: decreased turbulence through the strait, and fewer accidents and shipwrecks. The city's Chamber of Commerce commented that "The Centennial year will be for ever known in the annals of commerce for this destruction of one of the terrors of navigation." Clearing out the debris from the explosion took until 1891. Then, in 1885, Flood Rock, a reef that Newton had begun to undermine even before starting on Hallett's Point, removing of rock from the reef, was blown up as well, with Civil War General Philip Sheridan and abolitionist Henry Ward Beecher among those in attendance, and Newton's daughter once more setting off the blast, the biggest ever to that date, and reportedly the largest man-made explosion until the advent of the atomic bomb, although the detonation at the Battle of Messines in 1917 was several times larger. Two years later, plans were in place to dredge Hell Gate to a consistent depth of . At the same time that Hell Gate was being cleared, the Harlem River Ship Canal was being planned. When it was completed in 1895, the "back door" to New York's center of ship-borne trade in the docks and warehouses of the East River was open from two directions, through the cleared East River, and from the Hudson River through the Harlem River to the East River. Ironically, though, while both forks of the northern shipping entrance to the city were now open, modern dredging techniques had cut through the sandbars of the Atlantic Ocean entrance, allowing new, even larger ships to use that traditional passage into New York's docks. At the beginning of the 19th century, the East River was the center of New York's shipping industry, but by the end of the century, much of it had moved to the Hudson River, leaving the East River wharves and slips to begin a long process of decay, until the area was finally rehabilitated in the mid-1960s, and the South Street Seaport Museum was opened in 1967. 
A new seawall By 1870, the condition of the Port of New York along both the East and Hudson Rivers had so deteriorated that the New York State legislature created the Department of Docks to renovate the port and keep New York competitive with other ports on the American East Coast. The Department of Docks was given the task of creating the master plan for the waterfront, and General George B. McClellan was engaged to head the project. McClellan held public hearings and invited plans to be submitted, ultimately receiving 70 of them, although in the end he and his successors put his own plan into effect. That plan called for the building of a seawall around Manhattan island from West 61st Street on the Hudson, around The Battery, and up to East 51st Street on the East River. The area behind the masonry wall (mostly concrete but in some parts granite blocks) would be filled in with landfill, and wide streets would be laid down on the new land. In this way, a new edge for the island (or at least the part of it used as a commercial port) would be created. The department had surveyed of shoreline by 1878, as well as documenting the currents and tides. By 1900, had been surveyed and core samples had been taken to inform the builders of how deep the bedrock was. The work was completed just as World War I began, allowing the Port of New York to be a major point of embarkation for troops and materiel. The new seawall helps protect Manhattan island from storm surges, although it is only above the mean sea level, so particularly dangerous storms, such as the nor'easter of 1992 and Hurricane Sandy in 2012, which hit the city in ways that created much higher surges, can still do significant damage. (The Hurricane of September 3, 1821 created the biggest storm surge on record in New York City: a rise of in one hour at the Battery, flooding all of lower Manhattan up to Canal Street.) Still, the new seawall begun in 1871 gave the island a firmer edge, improved the quality of the port, and continues to protect Manhattan from normal storm surges. Bridges and tunnels The Brooklyn Bridge, completed in 1883, was the first bridge to span the East River, connecting the cities of New York and Brooklyn, and all but replacing the frequent ferry service between them, which did not return until the late 20th century. The bridge offered cable car service across the span. The Brooklyn Bridge was followed by the Williamsburg Bridge (1903), the Queensboro Bridge (1909), the Manhattan Bridge (1912) and the Hell Gate Railroad Bridge (1916). Later would come the Triborough Bridge (1936), the Bronx-Whitestone Bridge (1939), the Throgs Neck Bridge (1961) and the Rikers Island Bridge (1966). In addition, numerous rail tunnels pass under the East River – most of them part of the New York City Subway system – as do the Brooklyn-Battery Tunnel and the Queens-Midtown Tunnel. (See Crossings below for details.) Also under the river is Water Tunnel #1 of the New York City water supply system, built in 1917 to extend the Manhattan portion of the tunnel to Brooklyn, and via City Tunnel #2 (1936) to Queens; these boroughs became part of New York City after the city's consolidation in 1898. City Tunnel #3 will also run under the river, under the northern tip of Roosevelt Island, and is expected to be completed by 2018; the Manhattan portion of the tunnel went into service in 2013. 20th and 21st centuries Philanthropist John D. 
Rockefeller founded what is now Rockefeller University in 1901, between 63rd and 64th Streets on the river side of York Avenue, overlooking the river. The university is a research institution for doctoral and post-doctoral scholars, primarily in the fields of medicine and biological science. North of it is one of the major medical centers in the city, NewYork-Presbyterian / Weill Cornell Medical Center, which is associated with the medical schools of both Columbia University and Cornell University. Although it can trace its history back to 1771, the center on York Avenue, much of which overlooks the river, was built in 1932. The East River was the site of one of the greatest disasters in the history of New York City when, in June 1904, the PS General Slocum sank near North Brother Island due to a fire. It was carrying 1,400 German-Americans to a picnic site on Long Island for an annual outing. There were only 321 survivors of the disaster, one of the worst losses of life in the city's long history, and a devastating blow to the Little Germany neighborhood on the Lower East Side. The captain of the ship and the managers of the company that owned it were indicted, but only the captain was convicted; he spent three and a half years of his 10-year sentence at Sing Sing Prison before being released by a federal parole board, and was then pardoned by President William Howard Taft. Beginning in 1934, and then again from 1948 to 1966, the Manhattan shore of the river became the location for the limited-access East River Drive, which was later renamed after Franklin Delano Roosevelt, and is universally known by New Yorkers as the "FDR Drive". The road is sometimes at grade, sometimes runs under locations such as the site of the Headquarters of the United Nations, Carl Schurz Park and Gracie Mansion – the mayor's official residence – and is at times double-decked, because Hell Gate provides no room for more landfill. It begins at Battery Park, runs past the Brooklyn, Manhattan, Williamsburg and Queensboro Bridges, and the Ward's Island Footbridge, and terminates just before the Robert F. Kennedy Triborough Bridge, where it connects to the Harlem River Drive. Between most of the FDR Drive and the river is the East River Greenway, part of the Manhattan Waterfront Greenway. The East River Greenway was primarily built in connection with the building of the FDR Drive, although some portions were built as recently as 2002, and other sections are still incomplete. In 1963, Con Edison built the Ravenswood Generating Station on the Long Island City shore of the river, on land some of which was once stone quarries that provided granite and marble slabs for Manhattan's buildings. The plant has since been owned by KeySpan, National Grid and TransCanada, as a result of the deregulation of the electrical power industry. The station, which can generate about 20% of the electrical needs of New York City – approximately 2,500 megawatts – receives some of its fuel by oil barge. North of the power plant can be found Socrates Sculpture Park, an illegal dumpsite and abandoned landfill that in 1986 was turned into an outdoor museum, exhibition space for artists, and public park by sculptor Mark di Suvero and local activists. The area also contains Rainey Park, which honors Thomas C. Rainey, who attempted for 40 years to get a bridge built in that location from Manhattan to Queens. The Queensboro Bridge was eventually built south of this location. In 2011, NY Waterway started operating its East River Ferry line. 
The route was a 7-stop East River service that ran in a loop between East 34th Street and Hunters Point, making two intermediate stops in Brooklyn and three in Queens. The ferry, an alternative to the New York City Subway, cost $4 per one-way ticket. It was instantly popular: from June to November 2011, the ferry saw 350,000 riders, over 250% of the initial ridership forecast of 134,000 riders. In December 2016, in preparation for the start of NYC Ferry service the next year, Hornblower Cruises purchased the rights to operate the East River Ferry. NYC Ferry started service on May 1, 2017, with the East River Ferry as part of the system. In February 2012 the federal government announced an agreement with Verdant Power to install 30 tidal turbines in the channel of the East River. The turbines were projected to begin operations in 2015 and were expected to produce 1.05 megawatts of power. The strength of the current foiled an earlier effort in 2007 to tap the river for tidal power. On May 7, 2017, the catastrophic failure of a Con Edison substation in Brooklyn caused a spill into the river of over of dielectric fluid, a synthetic mineral oil used to cool electrical equipment and prevent electrical discharges. (See below.) Ecosystem collapse, pollution and health Throughout most of the history of New York City, and New Amsterdam before it, the East River has been the receptacle for the city's garbage and sewage. "Night men" who collected "night soil" from outdoor privies would dump their loads into the river, and even after the construction of the Croton Aqueduct (1842) and then the New Croton Aqueduct (1890) gave rise to indoor plumbing, the waste that was flushed away into the sewers, where it mixed with ground runoff, ran directly into the river, untreated. The sewers terminated at the slips where ships docked, until the waste began to build up, preventing dockage, after which the outfalls were moved to the end of the piers. The "landfill" which created new land along the shoreline when the river was "wharfed out" by the sale of "water lots" was largely garbage such as bones, offal, and even whole dead animals, along with excrement – human and animal. The result was that by the 1850s, if not before, the East River, like the other waterways around the city, was undergoing the process of eutrophication, in which the increase in nitrogen from excrement and other sources led to a decrease in free oxygen, which in turn led to an increase in phytoplankton such as algae and a decrease in other life forms, breaking the area's established food chain. The East River became very polluted, and its animal life decreased drastically. In an earlier time, one person had described the transparency of the water: "I remember the time, gentlemen, when you could go in twelve feet of water and you could see the pebbles on the bottom of this river." As the water got more polluted, it darkened, underwater vegetation (such as photosynthesizing seagrass) began dying, and as the seagrass beds declined, the many associated species of their ecosystems declined as well, contributing to the decline of the river. 
Also harmful was the general destruction of the once plentiful oyster beds in the waters around the city, and the over-fishing of menhaden, or mossbunker, a small silvery fish which had been used since the time of the Native Americans for fertilizing crops. However, it took 8,000 of these schooling fish to fertilize a single acre, so mechanized fishing using the purse seine was developed, and eventually the menhaden population collapsed. Menhaden feed on phytoplankton, helping to keep them in check, and are also a vital step in the food chain, as bluefish, striped bass and other fish species which do not eat phytoplankton feed on the menhaden. The oyster is another filter feeder: oysters purify 10 to 100 gallons a day, while each menhaden filters four gallons in a minute. The menhaden schools were immense: one report had a farmer collecting 20 oxcarts' worth of menhaden using simple fishing nets deployed from the shore. The combination of more sewage – due to the availability of more potable water (New York's water consumption per capita was twice that of Europe) and indoor plumbing – with the destruction of filter feeders and the collapse of the food chain damaged the ecosystem of the waters around New York, including the East River, almost beyond repair. Because of these changes to the ecosystem, by 1909 the level of dissolved oxygen in the lower part of the river had declined to less than 65% of saturation; 55% of saturation is the point at which the number of fish and the number of their species begin to be affected. Only 17 years later, by 1926, the level of dissolved oxygen in the river had fallen to 13%, below the point at which most fish species can survive. Due to heavy pollution, the East River is dangerous to people who fall in or attempt to swim in it, although as of mid-2007 the water was cleaner than it had been in decades. The New York City Department of Environmental Protection (DEP) categorizes the East River as Use Classification I, meaning it is safe for secondary contact activities such as boating and fishing. According to the marine sciences section of the DEP, the channel is swift, with water moving as fast as four knots, just as it does in the Hudson River on the other side of Manhattan. That speed can push casual swimmers out to sea. A few people drown in the waters around New York City each year. It was reported that the level of bacteria in the river was below federal guidelines for swimming on most days, although the readings may vary significantly, so that the outflow from Newtown Creek or the Gowanus Canal can be tens or hundreds of times higher than recommended, according to Riverkeeper, a non-profit environmentalist advocacy group. The counts are also higher along the shores of the strait than they are in the middle of its flow. Nevertheless, the "Brooklyn Bridge Swim" is an annual event in which swimmers cross the channel from Brooklyn Bridge Park to Manhattan. Still, thanks to reductions in pollution, cleanups, the restriction of development, and other environmental controls, the East River along Manhattan is one of the areas of New York's waterways – including the Hudson-Raritan Estuary and both shores of Long Island – which have shown signs of the return of biodiversity. On the other hand, the river is also under attack from hardy, competitive, alien species, such as the European green crab, which is present in the river and is considered to be one of the world's ten worst invasive species. 
2017 oil spill On May 7, 2017, the catastrophic failure of Con Edison's Farragut Substation at 89 John Street in Dumbo, Brooklyn, caused a spill of dielectric fluid – an insoluble synthetic mineral oil, considered non-toxic by New York state, used to cool electrical equipment and prevent electrical discharges – into the East River from a tank. The National Response Center received a report of the spill at 1:30pm that day, although the public did not learn of the spill for two days, and then only from tweets from NYC Ferry. A "safety zone" was established, extending from a line drawn between Dupont Street in Greenpoint, Brooklyn, to East 25th Street in Kips Bay, Manhattan, south to Buttermilk Channel. Recreational and human-powered vessels such as kayaks and paddleboards were banned from the zone while the oil was being cleaned up, and the speed of commercial vessels was restricted so as not to spread the oil in their wakes, causing delays in NYC Ferry service. The clean-up efforts were undertaken by Con Edison personnel and private environmental contractors, the U.S. Coast Guard, and the New York State Department of Environmental Conservation, with the assistance of NYC Emergency Management. The loss of the substation caused a voltage dip in the power provided by Con Ed to the Metropolitan Transportation Authority's New York City Subway system, which disrupted its signals. The Coast Guard estimated that of oil spilled into the water, with the remainder soaking into the soil at the substation. In the past, the Coast Guard has on average been able to recover about 10% of spilled oil; however, the complex tides in the river make the recovery much more difficult, with the turbulent water caused by the river's change of tides pushing contaminated water over the containment booms, where it is then carried out to sea and cannot be recovered. By Friday, May 12, officials from Con Edison reported that almost had been taken out of the water. Environmental damage to wildlife is expected to be less than if the spill had been of petroleum-based oil, but the oil can still block the sunlight necessary for the river's fish and other organisms to live. Nesting birds are also in possible danger from the oil contaminating their nests and potentially poisoning the birds or their eggs. Water from the East River was reported to have tested positive for low levels of PCB, a known carcinogen. Putting the spill into perspective, John Lipscomb, the vice president of advocacy for Riverkeeper, said that the chronic release after heavy rains of overflow from the city's wastewater treatment system was "a bigger problem for the harbor than this accident." The state Department of Environmental Conservation is investigating the spill. It was later reported that, according to DEC data dating back to 1978, the substation involved had spilled 179 times previously, more than any other Con Ed facility. The spills have included 8,400 gallons of dielectric oil, hydraulic oil, and antifreeze which leaked at various times into the soil around the substation, the sewers, and the East River. On June 22, Con Edison used non-toxic green dye and divers in the river to find the source of the leak. As a result, a hole was plugged. The utility continued to believe that the bulk of the spill went into the ground around the substation, and excavated and removed several hundred cubic yards of soil from the area. They estimated that about went into the river, of which were recovered. 
Con Edison said that it had installed a new transformer, and intended to add a new barrier around the facility to help guard against future spills propagating into the river. Crossings In popular culture The Brecker Brothers performed a song named after the river that is featured on their album Heavy Metal Be-Bop (1978). According to its author, Yasushi Akimoto, the Japanese song "Kawa no Nagare no Yō ni" – the "swan song" of the noted singer Hibari Misora
Development begins again After the war, East River waterfront development continued once more. New York State legislation, which in 1807 had authorized what would become the Commissioners' Plan of 1811, authorized the creation of new land out to 400 feet from the low water mark into the river, and with the advent of gridded streets along the new waterline – Joseph Mangin had laid out such a grid in 1803 in his A Plan and Regulation of the City of New York, which was rejected by the city, but established the concept – the coastline became regularized at the same time that the strait became even narrower. One result of the narrowing of the East River along the shoreline of Manhattan and, later, Brooklyn – which continued until the mid-19th century, when the state put a stop to it – was an increase in the speed of its current. Buttermilk Channel, the strait that divides Governors Island from Red Hook in Brooklyn, and which is located directly south of the "mouth" of the East River, was in the early 17th century a fordable waterway across which cattle could be driven. Further investigation by Colonel Jonathan Williams determined that the channel was three fathoms deep by 1776, five fathoms deep in the same spot by 1798, and had deepened to seven fathoms at low tide when surveyed by Williams in 1807. What had been almost a bridge between two landforms that were once connected had become a fully navigable channel, thanks to the constriction of the East River and the increased flow it caused. Soon, the current in the East River had become so strong that larger ships had to use auxiliary steam power in order to turn. The continued narrowing of the channel on both sides may have been the reason behind the suggestion of one New York State Senator, who wanted to fill in the East River and annex Brooklyn, with the cost of doing so being covered by selling the newly made land. Others proposed a dam at Roosevelt Island (then Blackwell's Island) to create a wet basin for shipping. Filling in the river Filling in part of the river was also proposed in 1867 by engineer James E. Serrell, later a city surveyor, but with emphasis on solving the problem of Hell Gate. Serrell proposed filling in Hell Gate and building a "New East River" through Queens with an extension to Westchester County. Serrell's plan – which he publicized with maps, essays and lectures as well as presentations to the city, state and federal governments – would have filled in the river from 14th Street to 125th Street. The New East River through Queens would be about three times the average width of the existing one, at an even width throughout, and would run as straight as an arrow for five miles. The new land, and the portions of Queens which would become part of Manhattan, adding , would be covered with an extension of the existing street grid of Manhattan. Variations on Serrell's plan would be floated over the years. A pseudonymous "Terra Firma" brought up filling in the East River again in the Evening Post and Scientific American in 1904, and Thomas Alva Edison took it up in 1906. Then Thomas Kennard Thompson, a bridge and railway engineer, proposed in 1913 to fill in the river from Hell Gate to the tip of Manhattan and, as Serrell had suggested, make a new canalized East River, only this time from Flushing Bay to Jamaica Bay. He would also expand Brooklyn into the Upper Harbor, put up a dam from Brooklyn to Staten Island, and make extensive landfill in the Lower Bay. At around the same time, in the 1920s, Dr. John A. 
Harriss, New York City's chief traffic engineer, who had developed the first traffic signals in the city, also had plans for the river. Harriss wanted to dam the East River at Hell Gate and the Williamsburg Bridge, then remove the water, put a roof over it on stilts, and build boulevards and pedestrian lanes on the roof along with "majestic structures", with transportation services below. The East River's course would, once again, be shifted to run through Queens, and this time Brooklyn as well, to channel it to the Harbor. Clearing Hell Gate Periodically, merchants and other interested parties would try to get something done about the difficulty of navigating through Hell Gate. In 1832, the New York State legislature was presented with a petition for a canal to be built through nearby Hallett's Point, thus avoiding Hell Gate altogether. Instead, the legislature responded by providing ships with pilots trained to navigate the shoals for the next 15 years. In 1849, a French engineer whose specialty was underwater blasting, Benjamin Maillefert, had cleared some of the rocks which, along with the mix of tides, made the Hell Gate stretch of the river so dangerous to navigate. Ebenezer Meriam had organized a subscription to pay Maillefert $6,000 to, for instance, reduce "Pot Rock" to provide of depth at low-mean water. While ships continued to run aground (in the 1850s about 2% of ships did so) and petitions continued to call for action, the federal government undertook surveys of the area which ended in 1851 with a detailed and accurate map. By then Maillefert had cleared the rock "Baldheaded Billy", and it was reported that Pot Rock had been reduced to , which encouraged the United States Congress to appropriate $20,000 for further clearing of the strait. However, a more accurate survey showed that the depth of Pot Rock was actually a little more than , and eventually Congress withdrew its funding. With the main shipping channels through The Narrows into the harbor silting up with sand due to littoral drift, thus providing ships with less depth, and a new generation of larger ships coming online – epitomized by Isambard Kingdom Brunel's SS Great Eastern, popularly known as "Leviathan" – New York began to be concerned that it would start to lose its status as a great port if a "back door" entrance into the harbor was not created. In the 1850s the depth continued to lessen – the harbor commission said in 1850 that the mean water low was and the extreme water low was – while the draft required by the new ships continued to increase, meaning it was only safe for them to enter the harbor at high tide. The U.S. Congress, realizing that the problem needed to be addressed, appropriated $20,000 for the Army Corps of Engineers to continue Maillefert's work, but the money was soon spent without appreciable change in the hazards of navigating the strait. 
not change the fact that freedom remains a condition of every action. Despair Despair is generally defined as a loss of hope. In existentialism, it is more specifically a loss of hope in reaction to a breakdown in one or more of the defining qualities of one's self or identity. If a person is invested in being a particular thing, such as a bus driver or an upstanding citizen, and then finds their being-thing compromised, they would normally be found in a state of despair—a hopeless state. For example, a singer who loses the ability to sing may despair if they have nothing else to fall back on—nothing to rely on for their identity. They find themselves unable to be what defined their being. What sets the existentialist notion of despair apart from the conventional definition is that existentialist despair is a state one is in even when they are not overtly in despair. So long as a person's identity depends on qualities that can crumble, they are in perpetual despair—and as there is, in Sartrean terms, no human essence found in conventional reality on which to constitute the individual's sense of identity, despair is a universal human condition. As Kierkegaard defines it in Either/Or: "Let each one learn what he can; both of us can learn that a person’s unhappiness never lies in his lack of control over external conditions, since this would only make him completely unhappy." In Works of Love, he says: Opposition to positivism and rationalism Existentialists oppose defining human beings as primarily rational, and, therefore, oppose both positivism and rationalism. Existentialism asserts that people make decisions based on subjective meaning rather than pure rationality. The rejection of reason as the source of meaning is a common theme of existentialist thought, as is the focus on the anxiety and dread that we feel in the face of our own radical free will and our awareness of death. Kierkegaard advocated rationality as a means to interact with the objective world (e.g., in the natural sciences), but when it comes to existential problems, reason is insufficient: "Human reason has boundaries". Like Kierkegaard, Sartre saw problems with rationality, calling it a form of "bad faith", an attempt by the self to impose structure on a world of phenomena—"the Other"—that is fundamentally irrational and random. According to Sartre, rationality and other forms of bad faith hinder people from finding meaning in freedom. To try to suppress feelings of anxiety and dread, people confine themselves within everyday experience, Sartre asserted, thereby relinquishing their freedom and acquiescing to being possessed in one form or another by "the Look" of "the Other" (i.e., possessed by another person—or at least one's idea of that other person). Religion An existentialist reading of the Bible would demand that the reader recognize that they are an existing subject studying the words more as a recollection of events. This is in contrast to looking at a collection of "truths" that are outside and unrelated to the reader, but may develop a sense of reality/God. Such a reader is not obligated to follow the commandments as if an external agent is forcing these commandments upon them, but as though they are inside them and guiding them from inside. This is the task Kierkegaard takes up when he asks: "Who has the more difficult task: the teacher who lectures on earnest things a meteor's distance from everyday life—or the learner who should put it to use?" 
Confusion with nihilism Although nihilism and existentialism are distinct philosophies, they are often confused with one another since both are rooted in the human experience of anguish and confusion that stems from the apparent meaninglessness of a world in which humans are compelled to find or create meaning. A primary cause of confusion is that Friedrich Nietzsche was an important philosopher in both fields. Existentialist philosophers often stress the importance of angst as signifying the absolute lack of any objective ground for action, a move that is often reduced to moral or existential nihilism. A pervasive theme in existentialist philosophy, however, is to persist through encounters with the absurd, as seen in Camus's The Myth of Sisyphus ("One must imagine Sisyphus happy") and it is only very rarely that existentialist philosophers dismiss morality or one's self-created meaning: Kierkegaard regained a sort of morality in the religious (although he would not agree that it was ethical; the religious suspends the ethical), and Sartre's final words in Being and Nothingness are: "All these questions, which refer us to a pure and not an accessory (or impure) reflection, can find their reply only on the ethical plane. We shall devote to them a future work." History 19th century Kierkegaard and Nietzsche Søren Kierkegaard is generally considered to have been the first existentialist philosopher. He proposed that each individual—not reason, society, or religious orthodoxy—is solely tasked with giving meaning to life and living it sincerely, or "authentically". Kierkegaard and Nietzsche were two of the first philosophers considered fundamental to the existentialist movement, though neither used the term "existentialism" and it is unclear whether they would have supported the existentialism of the 20th century. They focused on subjective human experience rather than the objective truths of mathematics and science, which they believed were too detached or observational to truly get at the human experience. Like Pascal, they were interested in people's quiet struggle with the apparent meaninglessness of life and the use of diversion to escape from boredom. Unlike Pascal, Kierkegaard and Nietzsche also considered the role of making free choices, particularly regarding fundamental values and beliefs, and how such choices change the nature and identity of the chooser. Kierkegaard's knight of faith and Nietzsche's Übermensch are representative of people who exhibit freedom, in that they define the nature of their own existence. Nietzsche's idealized individual invents his own values and creates the very terms they excel under. By contrast, Kierkegaard, opposed to the level of abstraction in Hegel, and not nearly as hostile (actually welcoming) to Christianity as Nietzsche, argues through a pseudonym that the objective certainty of religious truths (specifically Christian) is not only impossible, but even founded on logical paradoxes. Yet he continues to imply that a leap of faith is a possible means for an individual to reach a higher stage of existence that transcends and contains both an aesthetic and ethical value of life. Kierkegaard and Nietzsche were also precursors to other intellectual movements, including postmodernism, and various strands of psychotherapy. However, Kierkegaard believed that individuals should live in accordance with their thinking. Dostoevsky The first important literary author also important to existentialism was the Russian, Dostoevsky. 
Dostoevsky's Notes from Underground portrays a man unable to fit into society and unhappy with the identities he creates for himself. Sartre, in his book on existentialism Existentialism is a Humanism, quoted Dostoyevsky's The Brothers Karamazov as an example of existential crisis. Other Dostoyevsky novels covered issues raised in existentialist philosophy while presenting story lines divergent from secular existentialism: for example, in Crime and Punishment, the protagonist Raskolnikov experiences an existential crisis and then moves toward a Christian Orthodox worldview similar to that advocated by Dostoyevsky himself. Early 20th century In the first decades of the 20th century, a number of philosophers and writers explored existentialist ideas. The Spanish philosopher Miguel de Unamuno y Jugo, in his 1913 book The Tragic Sense of Life in Men and Nations, emphasized the life of "flesh and bone" as opposed to that of abstract rationalism. Unamuno rejected systematic philosophy in favor of the individual's quest for faith. He retained a sense of the tragic, even absurd nature of the quest, symbolized by his enduring interest in the eponymous character from the Miguel de Cervantes novel Don Quixote. A novelist, poet and dramatist as well as philosophy professor at the University of Salamanca, Unamuno wrote a short story about a priest's crisis of faith, Saint Manuel the Good, Martyr, which has been collected in anthologies of existentialist fiction. Another Spanish thinker, Ortega y Gasset, writing in 1914, held that human existence must always be defined as the individual person combined with the concrete circumstances of his life: "Yo soy yo y mi circunstancia" ("I am myself and my circumstances"). Sartre likewise believed that human existence is not an abstract matter, but is always situated ("en situation"). Although Martin Buber wrote his major philosophical works in German, and studied and taught at the Universities of Berlin and Frankfurt, he stands apart from the mainstream of German philosophy. Born into a Jewish family in Vienna in 1878, he was also a scholar of Jewish culture and involved at various times in Zionism and Hasidism. In 1938, he moved permanently to Jerusalem. His best-known philosophical work was the short book I and Thou, published in 1922. For Buber, the fundamental fact of human existence, too readily overlooked by scientific rationalism and abstract philosophical thought, is "man with man", a dialogue that takes place in the so-called "sphere of between" ("das Zwischenmenschliche"). Two Russian philosophers, Lev Shestov and Nikolai Berdyaev, became well known as existentialist thinkers during their post-Revolutionary exiles in Paris. Shestov had launched an attack on rationalism and systematization in philosophy as early as 1905 in his book of aphorisms All Things Are Possible. Berdyaev drew a radical distinction between the world of spirit and the everyday world of objects. Human freedom, for Berdyaev, is rooted in the realm of spirit, a realm independent of scientific notions of causation. To the extent the individual human being lives in the objective world, he is estranged from authentic spiritual freedom. "Man" is not to be interpreted naturalistically, but as a being created in God's image, an originator of free, creative acts. He published a major work on these themes, The Destiny of Man, in 1931. 
Marcel, long before coining the term "existentialism", introduced important existentialist themes to a French audience in his early essay "Existence and Objectivity" (1925) and in his Metaphysical Journal (1927). A dramatist as well as a philosopher, Marcel found his philosophical starting point in a condition of metaphysical alienation: the human individual searching for harmony in a transient life. Harmony, for Marcel, was to be sought through "secondary reflection", a "dialogical" rather than "dialectical" approach to the world, characterized by "wonder and astonishment" and open to the "presence" of other people and of God rather than merely to "information" about them. For Marcel, such presence implied more than simply being there (as one thing might be in the presence of another thing); it connoted "extravagant" availability, and the willingness to put oneself at the disposal of the other. Marcel contrasted secondary reflection with abstract, scientific-technical primary reflection, which he associated with the activity of the abstract Cartesian ego. For Marcel, philosophy was a concrete activity undertaken by a sensing, feeling human being incarnate—embodied—in a concrete world. Although Sartre adopted the term "existentialism" for his own philosophy in the 1940s, Marcel's thought has been described as "almost diametrically opposed" to that of Sartre. Unlike Sartre, Marcel was a Christian, and became a Catholic convert in 1929. In Germany, the psychologist and philosopher Karl Jaspers—who later described existentialism as a "phantom" created by the public—called his own thought, heavily influenced by Kierkegaard and Nietzsche, Existenzphilosophie. For Jaspers, "Existenz-philosophy is the way of thought by means of which man seeks to become himself...This way of thought does not cognize objects, but elucidates and makes actual the being of the thinker". Jaspers, a professor at the University of Heidelberg, was acquainted with Heidegger, who held a professorship at Marburg before acceding to Husserl's chair at Freiburg in 1928. They held many philosophical discussions, but later became estranged over Heidegger's support of National Socialism (Nazism). They shared an admiration for Kierkegaard, and in the 1930s, Heidegger lectured extensively on Nietzsche. Nevertheless, the extent to which Heidegger should be considered an existentialist is debatable. In Being and Time he presented a method of rooting philosophical explanations in human existence (Dasein) to be analysed in terms of existential categories (existentiale); and this has led many commentators to treat him as an important figure in the existentialist movement. After the Second World War Following the Second World War, existentialism became a well-known and significant philosophical and cultural movement, mainly through the public prominence of two French writers, Jean-Paul Sartre and Albert Camus, who wrote best-selling novels, plays and widely read journalism as well as theoretical texts. These years also saw the growing reputation of Being and Time outside Germany. 
Sartre dealt with existentialist themes in his 1938 novel Nausea and the short stories in his 1939 collection The Wall, and had published his treatise on existentialism, Being and Nothingness, in 1943, but it was in the two years following the liberation of Paris from the German occupying forces that he and his close associates—Camus, Simone de Beauvoir, Maurice Merleau-Ponty, and others—became internationally famous as the leading figures of a movement known as existentialism. In a very short period of time, Camus and Sartre in particular became the leading public intellectuals of post-war France, achieving by the end of 1945 "a fame that reached across all audiences." Camus was an editor of the most popular leftist (former French Resistance) newspaper Combat; Sartre launched his journal of leftist thought, Les Temps Modernes, and two weeks later gave the widely reported lecture on existentialism and secular humanism to a packed meeting of the Club Maintenant. Beauvoir wrote that "not a week passed without the newspapers discussing us"; existentialism became "the first media craze of the postwar era." By the end of 1947, Camus' earlier fiction and plays had been reprinted, his new play Caligula had been performed and his novel The Plague published; the first two novels of Sartre's The Roads to Freedom trilogy had appeared, as had Beauvoir's novel The Blood of Others. Works by Camus and Sartre were already appearing in foreign editions. The Paris-based existentialists had become famous. Sartre had traveled to Germany in 1930 to study the phenomenology of Edmund Husserl and Martin Heidegger, and he included critical comments on their work in his major treatise Being and Nothingness. Heidegger's thought had also become known in French philosophical circles through its use by Alexandre Kojève in explicating Hegel in a series of lectures given in Paris in the 1930s. The lectures were highly influential; members of the audience included not only Sartre and Merleau-Ponty, but Raymond Queneau, Georges Bataille, Louis Althusser, André Breton, and Jacques Lacan. A selection from Being and Time was published in French in 1938, and his essays began to appear in French philosophy journals. Heidegger read Sartre's work and was initially impressed, commenting: "Here for the first time I encountered an independent thinker who, from the foundations up, has experienced the area out of which I think. Your work shows such an immediate comprehension of my philosophy as I have never before encountered." Later, however, in response to a question posed by his French follower Jean Beaufret, Heidegger distanced himself from Sartre's position and existentialism in general in his Letter on Humanism. Heidegger's reputation continued to grow in France during the 1950s and 1960s. In the 1960s, Sartre attempted to reconcile existentialism and Marxism in his work Critique of Dialectical Reason. A major theme throughout his writings was freedom and responsibility. Camus was a friend of Sartre, until their falling-out, and wrote several works with existential themes including The Rebel, Summer in Algiers, The Myth of Sisyphus, and The Stranger, the latter being "considered—to what would have been Camus's irritation—the exemplary existentialist novel." Camus, like many others, rejected the existentialist label, and considered his works concerned with facing the absurd. In the titular book, Camus uses the analogy of the Greek myth of Sisyphus to demonstrate the futility of existence. 
In the myth, Sisyphus is condemned for eternity to roll a rock up a hill, but when he reaches the summit, the rock will roll to the bottom again. Camus believes that this existence is pointless but that Sisyphus ultimately finds meaning and purpose in his task, simply by continually applying himself to it. The first half of the book contains an extended rebuttal of what Camus took to be existentialist philosophy in the works of Kierkegaard, Shestov, Heidegger, and Jaspers. Simone de Beauvoir, an important existentialist who spent much of her life as Sartre's partner, wrote about feminist and existentialist ethics in her works, including The Second Sex and The Ethics of Ambiguity. Although often overlooked due to her relationship with Sartre, de Beauvoir integrated existentialism with other forms of thinking such as feminism, unheard of at the time, resulting in alienation from fellow writers such as Camus. Paul Tillich, an important existentialist theologian following Kierkegaard and Karl Barth, applied existentialist concepts to Christian theology, and helped introduce existential theology to the general public. His seminal work The Courage to Be follows Kierkegaard's analysis of anxiety and life's absurdity, but puts forward the thesis that modern humans must, via God, achieve selfhood in spite of life's absurdity. Rudolf Bultmann used Kierkegaard's and Heidegger's philosophy of existence to demythologize Christianity by interpreting Christian mythical concepts into existentialist concepts. Maurice Merleau-Ponty, an existential phenomenologist, was for a time a companion of Sartre. Merleau-Ponty's Phenomenology of Perception (1945) was recognized as a major statement of French existentialism. It has been said that Merleau-Ponty's work Humanism and Terror greatly influenced Sartre. However, in later years they were to disagree irreparably, dividing many existentialists such as de Beauvoir, who sided with Sartre. Colin Wilson, an English writer, published his study The Outsider in 1956, initially to critical acclaim. In this book and others (e.g. Introduction to the New Existentialism), he attempted to reinvigorate what he perceived as a pessimistic philosophy and bring it to a wider audience. He was not, however, academically trained, and his work was attacked by professional philosophers for lack of rigor and critical standards. Influence outside philosophy Art Film and television Stanley Kubrick's 1957 anti-war film Paths of Glory "illustrates, and even illuminates...existentialism" by examining the "necessary absurdity of the human condition" and the "horror of war". The film tells the story of a fictional World War I French army regiment ordered to attack an impregnable German stronghold; when the attack fails, three soldiers are chosen at random, court-martialed by a "kangaroo court", and executed by firing squad. The film examines existentialist ethics, such as the issue of whether objectivity is possible and the "problem of authenticity". Orson Welles's 1962 film The Trial, based upon Franz Kafka's book of the same name (Der Process), is characteristic of both existentialist and absurdist themes in its depiction of a man (Joseph K.) arrested for a crime for which the charges are neither revealed to him nor to the reader. Neon Genesis Evangelion is a Japanese science fiction animation series created by the anime studio Gainax and was both directed and written by Hideaki Anno. 
Existential themes of individuality, consciousness, freedom, choice, and responsibility are heavily relied upon throughout the entire series, particularly through the philosophies of Jean-Paul Sartre and Søren Kierkegaard. Episode 16's title is a reference to Kierkegaard's book The Sickness Unto Death. Some contemporary films dealing with existentialist issues include Melancholia, Fight Club, I Heart Huckabees, Waking Life, The Matrix, Ordinary People, and Life in a Day. Likewise, films throughout the 20th century such as The Seventh Seal, Ikiru, Taxi Driver, the Toy Story films, The Great Silence, Ghost in the Shell, Harold and Maude, High Noon, Easy Rider, One Flew Over the Cuckoo's Nest, A Clockwork Orange, Groundhog Day, Apocalypse Now, Badlands, and Blade Runner also have existentialist qualities. Notable directors known for their existentialist films include Ingmar Bergman, François Truffaut, Jean-Luc Godard, Michelangelo Antonioni, Akira Kurosawa, Terrence Malick, Stanley Kubrick, Andrei Tarkovsky, Hideaki Anno, Wes Anderson, Gaspar Noé, Woody Allen, and Christopher Nolan. Charlie Kaufman's Synecdoche, New York focuses on the protagonist's desire to find existential meaning. Similarly, in Kurosawa's Red Beard, the protagonist's experiences as an intern in a rural health clinic in Japan lead him to an existential crisis whereby he questions his reason for being. This, in turn, leads him to a better understanding of humanity. The French film Mood Indigo (directed by Michel Gondry) embraced various elements of existentialism. The film The Shawshank Redemption, released in 1994, depicts life in a prison in Maine, United States, to explore several existentialist concepts. Literature Existential perspectives are also found in modern literature to varying degrees, especially since the 1920s. Louis-Ferdinand Céline's Journey to the End of the Night (Voyage au bout de la nuit, 1932), celebrated by both Sartre and Beauvoir, contained many of the themes that would be found in later existential literature, and is in some ways the proto-existential novel. Jean-Paul Sartre's 1938 novel Nausea was "steeped in Existential ideas", and is considered an accessible way of grasping his philosophical stance. Between 1900 and 1960, other authors such as Albert Camus, Franz Kafka, Rainer Maria Rilke, T. S. Eliot, Hermann Hesse, Luigi Pirandello, Ralph Ellison, and Jack Kerouac composed literature or poetry that contained, to varying degrees, elements of existential or proto-existential thought. The philosophy's influence even reached pulp literature shortly after the turn of the 20th century, as seen in the existential disparity witnessed in Man's lack of control of his fate in the works of H. P. Lovecraft. Theatre Sartre wrote No Exit in 1944, an existentialist play originally published in French as Huis Clos (meaning In Camera or "behind closed doors"), which is the source of the popular quote, "Hell is other people." (In French, "L'enfer, c'est les autres"). The play begins with a Valet leading a man into a room that the audience soon realizes is in hell. Eventually he is joined by two women. After their entry, the Valet leaves and the door is shut and locked. All three expect to be tortured, but no torturer arrives. Instead, they realize they are there to torture each other, which they do effectively by probing each other's sins, desires, and unpleasant memories. 
Existentialist themes are displayed in the Theatre of the Absurd, notably in Samuel Beckett's Waiting for Godot, in which two men divert themselves while they wait expectantly for someone (or something) named Godot who never arrives. They claim Godot is an acquaintance, but in fact, hardly know him, admitting they would not recognize him if they saw him. Samuel Beckett, once asked who or what Godot is, replied, "If I knew, I would have said so in the play." To occupy themselves, the men eat, sleep, talk, argue, sing, play games, exercise, swap hats, and contemplate suicide—anything "to hold the terrible silence at bay". The play "exploits several archetypal forms and situations, all of which lend themselves to both comedy and pathos." The play also illustrates an attitude toward human experience on earth: the poignancy, oppression, camaraderie, hope, corruption, and bewilderment of human experience that can be reconciled only in the mind and art of the absurdist. The play examines questions such as death, the meaning of human existence and the place of God in human existence. Tom Stoppard's Rosencrantz & Guildenstern Are Dead is an absurdist tragicomedy first staged at the Edinburgh Festival Fringe in 1966. The play expands upon the exploits of two minor characters from Shakespeare's Hamlet. Comparisons have also been drawn to Samuel Beckett's Waiting for Godot, for the presence of two central characters who appear almost as two halves of a single character. Many plot features are similar as well: the characters pass time by playing Questions, impersonating other characters, and interrupting each other or remaining silent for long periods of time. The two characters are portrayed as two clowns or fools in a world beyond their understanding. They stumble through philosophical arguments while not realizing the implications, and muse on the irrationality and randomness of the world. Jean Anouilh's Antigone also presents arguments founded on existentialist ideas. It is a tragedy inspired by Greek mythology and the play of the same name (Antigone, by Sophocles) from the 5th century BC. In English, it is often distinguished from its antecedent by being pronounced in its original French form, approximately "Ante-GŌN." The play was first performed in Paris on 6 February 1944, during the Nazi occupation of France. Produced under Nazi censorship, the play is purposefully ambiguous with regards to the rejection of authority (represented by Antigone) and the acceptance of it (represented by Creon). The parallels to the French Resistance and the Nazi occupation
Sartre dealt with existentialist themes in his 1938 novel Nausea and the short stories in his 1939 collection The Wall, and had published his treatise on existentialism, Being and Nothingness, in 1943, but it was in the two years following the liberation of Paris from the German occupying forces that he and his close associates—Camus, Simone de Beauvoir, Maurice Merleau-Ponty, and others—became internationally famous as the leading figures of a movement known as existentialism. In a very short period of time, Camus and Sartre in particular became the leading public intellectuals of post-war France, achieving by the end of 1945 "a fame that reached across all audiences." Camus was an editor of the most popular leftist (former French Resistance) newspaper Combat; Sartre launched his journal of leftist thought, Les Temps Modernes, and two weeks later gave the widely reported lecture on existentialism and secular humanism to a packed meeting of the Club Maintenant. Beauvoir wrote that "not a week passed without the newspapers discussing us"; existentialism became "the first media craze of the postwar era." By the end of 1947, Camus' earlier fiction and plays had been reprinted, his new play Caligula had been performed and his novel The Plague published; the first two novels of Sartre's The Roads to Freedom trilogy had appeared, as had Beauvoir's novel The Blood of Others. Works by Camus and Sartre were already appearing in foreign editions. The Paris-based existentialists had become famous. Sartre had traveled to Germany in 1930 to study the phenomenology of Edmund Husserl and Martin Heidegger, and he included critical comments on their work in his major treatise Being and Nothingness. Heidegger's thought had also become known in French philosophical circles through its use by Alexandre Kojève in explicating Hegel in a series of lectures given in Paris in the 1930s. The lectures were highly influential; members of the audience included not only Sartre and Merleau-Ponty, but Raymond Queneau, Georges Bataille, Louis Althusser, André Breton, and Jacques Lacan. A selection from Being and Time was published in French in 1938, and his essays began to appear in French philosophy journals. Heidegger read Sartre's work and was initially impressed, commenting: "Here for the first time I encountered an independent thinker who, from the foundations up, has experienced the area out of which I think. Your work shows such an immediate comprehension of my philosophy as I have never before encountered." Later, however, in response to a question posed by his French follower Jean Beaufret, Heidegger distanced himself from Sartre's position and existentialism in general in his Letter on Humanism. Heidegger's reputation continued to grow in France during the 1950s and 1960s. In the 1960s, Sartre attempted to reconcile existentialism and Marxism in his work Critique of Dialectical Reason. A major theme throughout his writings was freedom and responsibility. Camus was a friend of Sartre, until their falling-out, and wrote several works with existential themes including The Rebel, Summer in Algiers, The Myth of Sisyphus, and The Stranger, the latter being "considered—to what would have been Camus's irritation—the exemplary existentialist novel." Camus, like many others, rejected the existentialist label, and considered his works concerned with facing the absurd. In the titular book, Camus uses the analogy of the Greek myth of Sisyphus to demonstrate the futility of existence. 
In the myth, Sisyphus is condemned for eternity to roll a rock up a hill, but when he reaches the summit, the rock will roll to the bottom again. Camus believes that this existence is pointless but that Sisyphus ultimately finds meaning and purpose in his task, simply by continually applying himself to it. The first half of the book contains an extended rebuttal of what Camus took to be existentialist philosophy in the works of Kierkegaard, Shestov, Heidegger, and Jaspers. Simone de Beauvoir, an important existentialist who spent much of her life as Sartre's partner, wrote about feminist and existentialist ethics in her works, including The Second Sex and The Ethics of Ambiguity. Although often overlooked due to her relationship with Sartre, de Beauvoir integrated existentialism with other forms of thinking such as feminism, unheard of at the time, resulting in alienation from fellow writers such as Camus. Paul Tillich, an important existentialist theologian following Kierkegaard and Karl Barth, applied existentialist concepts to Christian theology, and helped introduce existential theology to the general public. His seminal work The Courage to Be follows Kierkegaard's analysis of anxiety and life's absurdity, but puts forward the thesis that modern humans must, via God, achieve selfhood in spite of life's absurdity. Rudolf Bultmann used Kierkegaard's and Heidegger's philosophy of existence to demythologize Christianity by interpreting Christian mythical concepts into existentialist concepts. Maurice Merleau-Ponty, an existential phenomenologist, was for a time a companion of Sartre. Merleau-Ponty's Phenomenology of Perception (1945) was recognized as a major statement of French existentialism. It has been said that Merleau-Ponty's work Humanism and Terror greatly influenced Sartre. However, in later years they were to disagree irreparably, dividing many existentialists such as de Beauvoir, who sided with Sartre. Colin Wilson, an English writer, published his study The Outsider in 1956, initially to critical acclaim. In this book and others (e.g. Introduction to the New Existentialism), he attempted to reinvigorate what he perceived as a
ellipsis, periods of ellipsis, or (colloquially) "dot-dot-dot". Depending on their context and placement in a sentence, ellipses can indicate an unfinished thought, a leading statement, a slight pause, an echoing voice, or a nervous or awkward silence. Aposiopesis is the use of an ellipsis to trail off into silence—for example: "But I thought he was..." When placed at the end of a sentence, an ellipsis may be used to suggest melancholy or longing. The most common forms of an ellipsis include a row of three periods or full points or a precomposed triple-dot glyph, the horizontal ellipsis . Style guides often have their own rules governing the use of ellipses. For example, The Chicago Manual of Style (Chicago style) recommends that an ellipsis be formed by typing three periods, each with a space on both sides , while the Associated Press Stylebook (AP style) puts the dots together, but retains a space before and after the group, thus: . Whether an ellipsis at the end of a sentence needs a fourth dot to finish the sentence is a matter of debate; Chicago advises it, as does the Publication Manual of the American Psychological Association (APA style), while some other style guides do not; the Merriam-Webster Dictionary and related works treat this style as optional, saying that it "may" be used. When text is omitted following a sentence, a normal full stop (period) terminates the sentence, and then a separate three-dot ellipsis is commonly used to indicate one or more subsequent omitted sentences before continuing a longer quotation. Business Insider magazine suggests this style, and it is also used in many academic journals. The Associated Press Stylebook favors this approach. In writing In her book on the ellipsis, Ellipsis in English Literature: Signs of Omission (Cambridge University Press, 2015), Anne Toner suggests that the first use of the punctuation in the English language dates to a 1588 translation of Terence's Andria, by Maurice Kyffin. In this case, however, the ellipsis consists not of dots but of short dashes. "Subpuncting" of medieval manuscripts also denotes omitted meaning and may be related. Occasionally, it would be used in pulp fiction and other works of early 20th-century fiction to denote expletives that would otherwise have been censored. An ellipsis may also imply an unstated alternative indicated by context. For example, "I never drink wine ..." implies that the speaker does drink something else, such as vodka. In reported speech, the ellipsis can be used to represent an intentional silence. In poetry, an ellipsis is used as a thought-pause or line break at the caesura; it can also be used to highlight sarcasm or make the reader think about the last points in the poem. In news reporting, often put inside square brackets, it is used to indicate that a quotation has been condensed for space, brevity or relevance, as in "The President said that [...] he would not be satisfied", where the exact quotation was "The President said that, for as long as this situation continued, he would not be satisfied". Herb Caen, Pulitzer Prize-winning columnist for the San Francisco Chronicle, became famous for his "three-dot journalism". In different languages In English American English The Chicago Manual of Style suggests the use of an ellipsis for any omitted word, phrase, line, or paragraph from within but not at the end of a quoted passage.
There are two commonly used methods of using ellipses: one uses three dots for any omission, while the second one makes a distinction between omissions within a sentence (using three dots: . . .) and omissions between sentences (using a period and a space followed by three dots: . ...). The Chicago Style Q&A recommends that writers avoid using the precomposed (U+2026) character in manuscripts and to place three periods plus two nonbreaking spaces (. . .) instead, leaving the editor, publisher, or typographer to replace them later. The Modern Language Association (MLA) used to indicate that an ellipsis must include spaces before and after each dot in all uses. If an ellipsis is meant to represent an omission, square brackets must surround the ellipsis to make it clear that there was no pause in the original quote: . Currently, the MLA has removed the requirement of brackets in its style handbooks. However, some maintain that the use of brackets is still correct because it clears confusion. The MLA now indicates that a three-dot, spaced ellipsis should be used for removing material from within one sentence within a quote. When crossing sentences (when the omitted text contains a period, so that omitting the end of a sentence counts), a four-dot, spaced (except for before the first dot) ellipsis should be used. When ellipsis points are used in the original text, ellipsis points that are not in the original text should be distinguished by enclosing them in square brackets (e.g. ). According to the Associated Press, the ellipsis should be used to condense quotations. It is less commonly used to indicate a pause in speech or an unfinished thought or to separate items in material such as show business gossip. The stylebook indicates that if the shortened sentence before the mark can stand as a sentence, it should do so, with an ellipsis placed after the period or other ending punctuation. When material is omitted at the end of a paragraph and also immediately following it, an ellipsis goes both at the end of that paragraph and at the beginning of the next, according to this style. According to Robert Bringhurst's Elements of Typographic Style, the details of typesetting ellipses depend on the character and size of the font being set and the typographer's preference. Bringhurst writes that a full space between each dot is "another Victorian eccentricity. In most contexts, the Chicago ellipsis is much too wide"—he recommends using flush dots (with a normal word space before and after), or thin-spaced dots (up to one-fifth of an em), or the prefabricated ellipsis character . Bringhurst suggests that normally an ellipsis should be spaced fore-and-aft to separate it from the text, but when it combines with other punctuation, the leading space disappears and the other punctuation follows. This is the usual practice in typesetting. He provides the following examples: In legal writing in the United States, Rule 5.3 in the Bluebook citation guide governs the use of ellipses and requires a space before the first dot and between the two subsequent dots. If an ellipsis ends the sentence, then there are three dots, each separated by a space, followed by the final punctuation (e.g. ). In some legal writing, an ellipsis is written as three asterisks, or , to make it obvious that text has been omitted or to signal that the omitted text extends beyond the end of the paragraph. 
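For illustration only, the following minimal Python sketch shows the precomposed ellipsis character and the spacing conventions discussed above, applied to the example sentence used later in this section; the helper function and its names are my own and are not taken from any style guide's tooling.

```python
ELLIPSIS = "\u2026"   # the precomposed horizontal ellipsis character (U+2026)
AP = "..."            # AP style: the three dots set tight, with a space on each side of the group
CHICAGO = ". . ."     # Chicago style: three periods, each separated by a space

def condense(quote: str, omitted: str, mark: str = AP) -> str:
    """Replace an omitted passage inside a quotation with an ellipsis mark."""
    return quote.replace(omitted, mark)

print(condense("The quick brown fox jumps over the lazy dog.", "quick brown"))
# -> "The ... fox jumps over the lazy dog."
```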
British English The Oxford Style Guide recommends setting the ellipsis as a single character or as a series of three (narrow) spaced dots surrounded by spaces, thus: . If there is an ellipsis at the end of an incomplete sentence, the final full stop is omitted. However, it is retained if the following ellipsis represents an omission between two complete sentences. The … fox jumps … The quick brown fox jumps over the lazy dog. … And if they have not died, they are still alive today. It is not cold … it is freezing cold. Contrary to The Oxford Style Guide, the University of Oxford Style Guide demands an ellipsis not to be surrounded by spaces, except when it stands for a pause; then, a space has to be set after the ellipsis (but not before). An ellipsis is never preceded or followed by a full stop. The...fox jumps... The quick brown fox jumps over the lazy dog...And if they have not died, they are still alive today. It is not cold... it is freezing cold. In Polish When applied in Polish syntax, the ellipsis is called , literally 'multidot'. The word wielokropek distinguishes the ellipsis of Polish syntax from that of mathematical notation, in which it is known as an . When an ellipsis replaces a fragment omitted from a quotation, the ellipsis is enclosed in parentheses or square brackets. An unbracketed ellipsis indicates an interruption or pause in speech. The syntactic rules for ellipses are standardized by the 1983 Polska Norma document PN-83/P-55366, (Rules for Setting Texts in Polish). In Russian The combination "ellipsis+period" is replaced by the ellipsis. The combinations "ellipsis+exclamation mark" and "ellipsis+question mark" are written in this way: !.. ?.. In Japanese The most common character corresponding to an ellipsis is called 3-ten rīdā ("3-dot leaders", ). 2-ten rīdā exists as a character, but it is used less commonly. In writing, the ellipsis consists usually of six dots (two 3-ten rīdā characters, ). Three dots (one 3-ten rīdā character) may be used where space is limited, such as in a header. However, variations in the number of dots exist. In horizontally written text the dots are commonly vertically centered within the text height (between the baseline and the ascent line), as in the standard Japanese Windows fonts; in vertically written text the dots are always centered horizontally. As the Japanese word for dot is pronounced "", the dots are colloquially called "" (, akin to the English "dot dot dot"). In text in Japanese media, such as in manga or video games, ellipses are much more frequent than in English, and are often changed to another punctuation sign in translation. The ellipsis by itself represents speechlessness, or a "pregnant pause". Depending on the context, this could be anything from an admission of guilt to an expression of being dumbfounded at another person's words or actions. As a device, the ten-ten-ten is intended to focus the reader on a character while allowing the character to not speak any dialogue. This conveys to the reader a focus of the narrative "camera" on the silent subject, implying an expectation of some motion or action. It is not unheard of to see inanimate objects "speaking" the ellipsis. In Chinese In Chinese, the ellipsis is six dots (in two groups of three dots, occupying the same horizontal or vertical space as two characters) (i.e. ). In Spanish In Spanish, the ellipsis is
training flights, and flew two missions, on 24 and 26 July, to drop pumpkin bombs on industrial targets at Kobe and Nagoya. Enola Gay was used on 31 July on a rehearsal flight for the actual mission. The partially assembled Little Boy gun-type fission weapon L-11, weighing , was contained inside a × × wooden crate that was secured to the deck of the . Unlike the six uranium-235 target discs, which were later flown to Tinian on three separate aircraft arriving 28 and 29 July, the assembled projectile with the nine uranium-235 rings installed was shipped in a single lead-lined steel container weighing that was locked to brackets welded to the deck of Captain Charles B. McVay III's quarters. Both the L-11 and projectile were dropped off at Tinian on 26 July 1945. Hiroshima mission On 5 August 1945, during preparation for the first atomic mission, Tibbets assumed command of the aircraft and named it after his mother, Enola Gay Tibbets, who, in turn, had been named for the heroine of a novel. When it came to selecting a name for the plane, Tibbets later recalled that: In the early morning hours, just prior to the 6 August mission, Tibbets had a young Army Air Forces maintenance man, Private Nelson Miller, paint the name just under the pilot's window. Regularly-assigned aircraft commander Robert Lewis was unhappy to be displaced by Tibbets for this important mission, and became furious when he arrived at the aircraft on the morning of 6 August to see it painted with the now-famous nose art. Hiroshima was the primary target of the first nuclear bombing mission on 6 August, with Kokura and Nagasaki as alternative targets. Enola Gay, piloted by Tibbets, took off from North Field, in the Northern Mariana Islands, about six hours' flight time from Japan, accompanied by two other B-29s, The Great Artiste, carrying instrumentation, and a then-nameless aircraft later called Necessary Evil, commanded by Captain George Marquardt, to take photographs. The director of the Manhattan Project, Major General Leslie R. Groves Jr., wanted the event recorded for posterity, so the takeoff was illuminated by floodlights. When he wanted to taxi, Tibbets leaned out the window to direct the bystanders out of the way. On request, he gave a friendly wave for the cameras. After leaving Tinian, the three aircraft made their way separately to Iwo Jima, where they rendezvoused at and set course for Japan. The aircraft arrived over the target in clear visibility at . Captain William S. "Deak" Parsons of Project Alberta, who was in command of the mission, armed the bomb during the flight to minimize the risks during takeoff. His assistant, Second Lieutenant Morris R. Jeppson, removed the safety devices 30 minutes before reaching the target area. The release at 08:15 (Hiroshima time) went as planned, and the Little Boy took 53 seconds to fall from the aircraft flying at to the predetermined detonation height about above the city. Enola Gay traveled before it felt the shock waves from the blast. Although buffeted by the shock, neither Enola Gay nor The Great Artiste was damaged. The detonation created a blast equivalent to . The U-235 weapon was considered very inefficient, with only 1.7% of its fissile material reacting. The radius of total destruction was about one mile (1.6 km), with resulting fires across . Americans estimated that of the city were destroyed. Japanese officials determined that 69% of Hiroshima's buildings were destroyed and another 6–7% damaged. 
Some 70,000–80,000 people, 30% of the city's population, were killed by the blast and resultant firestorm, and another 70,000 injured. Out of those killed, 20,000 were soldiers and 20,000 Korean slave laborers. Enola Gay returned safely to its base on Tinian to great fanfare, touching down at 2:58 pm, after 12 hours 13 minutes. The Great Artiste and Necessary Evil followed at short intervals. Several hundred people, including journalists and photographers, had gathered to watch the planes return. Tibbets was the first to disembark, and was presented with the Distinguished Service Cross on the spot.
Nagasaki mission The Hiroshima mission was followed by another atomic strike. Originally scheduled for 11 August, it was brought forward by two days to 9 August owing to a forecast of bad weather. This time, a nuclear bomb code-named "Fat Man" was carried by B-29 Bockscar, piloted by Major Charles W. Sweeney. Enola Gay, flown by Captain George Marquardt's Crew B-10, was the weather reconnaissance aircraft for Kokura, the primary target. Enola Gay reported clear skies over Kokura, but by the time Bockscar arrived, the city was obscured by smoke from fires from the conventional bombing of Yahata by 224 B-29s the day before. After three unsuccessful passes, Bockscar diverted to its secondary target, Nagasaki, where it dropped its bomb. In contrast to the Hiroshima mission, the Nagasaki mission has been described as tactically botched, although the mission did meet its objectives. The crew encountered a number of problems in execution, and had very little fuel by the time they landed at the emergency backup landing site Yontan Airfield on Okinawa. Crews Hiroshima mission Enola Gay's crew on 6 August 1945 consisted of 12 men. The crew was:
Colonel Paul W. Tibbets Jr. – pilot and aircraft commander
Captain Robert A. Lewis – co-pilot; Enola Gay's regularly assigned aircraft commander*
Major Thomas Ferebee – bombardier
Captain Theodore "Dutch" Van Kirk – navigator
Captain William S. "Deak" Parsons, USN – weaponeer and mission commander
First Lieutenant Jacob Beser – radar countermeasures (also the only man to fly on both of the nuclear bombing aircraft)
Second Lieutenant Morris R. Jeppson – assistant weaponeer
Staff Sergeant Robert "Bob" Caron – tail gunner*
Staff Sergeant Wyatt E. Duzenbury – flight engineer*
Sergeant Joe S. Stiborik – radar operator*
Sergeant Robert H. Shumard – assistant flight engineer*
Private First Class Richard H. Nelson – VHF radio operator*
Asterisks denote regular crewmen of the Enola Gay. Of mission commander Parsons, it was said: "There is no one more responsible for getting this bomb out of the laboratory and into some form useful for combat operations than Captain Parsons, by his plain genius in the ordnance business." Nagasaki mission For the Nagasaki mission, Enola Gay was flown by Crew B-10, normally assigned to Up An' Atom:
Captain George W. Marquardt – aircraft commander
Second Lieutenant James M. Anderson – co-pilot
Second Lieutenant Russell Gackenbach – navigator
Captain James W. Strudwick – bombardier
Technical Sergeant James R. Corliss – flight engineer
Sergeant Warren L. Coble – radio operator
Sergeant Joseph M. DiJulio – radar operator
Sergeant Melvin H. Bierman – tail gunner
Sergeant Anthony D. Capua Jr. – assistant engineer/scanner
Subsequent history On 6 November 1945, Lewis flew the Enola Gay back to the United States, arriving at the 509th's new base at Roswell Army Air Field, New Mexico, on 8 November. On 29 April 1946, Enola Gay left Roswell as part of the Operation Crossroads nuclear weapons tests in the Pacific. It flew to Kwajalein Atoll on 1 May. It was not chosen to make the test drop at Bikini Atoll and left Kwajalein on 1 July, the date of the test, reaching Fairfield-Suisun Army Air Field, California, the next day. The decision was made to preserve the Enola Gay, and on 24 July 1946, the aircraft was flown to Davis–Monthan Air Force Base, Tucson, Arizona, in preparation for storage.
On 30 August 1946, the title to the aircraft was transferred to the Smithsonian Institution and the Enola Gay was removed from the USAAF inventory. From 1946 to 1961, the Enola Gay was put into temporary storage at a number of locations. It was at Davis-Monthan from 1 September 1946 until 3 July 1949, when it was flown to Orchard Place Air Field, Park Ridge, Illinois, by Tibbets for acceptance by the Smithsonian. It was moved to Pyote Air Force Base, Texas, on 12 January 1952, and then to Andrews Air Force Base, Maryland, on 2 December 1953, because the Smithsonian had no storage space for the aircraft. It was hoped that the Air Force would guard the plane, but, lacking hangar space, it was left outdoors on a remote part of the air base, exposed to the elements. Souvenir hunters broke in and removed parts. Insects and birds then gained access to the aircraft. Paul E. Garber of the Smithsonian Institution became concerned about the Enola Gay's condition, and on 10 August 1960, Smithsonian staff began dismantling the aircraft. The components were transported to the Smithsonian storage facility at Suitland, Maryland, on 21 July 1961. Enola Gay remained at Suitland for many years. By the early 1980s, two veterans of the 509th, Don Rehl and his former navigator, Frank B. Stewart, began lobbying for the aircraft to be restored and put on display. They enlisted Tibbets and Senator Barry Goldwater in their campaign. In 1983, Walter J. Boyne, a former B-52 pilot with the Strategic Air Command, became director of the National Air and Space Museum, and he made the Enola Gay's restoration a priority. Looking at the aircraft, Tibbets recalled, was a "sad meeting. [My] fond memories, and I don't mean the dropping of the bomb, were the numerous occasions I flew the airplane ... I pushed it very, very hard and it never failed me ... It was probably the most beautiful piece of machinery that any pilot ever flew." Restoration Restoration of the bomber began on 5 December 1984, at the Paul E. Garber Preservation, Restoration, and Storage Facility in Suitland-Silver Hill, Maryland. The propellers that were used on the bombing mission were later shipped to Texas A&M University. One of these propellers was trimmed to for use in the university's Oran W. Nicks Low Speed Wind Tunnel. The lightweight aluminum variable-pitch propeller is powered by a 1,250 kVA electric motor, providing a wind speed up to . Two engines were rebuilt at Garber and two at San Diego Air & Space Museum. Some parts and instruments had been removed and could not be located. Replacements were found or fabricated, and marked so that future curators could distinguish them from the original components. Exhibition controversy Enola Gay became the center of a controversy at the Smithsonian Institution when the museum planned to put its fuselage on public display in 1995 as part of an exhibit commemorating the 50th anniversary of the atomic bombing of Hiroshima. The exhibit, The Crossroads: The End of World War II, the Atomic Bomb and the Cold War, was drafted by the Smithsonian's National Air and Space Museum staff, and arranged around the restored Enola Gay. Critics of the planned exhibit, especially those of the American Legion and the Air Force Association, charged that the exhibit focused too much attention on the Japanese casualties inflicted by the nuclear bomb, rather than on the motives for the bombing or the discussion of the bomb's role in ending the conflict with Japan.
The exhibit brought to national attention many long-standing academic and political issues related to retrospective views of the bombings. After attempts to revise the exhibit to meet the satisfaction of competing interest groups, the exhibit was canceled on 30 January 1995. Martin O. Harwit, Director of the National Air and Space Museum, was compelled to resign over the controversy. He later reflected that The forward fuselage went on display on 28 June 1995. On 2 July 1995, three people were arrested
one can describe the momentum of an electron in units of eV/c. The fundamental velocity constant c is often dropped from the units of momentum by way of defining units of length such that the value of c is unity. For example, if the momentum p of an electron is said to be , then the conversion to MKS can be achieved by: Distance In particle physics, a system of "natural units" in which the speed of light in vacuum c and the reduced Planck constant ħ are dimensionless and equal to unity is widely used: . In these units, both distances and times are expressed in inverse energy units (while energy and mass are expressed in the same units, see mass–energy equivalence). In particular, particle scattering lengths are often presented in units of inverse particle masses. Outside this system of units, the conversion factors between electronvolt, second, and nanometer are the following: The above relations also allow expressing the mean lifetime τ of an unstable particle (in seconds) in terms of its decay width Γ (in eV) via . For example, the B0 meson has a lifetime of 1.530(9) picoseconds, a mean decay length of , or a decay width of . Conversely, the tiny meson mass differences responsible for meson oscillations are often expressed in the more convenient inverse picoseconds. Energy in electronvolts is sometimes expressed through the wavelength of light with photons of the same energy: Temperature In certain fields, such as plasma physics, it is convenient to use the electronvolt to express temperature. The electronvolt is divided by the Boltzmann constant to convert to the Kelvin scale: where kB is the Boltzmann constant, K is kelvin, J is joules, and eV is electronvolts. The factor kB is assumed when using the electronvolt to express temperature, for example, a typical magnetic confinement fusion plasma is (kilo-electronvolts), which is equal to 174 MK (million kelvin). As an approximation: kBT is about (≈ ) at a temperature of . Properties The energy E, frequency ν, and wavelength λ of a photon are related by where h is the Planck constant and c is the speed of light. This reduces to A photon with a wavelength of (green light) would have an energy of approximately . Similarly, would correspond to an infrared photon of wavelength or frequency . Scattering experiments In a low-energy nuclear scattering experiment, it is conventional to refer to the nuclear recoil energy in units of eVr, keVr, etc. This distinguishes the nuclear recoil energy from the "electron equivalent" recoil energy (eVee, keVee, etc.) measured by scintillation light. For example, the yield of a phototube is measured in phe/keVee (photoelectrons per keV electron-equivalent energy). The relationship between eV, eVr, and eVee depends on the medium the scattering takes place in, and must be established empirically for each material. Energy comparisons Per mole One mole of particles given 1 eV of energy has approximately 96.5 kJ of energy – this corresponds to the Faraday constant (F ≈ ), where the energy in joules of n moles of particles each with energy E eV is equal to E·F·n.
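As a minimal, purely illustrative Python sketch of the scalings above: the constant values are standard CODATA-style figures and the example inputs (15 keV, 2.33 eV, 1 eV) are my own illustrative choices, since the specific figures are not given in the text.

```python
# Illustrative conversions between electronvolts and temperature, wavelength, and molar energy.
E_CHARGE = 1.602176634e-19      # joules per eV (exact, by definition of the electronvolt)
K_BOLTZMANN = 1.380649e-23      # J/K
PLANCK = 6.62607015e-34         # J*s
C_LIGHT = 299792458.0           # m/s
FARADAY = 96485.332             # C/mol

def ev_to_kelvin(e_ev: float) -> float:
    """Temperature equivalent: divide the energy by the Boltzmann constant."""
    return e_ev * E_CHARGE / K_BOLTZMANN

def ev_to_wavelength_nm(e_ev: float) -> float:
    """Photon wavelength from E = h*c/lambda, returned in nanometres."""
    return PLANCK * C_LIGHT / (e_ev * E_CHARGE) * 1e9

def ev_per_particle_to_kj_per_mol(e_ev: float) -> float:
    """1 eV per particle corresponds to roughly 96.5 kJ per mole (Faraday constant)."""
    return e_ev * FARADAY / 1000.0

print(ev_to_kelvin(15e3))                 # an illustrative 15 keV plasma -> ~1.7e8 K (~170 MK)
print(ev_to_wavelength_nm(2.33))          # ~2.33 eV photon -> ~532 nm (green light)
print(ev_to_wavelength_nm(1.0))           # 1 eV -> ~1240 nm (infrared)
print(ev_per_particle_to_kj_per_mol(1))   # ~96.5 kJ/mol
```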
of energy (i.e., ). This gives rise to usage of eV (and keV, MeV, GeV or TeV) as units of momentum, for the energy supplied results in acceleration of the particle. The dimensions of momentum units are . The dimensions of energy units are . Then, dividing the units of energy (such as eV) by a fundamental constant that has units of velocity (), facilitates the required conversion of using energy units to describe momentum. In the field of high-energy particle physics, the fundamental velocity unit is the speed of light in vacuum c. By dividing energy in eV by the speed of light, one can describe the momentum of an electron in units of eV/c.
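A minimal sketch of that conversion, assuming an illustrative momentum of 1 MeV/c (the specific value is not taken from the text; the function name is my own):

```python
# Converting a momentum quoted in MeV/c into SI units (kg*m/s): multiply by e, divide by c.
E_CHARGE = 1.602176634e-19   # joules per eV
C_LIGHT = 299792458.0        # m/s

def momentum_si(p_mev_per_c: float) -> float:
    """Convert a momentum given in MeV/c to kg*m/s."""
    energy_joules = p_mev_per_c * 1e6 * E_CHARGE
    return energy_joules / C_LIGHT

print(momentum_si(1.0))   # 1 MeV/c is about 5.34e-22 kg*m/s
```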
C3H8 → 3 CO2 + 20 e− + 20 H+ As in acidic and basic media, the electrons that were used to compensate for oxidation changes are multiplied into the opposite half-reactions, thus solving the equation. 20 H+ + 5 O2 + 20 e− → 10 H2O 6 H2O + C3H8 → 3 CO2 + 20 e− + 20 H+ Equation balanced: C3H8 + 5 O2 → 3 CO2 + 4 H2O Electrochemical cells An electrochemical cell is a device that produces an electric current from the energy released by a spontaneous redox reaction; the reverse process, in which an applied current drives a non-spontaneous redox reaction, is electrolysis. This kind of cell includes the Galvanic cell or Voltaic cell, named after Luigi Galvani and Alessandro Volta, both scientists who conducted several experiments on chemical reactions and electric current during the late 18th century. Electrochemical cells have two conductive electrodes (the anode and the cathode). The anode is defined as the electrode where oxidation occurs and the cathode is the electrode where the reduction takes place. Electrodes can be made from any sufficiently conductive materials, such as metals, semiconductors, graphite, and even conductive polymers. In between these electrodes is the electrolyte, which contains ions that can freely move. The galvanic cell uses two different metal electrodes, each in an electrolyte where the positively charged ions are the oxidized form of the electrode metal. One electrode will undergo oxidation (the anode) and the other will undergo reduction (the cathode). The metal of the anode will oxidize, going from an oxidation state of 0 (in the solid form) to a positive oxidation state and become an ion. At the cathode, the metal ion in solution will accept one or more electrons from the cathode and the ion's oxidation state is reduced to 0. This forms a solid metal that electrodeposits on the cathode. The two electrodes must be electrically connected to each other, allowing for a flow of electrons that leave the metal of the anode and flow through this connection to the ions at the surface of the cathode. This flow of electrons is an electric current that can be used to do work, such as turn a motor or power a light. A galvanic cell whose electrodes are zinc and copper submerged in zinc sulfate and copper sulfate, respectively, is known as a Daniell cell. Half reactions for a Daniell cell are these: Zinc electrode (anode): Zn → Zn2+ + 2 e− Copper electrode (cathode): Cu2+ + 2 e− → Cu In this example, the anode is the zinc metal which is oxidized (loses electrons) to form zinc ions in solution, and copper ions accept electrons from the copper metal electrode and the ions deposit at the copper cathode as an electrodeposit. This cell forms a simple battery as it will spontaneously generate a flow of electric current from the anode to the cathode through the external connection. This reaction can be driven in reverse by applying a voltage, resulting in the deposition of zinc metal at the anode and formation of copper ions at the cathode. To provide a complete electric circuit, there must also be an ionic conduction path between the anode and cathode electrolytes in addition to the electron conduction path. The simplest ionic conduction path is to provide a liquid junction. To avoid mixing between the two electrolytes, the liquid junction can be provided through a porous plug that allows ion flow while reducing electrolyte mixing. To further minimize mixing of the electrolytes, a salt bridge can be used which consists of an electrolyte saturated gel in an inverted U-tube.
As the negatively charged electrons flow in one direction around this circuit, the positively charged metal ions flow in the opposite direction in the electrolyte. A voltmeter is capable of measuring the difference in electrical potential between the anode and the cathode. Electrochemical cell voltage is also referred to as electromotive force or emf. A cell diagram can be used to trace the path of the electrons in the electrochemical cell. For example, here is a cell diagram of a Daniell cell: Zn | Zn2+ (1 M) || Cu2+ (1 M) | Cu First, the reduced form of the metal to be oxidized at the anode (Zn) is written. This is separated from its oxidized form by a vertical line, which represents the limit between the phases (oxidation changes). The double vertical lines represent the salt bridge of the cell. Finally, the oxidized form of the metal to be reduced at the cathode is written, separated from its reduced form by the vertical line. The electrolyte concentration is given as it is an important variable in determining the cell potential. Standard electrode potential To allow prediction of the cell potential, tabulations of standard electrode potential are available. Such tabulations are referenced to the standard hydrogen electrode (SHE). The standard hydrogen electrode undergoes the reaction 2 H+ + 2 e− → H2 which is shown as reduction but, in fact, the SHE can act as either the anode or the cathode, depending on the relative oxidation/reduction potential of the other electrode/electrolyte combination. The term standard in SHE requires a supply of hydrogen gas bubbled through the electrolyte at a pressure of 1 atm and an acidic electrolyte with H+ activity equal to 1 (usually assumed to be [H+] = 1 mol/liter). The SHE electrode can be connected to any other electrode by a salt bridge to form a cell. If the second electrode is also at standard conditions, then the measured cell potential is called the standard electrode potential for the electrode. The standard electrode potential for the SHE is zero, by definition. The polarity of the standard electrode potential provides information about the relative reduction potential of the electrode compared to the SHE. If the electrode has a positive potential with respect to the SHE, then it is more readily reduced than hydrogen and forces the SHE to act as the anode (an example is Cu in aqueous CuSO4, with a standard electrode potential of 0.337 V). Conversely, if the measured potential is negative, the electrode is more readily oxidized than the SHE and acts as the anode (such as Zn in ZnSO4, where the standard electrode potential is −0.76 V). Standard electrode potentials are usually tabulated as reduction potentials. However, the reactions are reversible and the role of a particular electrode in a cell depends on the relative oxidation/reduction potential of both electrodes. The oxidation potential for a particular electrode is just the negative of the reduction potential. A standard cell potential can be determined by looking up the standard electrode potentials for both electrodes (sometimes called half cell potentials). The one with the smaller (more negative) reduction potential will be the anode and will undergo oxidation. The cell potential is then calculated as the sum of the reduction potential for the cathode and the oxidation potential for the anode.
E°cell = E°red (cathode) – E°red (anode) = E°red (cathode) + E°oxi (anode) For example, the standard electrode potential for a copper electrode is: Cell diagram Pt | H2 (1 atm) | H+ (1 M) || Cu2+ (1 M) | Cu E°cell = E°red (cathode) – E°red (anode) At standard temperature, pressure and concentration conditions, the cell's emf (measured by a multimeter) is 0.34 V. By definition, the electrode potential for the SHE is zero. Thus, the Cu is the cathode and the SHE is the anode giving Ecell = E°(Cu2+/Cu) – E°(H+/H2) Or, E°(Cu2+/Cu) = 0.34 V Changes in the stoichiometric coefficients of a balanced cell equation will not change E°red value because the standard electrode potential is an intensive property. Spontaneity of redox reaction During operation of electrochemical cells, chemical energy is transformed into electrical energy and is expressed mathematically as the product of the cell's emf and the electric charge transferred through the external circuit. Electrical energy = EcellCtrans where Ecell is the cell potential measured in volts (V) and Ctrans is the cell current integrated over time and measured in coulombs (C); Ctrans can also be determined by multiplying the total number of electrons transferred (measured in moles) times Faraday's constant (F). The emf of the cell at zero current is the maximum possible emf. It is used to calculate the maximum possible electrical energy that could be obtained from a chemical reaction. This energy is referred to as electrical work and is expressed by the following equation: , where work is defined as positive into the system. Since the free energy is the maximum amount of work that can be extracted from a system, one can write: A positive cell potential gives a negative change in Gibbs free energy. This is consistent with the cell production of an electric current from the cathode to the anode through the external circuit. If the current is driven in the opposite direction by imposing an external potential, then work is done on the cell to drive electrolysis. A spontaneous electrochemical reaction (change in Gibbs free energy less than zero) can be used to generate an electric current in electrochemical cells. This is the basis of all batteries and fuel cells. For example, gaseous oxygen (O2) and hydrogen (H2) can be combined in a fuel cell to form water and energy, typically a combination of heat and electrical energy. Conversely, non-spontaneous electrochemical reactions can be driven forward by the application of a current at sufficient voltage. The electrolysis of water into gaseous oxygen and hydrogen is a typical example. The relation between the equilibrium constant, K, and the Gibbs free energy for an electrochemical cell is expressed as follows: . Rearranging to express the relation between standard potential and equilibrium constant yields . The previous equation can use Briggsian logarithm as shown below: Cell emf dependency on changes in concentration Nernst equation The standard potential of an electrochemical cell requires standard conditions (ΔG°) for all of the reactants. When reactant concentrations differ from standard conditions, the cell potential will deviate from the standard potential. In the 20th century German chemist Walther Nernst proposed a mathematical model to determine the effect of reactant concentration on electrochemical cell potential. 
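As a purely illustrative numeric sketch of these relations, the following uses the standard reduction potentials quoted earlier in this section for the Daniell cell (the code and its variable names are my own, not part of the source text):

```python
import math

# Standard reduction potentials quoted above: Cu2+/Cu = +0.337 V, Zn2+/Zn = -0.76 V.
F = 96485.332      # Faraday constant, C/mol
R = 8.3145         # gas constant, J/(K*mol)
T = 298.15         # standard temperature, K
n = 2              # electrons transferred in the Daniell cell

e_red_cathode = 0.337                 # Cu2+ + 2 e- -> Cu
e_red_anode = -0.76                   # Zn2+ + 2 e- -> Zn
e_cell = e_red_cathode - e_red_anode  # standard cell potential, ~1.10 V

delta_g = -n * F * e_cell             # ~ -2.1e5 J/mol: negative, so the reaction is spontaneous
k_eq = math.exp(-delta_g / (R * T))   # equilibrium constant, on the order of 1e37

print(e_cell, delta_g, k_eq)
```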
Cell emf dependency on changes in concentration Nernst equation The standard potential of an electrochemical cell refers to standard conditions for all of the reactants. When reactant concentrations differ from standard conditions, the cell potential will deviate from the standard potential. In the 20th century, the German chemist Walther Nernst proposed a mathematical model to determine the effect of reactant concentration on the electrochemical cell potential. In the late 19th century, Josiah Willard Gibbs had formulated a theory to predict whether a chemical reaction is spontaneous based on the free energy: ΔG = ΔG° + RT ln Q. Here ΔG is the change in Gibbs free energy, ΔG° is the standard free energy change (the value of ΔG when Q is equal to 1), T is the absolute temperature (in kelvin), R is the gas constant and Q is the reaction quotient, which can be found by dividing the activities of the products by those of the reactants, using only those products and reactants that are aqueous or gaseous. Gibbs' key contribution was to formalize the understanding of the effect of reactant concentration on spontaneity. Based on Gibbs' work, Nernst extended the theory to include the contribution from electric potential on charged species. As shown in the previous section, the change in Gibbs free energy for an electrochemical cell can be related to the cell potential. Thus, Gibbs' theory becomes nFΔE = nFΔE° − RT ln Q. Here n is the number of moles of electrons transferred per mole of product, F is the Faraday constant (coulombs/mole), and ΔE is the cell potential. Finally, Nernst divided through by the amount of charge transferred to arrive at a new equation which now bears his name: ΔE = ΔE° − (RT / nF) ln Q. Assuming standard conditions (T = 25 °C) and R = 8.3145 J/(K·mol), the equation above can be expressed in terms of the base-10 logarithm as ΔE = ΔE° − (0.05916 V / n) log Q. Note that RT/F is also known as the thermal voltage VT and is found in the study of plasmas and semiconductors as well. The value 0.05916 V in the above equation is just the thermal voltage at standard temperature multiplied by the natural logarithm of 10. Concentration cells A concentration cell is an electrochemical cell where the two electrodes are the same material, the electrolytes in the two half-cells involve the same ions, but the electrolyte concentration differs between the two half-cells. An example is an electrochemical cell where two copper electrodes are submerged in two copper(II) sulfate solutions, whose concentrations are 0.05 M and 2.0 M, connected through a salt bridge. This type of cell will generate a potential that can be predicted by the Nernst equation. Both electrodes undergo the same chemistry (although the reaction proceeds in reverse at the anode): Cu2+ + 2 e− → Cu Le Chatelier's principle indicates that the reaction is more favorable to reduction as the concentration of Cu2+ ions increases. Reduction will take place in the cell's compartment where the concentration is higher, and oxidation will occur on the more dilute side. The following cell diagram describes the cell mentioned above: Cu | Cu2+ (0.05 M) || Cu2+ (2.0 M) | Cu where the half-cell reactions for oxidation and reduction are: Oxidation: Cu → Cu2+ (0.05 M) + 2 e− Reduction: Cu2+ (2.0 M) + 2 e− → Cu Overall reaction: Cu2+ (2.0 M) → Cu2+ (0.05 M) The cell's emf is calculated through the Nernst equation as E = E° − (0.05916 V / 2) log([Cu2+]dilute / [Cu2+]concentrated). The value of E° in this kind of cell is zero, as the electrodes and ions are the same in both half-cells. After replacing the values from the case mentioned, it is possible to calculate the cell's potential: E = 0 − (0.05916 V / 2) log(0.05 / 2.0) = 0.0474 V, or, equivalently, E = (0.05916 V / 2) log(2.0 / 0.05) = 0.0474 V. However, this value is only approximate, as the reaction quotient is defined in terms of ion activities, which are only approximated by the concentrations used here. The Nernst equation plays an important role in understanding electrical effects in cells and organelles. Such effects include nerve synapses and cardiac beat as well as the resting potential of a somatic cell.
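The concentration-cell result above can be checked with a few lines of Python. This is an illustrative sketch only; as noted in the text, activities are approximated by concentrations.

import math

R, T, F = 8.3145, 298.15, 96485.0   # J/(K*mol), K, C/mol

def nernst(E_standard, n, Q):
    """Cell potential from the Nernst equation: E = E_standard - (RT/nF) ln Q."""
    return E_standard - (R * T / (n * F)) * math.log(Q)

# Copper concentration cell: Cu2+ (2.0 M) -> Cu2+ (0.05 M), E_standard = 0, n = 2
Q = 0.05 / 2.0
print(nernst(0.0, 2, Q))   # roughly 0.047 V, matching the hand calculation above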
Battery Many types of battery have been commercialized and represent an important practical application of electrochemistry. Early wet cells powered the first telegraph and telephone systems, and were the source of current for electroplating. The zinc–manganese dioxide dry cell was the first portable, non-spillable battery type that made flashlights and other portable devices practical. The mercury battery, using zinc and mercuric oxide, provided higher levels of power and capacity than the original dry cell for early electronic devices, but has been phased out of common use due to the danger of mercury pollution from discarded cells. The lead–acid battery was the first practical secondary (rechargeable) battery that could have its capacity replenished from an external source. The electrochemical reaction that produced current was (to a useful degree) reversible, allowing electrical energy and chemical energy to be interchanged as needed. Common lead–acid batteries contain a mixture of sulfuric acid and water, as well as lead plates. The most common mixture used today is 30% acid. One problem, however, is that if the battery is left uncharged, lead sulfate crystallizes within the plates, rendering the battery useless. These batteries last an average of 3 years with daily use, although it is not unheard of for a lead–acid battery to still be functional after 7–10 years. Lead–acid cells continue to be widely used in automobiles. All the preceding types have water-based electrolytes, which limits the maximum voltage per cell. The freezing of water limits low-temperature performance. The lithium battery, which does not (and cannot) use water in the electrolyte, provides improved performance over other types; a rechargeable lithium-ion battery is an essential part of many mobile devices. The flow battery, an experimental type, offers the option of vastly larger energy capacity because its reactants can be replenished from external reservoirs. The fuel cell can turn the chemical energy bound in hydrocarbon gases or hydrogen directly into electrical energy with much higher efficiency than any combustion process; such devices have powered many spacecraft and are being applied to grid energy storage for the public power system. Corrosion Corrosion is an electrochemical process which reveals itself as rust or tarnish on metals like iron or copper and their respective alloys, steel and brass. Iron corrosion For iron rust to occur, the metal has to be in contact with oxygen and water, although the chemical reactions for this process are relatively complex and not all of them are completely understood. It is believed the causes are the following: Electron transfer (reduction–oxidation) One area on the surface of the metal acts as the anode, which is where the oxidation (corrosion) occurs. At the anode, the metal gives up electrons: Fe → Fe2+ + 2 e− Electrons are transferred from iron, reducing oxygen in the atmosphere into water at the cathode, which is located in another region of the metal: O2 + 4 H+ + 4 e− → 2 H2O Global reaction for the process: 2 Fe + O2 + 4 H+ → 2 Fe2+ + 2 H2O Standard emf for iron rusting: E° = E° (cathode) − E° (anode) = 1.23 V − (−0.44 V) = 1.67 V Iron corrosion takes place in an acid medium; the H+ ions come from the reaction between carbon dioxide in the atmosphere and water, forming carbonic acid. The Fe2+ ions are oxidized further, following this equation: 4 Fe2+ + O2 + 6 H2O → 2 Fe2O3·H2O + 8 H+ Iron(III) oxide hydrate is known as rust. The amount of water associated with the iron oxide varies; here the chemical formula is represented as Fe2O3·H2O.
An electric circuit is formed as the passage of electrons and ions occurs; thus, if an electrolyte is present it will facilitate oxidation, which explains why rusting is quicker in salt water. Corrosion of common metals Coinage metals, such as copper and silver, slowly corrode through use. A patina of green-blue copper carbonate forms on the surface of copper with exposure to the water and carbon dioxide in the air. Silver coins or cutlery that are exposed to high-sulfur foods such as eggs, or to the low levels of sulfur species in the air, develop a layer of black silver sulfide. Gold and platinum are extremely difficult to oxidize under normal circumstances, and require exposure to a powerful chemical oxidizing agent such as aqua regia. Some common metals oxidize extremely rapidly in air. Titanium and aluminium oxidize instantaneously in contact with the oxygen in the air. These metals form an extremely thin layer of oxidized metal on the surface which bonds with the underlying metal. This thin layer of oxide protects the underlying layers of the metal from the air, preventing the entire metal from oxidizing. These metals are used in applications where corrosion resistance is important. Iron, in contrast, has an oxide that forms in air and water, called rust, that does not bond with the iron and therefore does not stop the further oxidation of the iron. Thus iron left exposed to air and water will continue to rust until all of the iron is oxidized. Prevention of corrosion Attempts to save a metal from becoming anodic are of two general types. Anodic regions dissolve and destroy the structural integrity of the metal. While it is almost impossible to prevent anode/cathode formation, if a non-conducting material covers the metal, contact with the electrolyte is not possible and corrosion will not occur. Coating Metals can be coated with paint or other less conductive metals (passivation). This prevents the metal surface from being exposed to electrolytes. Scratches exposing the metal substrate will result in corrosion, with the region under the coating adjacent to the scratch acting as the anode of the reaction. See Anodizing. Sacrificial anodes A method commonly used to protect a structural metal is to attach a metal which is more anodic than the metal to be protected. This forces the structural metal to be cathodic, thus sparing it from corrosion. It is called "sacrificial" because the anode dissolves and has to be replaced periodically. Zinc bars are attached to various locations on steel ship hulls to render the hull cathodic; the zinc bars are replaced periodically. Other metals, such as magnesium, would work very well, but zinc is the least expensive useful metal. To protect pipelines, an ingot of magnesium (or zinc) is buried beside the pipeline and connected electrically to the pipe above ground; the pipeline is forced to be a cathode and is protected from being oxidized and rusting. The magnesium anode is sacrificed, and at intervals new ingots are buried to replace those lost. Electrolysis The spontaneous redox reactions of a conventional battery produce electricity through the different chemical potentials of the cathode and anode in the electrolyte. However, electrolysis requires an external source of electrical energy to induce a chemical reaction, and this process takes place in a compartment called an electrolytic cell. Electrolysis of molten sodium chloride When molten, the salt sodium chloride can be electrolyzed to yield metallic sodium and gaseous chlorine.
Industrially, this process takes place in a special cell called a Downs cell. The cell is connected to an electrical power supply, allowing electrons to migrate from the power supply to the electrolytic cell. The reactions that take place in the Downs cell are the following: Anode (oxidation): 2 Cl− → Cl2 + 2 e− Cathode (reduction): 2 Na+ + 2 e− → 2 Na Overall reaction: 2 Na+ + 2 Cl− → 2 Na + Cl2 This process can yield large amounts of metallic sodium and gaseous chlorine, and is widely used in the mineral dressing and metallurgical industries. The emf for
this process is approximately −4 V, indicating a (very) non-spontaneous process. In order for this reaction to occur, the power supply should provide at least a potential of 4 V.
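The figure of roughly −4 V can be reproduced from commonly tabulated standard reduction potentials (about −2.71 V for Na+/Na and +1.36 V for Cl2/Cl−); these values are not quoted elsewhere in this article and refer to standard aqueous conditions, so the Python sketch below is only a rough, illustrative estimate for the molten-salt cell.

# Rough estimate of the cell emf for 2 NaCl -> 2 Na + Cl2
E_red_Na = -2.71   # Na+ + e-   -> Na      (cathode half-reaction), volts
E_red_Cl = 1.36    # Cl2 + 2 e- -> 2 Cl-   (runs in reverse at the anode), volts

E_cell = E_red_Na - E_red_Cl   # E = E(cathode) - E(anode)
print(E_cell)                  # about -4.07 V: non-spontaneous, so at least ~4 V must be applied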
However, larger voltages must be used for this reaction to occur at a high rate. Electrolysis of water Water can be converted to its component elemental gases, H2 and O2, through the application of an external voltage. Water does not decompose into hydrogen and oxygen spontaneously, as the Gibbs free energy for the process at standard conditions is about +474.4 kJ (for two moles of liquid water). The decomposition of water into hydrogen and oxygen can be performed in an electrolytic cell. In it, a pair of inert electrodes, usually made of platinum and immersed in water, act as anode and cathode in the electrolytic process. The electrolysis starts with the application of an external voltage between the electrodes. Without an electrolyte such as sodium chloride or sulfuric acid (most often used at 0.1 M), this process will not occur except at extremely high voltages. Bubbles of the gases will be seen near both electrodes. The following half-reactions describe the process mentioned above: Anode (oxidation): 2 H2O → O2 + 4 H+ + 4 e− Cathode (reduction): 2 H2O + 2 e− → H2 + 2 OH− Overall reaction: 2 H2O → 2 H2 + O2 Although strong acids may be used in the apparatus, the reaction will not net consume the acid. While this reaction will work at any conductive electrode at a sufficiently large potential, platinum catalyzes both hydrogen and oxygen formation, allowing for relatively mild voltages (~2 V depending on the pH). Electrolysis of aqueous solutions Electrolysis in an aqueous solution is a process similar to that described for the electrolysis of water. However, it is considered to be a complex process, because the species present in solution have to be analyzed in terms of half-reactions to determine whether they are reduced or oxidized. Electrolysis of a solution of sodium chloride The presence of water in a solution of sodium chloride must be examined with respect to its reduction and oxidation at both electrodes. Usually, water is electrolysed as described above, yielding gaseous oxygen at the anode and gaseous hydrogen at the cathode. On the other hand, sodium chloride in water dissociates into Na+ and Cl− ions; the cation, which is the positive ion, is attracted to the cathode (−), where it could be reduced, while the anion is attracted to the anode (+), where the chloride ion could be oxidized. The following half-reactions describe the process
were exacerbated. Poor sanitary arrangements resulted in a high incidence of disease, with outbreaks of cholera occurring in 1832, 1848 and 1866. The construction of the New Town from 1767 onwards witnessed the migration of the professional and business classes from the difficult living conditions in the Old Town to the lower density, higher quality surroundings taking shape on land to the north. Expansion southwards from the Old Town saw more tenements being built in the 19th century, giving rise to Victorian suburbs such as Dalry, Newington, Marchmont and Bruntsfield. Early 20th-century population growth coincided with lower-density suburban development. As the city expanded to the south and west, detached and semi-detached villas with large gardens replaced tenements as the predominant building style. Nonetheless, the 2001 census revealed that over 55% of Edinburgh's population were still living in tenements or blocks of flats, a figure in line with other Scottish cities, but much higher than other British cities, and even central London. From the early to mid 20th century, the growth in population, together with slum clearance in the Old Town and other areas, such as Dumbiedykes, Leith, and Fountainbridge, led to the creation of new estates such as Stenhouse and Saughton, Craigmillar and Niddrie, Pilton and Muirhouse, Piershill, and Sighthill. Religion In 2018 the Church of Scotland had 20,956 members in 71 congregations in the Presbytery of Edinburgh. Its most prominent church is St Giles' on the Royal Mile, first dedicated in 1243 but believed to date from before the 12th century. Saint Giles is historically the patron saint of Edinburgh. St Cuthbert's, situated at the west end of Princes Street Gardens in the shadow of Edinburgh Castle and St Giles' can lay claim to being the oldest Christian sites in the city, though the present St Cuthbert's, designed by Hippolyte Blanc, was dedicated in 1894. Other Church of Scotland churches include Greyfriars Kirk, the Canongate Kirk, St Andrew's and St George's West Church and the Barclay Church. The Church of Scotland Offices are in Edinburgh, as is the Assembly Hall where the annual General Assembly is held. The Roman Catholic Archdiocese of St Andrews and Edinburgh has 27 parishes across the city. The Archbishop of St Andrews and Edinburgh has his official residence in Greenhill, and the diocesan offices are in nearby Marchmont. The Diocese of Edinburgh of the Scottish Episcopal Church has over 50 churches, half of them in the city. Its centre is the late-19th-century Gothic style St Mary's Cathedral in the West End's Palmerston Place. Orthodox Christianity is represented by Pan, Romanian and Russian Orthodox churches. There are several independent churches in the city, both Catholic and Protestant, including Charlotte Chapel, Carrubbers Christian Centre, Bellevue Chapel and Sacred Heart. There are also churches belonging to Quakers, Christadelphians, Seventh-day Adventists, Church of Christ, Scientist, The Church of Jesus Christ of Latter-day Saints (LDS Church) and Elim Pentecostal Church. Muslims have several places of worship across the city. Edinburgh Central Mosque, the largest Islamic place of worship, is located in Potterrow on the city's Southside, near Bristo Square. Construction was largely financed by a gift from King Fahd of Saudi Arabia and was completed in 1998. There is also an Ahmadiyya Muslim community. The first recorded presence of a Jewish community in Edinburgh dates back to the late 18th century. 
Edinburgh's Orthodox synagogue, opened in 1932, is in Salisbury Road and can accommodate a congregation of 2000. A Liberal Jewish congregation also meets in the city. A Sikh gurdwara and a Hindu mandir are located in Leith. The city also has a Brahma Kumaris centre in the Polwarth area. The Edinburgh Buddhist Centre, run by the Triratna Buddhist Community, formerly situated in Melville Terrace, now runs sessions at the Healthy Life Centre, Bread Street. Other Buddhist traditions are represented by groups which meet in the capital: the Community of Interbeing (followers of Thich Nhat Hanh), Rigpa, Samye Dzong, Theravadin, Pure Land and Shambala. There is a Sōtō Zen Priory in Portobello and a Theravadin Thai Buddhist Monastery in Slateford Road. Edinburgh is home to a Baháʼí community, and a Theosophical Society meets in Great King Street. Edinburgh has an Inter-Faith Association. Edinburgh has over 39 graveyards and cemeteries, many of which are listed and of historical character, including several former church burial grounds. Examples include Old Calton Burial Ground, Greyfriars Kirkyard and Dean Cemetery. Economy Edinburgh has the strongest economy of any city in the United Kingdom outside London and the highest percentage of professionals in the UK with 43% of the population holding a degree-level or professional qualification. According to the Centre for International Competitiveness, it is the most competitive large city in the United Kingdom. It also has the highest gross value added per employee of any city in the UK outside London, measuring £57,594 in 2010. It was named European Best Large City of the Future for Foreign Direct Investment and Best Large City for Foreign Direct Investment Strategy in the Financial Times fDi magazine awards 2012/13. In the 19th century, Edinburgh's economy was known for banking and insurance, publishing and printing, and brewing and distilling. Today, its economy is based mainly on financial services, scientific research, higher education, and tourism. In March 2010, unemployment in Edinburgh was comparatively low at 3.6%, and it remains consistently below the Scottish average of 4.5%. Edinburgh is the second most visited city by foreign visitors in the UK after London. Banking has been a mainstay of the Edinburgh economy for over 300 years, since the Bank of Scotland was established by an act of the Scottish Parliament in 1695. Today, the financial services industry, with its particularly strong insurance and investment sectors, and underpinned by Edinburgh-based firms such as Scottish Widows and Standard Life Aberdeen, accounts for the city being the UK's second financial centre after London and Europe's fourth in terms of equity assets. The NatWest Group (formerly Royal Bank of Scotland Group) opened new global headquarters at Gogarburn in the west of the city in October 2005. The city is home to the headquarters of Bank of Scotland, Sainsbury's Bank, Tesco Bank, and TSB Bank. Tourism is also an important element in the city's economy. As a World Heritage Site, tourists visit historical sites such as Edinburgh Castle, the Palace of Holyroodhouse and the Old and New Towns. Their numbers are augmented in August each year during the Edinburgh Festivals, which attracts 4.4 million visitors, and generates over £100m for the local economy. As the centre of Scotland's government and legal system, the public sector plays a central role in Edinburgh's economy. Many departments of the Scottish Government are in the city. 
Other major employers include NHS Scotland and local government administration. When the £1.3bn Edinburgh & South East Scotland City Region Deal was signed in 2018, the region's Gross Value Added (GVA) contribution to the Scottish economy was cited as £33bn, or 33% of the country's output. But the Deal's partners noted that prosperity was not evenly spread across the city region, citing 22.4% of children living in poverty and a shortage of affordable housing. Culture Festivals and celebrations Edinburgh festival The city hosts a series of festivals that run between the end of July and early September each year. The best known of these events are the Edinburgh Festival Fringe, the Edinburgh International Festival, the Edinburgh Military Tattoo, the Edinburgh Art Festival and the Edinburgh International Book Festival. The longest established of these festivals is the Edinburgh International Festival, which was first held in 1947 and consists mainly of a programme of high-profile theatre productions and classical music performances, featuring international directors, conductors, theatre companies and orchestras. This has since been overtaken in size by the Edinburgh Fringe which began as a programme of marginal acts alongside the "official" Festival and has become the world's largest performing arts festival. In 2017, nearly 3400 different shows were staged in 300 venues across the city. Comedy has become one of the mainstays of the Fringe, with numerous well-known comedians getting their first 'break' there, often by being chosen to receive the Edinburgh Comedy Award. The Edinburgh Military Tattoo, occupies the Castle Esplanade every night for three weeks each August, with massed pipe bands and military bands drawn from around the world. Performances end with a short fireworks display. As well as the summer festivals, many other festivals are held during the rest of the year, including the Edinburgh International Film Festival and Edinburgh International Science Festival. The summer of 2020 was the first time in its 70-year history that the Edinburgh festival was not run, being cancelled due to the COVID-19 pandemic. This affected many of the tourist-focused businesses in Edinburgh which depend on the various festivals over summer to return an annual profit. Edinburgh's Hogmanay The annual Edinburgh Hogmanay celebration was originally an informal street party focused on the Tron Kirk in the Old Town's High Street. Since 1993, it has been officially organised with the focus moved to Princes Street. In 1996, over 300,000 people attended, leading to ticketing of the main street party in later years up to a limit of 100,000 tickets. Hogmanay now covers four days of processions, concerts and fireworks, with the street party beginning on Hogmanay. Alternative tickets are available for entrance into the Princes Street Gardens concert and Cèilidh, where well-known artists perform and ticket holders can participate in traditional Scottish cèilidh dancing. The event attracts thousands of people from all over the world. Beltane and other festivals On the night of 30 April the Beltane Fire Festival takes place on Calton Hill, involving a procession followed by scenes inspired by pagan old spring fertility celebrations. At the beginning of October each year the Dussehra Hindu Festival is also held on Calton Hill. Music, theatre and film Outside the Festival season, Edinburgh supports several theatres and production companies. 
The Royal Lyceum Theatre has its own company, while the King's Theatre, Edinburgh Festival Theatre and Edinburgh Playhouse stage large touring shows. The Traverse Theatre presents a more contemporary repertoire. Amateur theatre companies' productions are staged at the Bedlam Theatre, Church Hill Theatre and King's Theatre among others. The Usher Hall is Edinburgh's premier venue for classical music, as well as occasional popular music concerts. It was the venue for the Eurovision Song Contest 1972. Other halls staging music and theatre include The Hub, the Assembly Rooms and the Queen's Hall. The Scottish Chamber Orchestra is based in Edinburgh. Edinburgh has two repertory cinemas, the Edinburgh Filmhouse and The Cameo, as well as the independent Dominion Cinema and a range of multiplexes. Edinburgh has a healthy popular music scene. Occasionally large concerts are staged at Murrayfield and Meadowbank, while mid-sized events take place at smaller venues such as 'The Corn Exchange', 'The Liquid Rooms' and 'The Bongo Club'. In 2010, PRS for Music listed Edinburgh among the UK's top ten 'most musical' cities. Several city pubs are well known for their live performances of folk music. They include 'Sandy Bell's' in Forrest Road, 'Captain's Bar' in South College Street and 'Whistlebinkies' in South Bridge. Like many other cities in the UK, Edinburgh has numerous nightclub venues that host electronic dance music events. Edinburgh is home to a flourishing group of contemporary composers such as Nigel Osborne, Peter Nelson, Lyell Cresswell, Hafliði Hallgrímsson, Edward Harper, Robert Crawford, Robert Dow and John McLeod. McLeod's music is heard regularly on BBC Radio 3 and throughout the UK. Media Newspapers The main local newspaper is the Edinburgh Evening News. It is owned and published alongside its sister titles The Scotsman and Scotland on Sunday by JPIMedia. Radio The city has two commercial radio stations: Forth 1, a station which broadcasts mainstream chart music, and Forth 2 on medium wave, which plays classic hits. Capital Radio Scotland and Eklipse Sports Radio also have transmitters covering Edinburgh. Along with the UK national radio stations, Radio Scotland and the Gaelic-language service BBC Radio nan Gàidheal are also broadcast. DAB digital radio is broadcast over two local multiplexes. BFBS Radio broadcasts from studios on the base at Dreghorn Barracks across the city on 98.5 FM as part of its UK Bases network. Television Television, along with most radio services, is broadcast to the city from the Craigkelly transmitting station situated in Fife on the opposite side of the Firth of Forth and the Black Hill transmitting station in North Lanarkshire to the west. There are no television stations based in the city. Edinburgh Television existed in the late 1990s to early 2003 and STV Edinburgh existed from 2015 to 2018. Museums, libraries and galleries Edinburgh has many museums and libraries. These include the National Museum of Scotland, the National Library of Scotland, the National War Museum, the Museum of Edinburgh, Surgeons' Hall Museum, the Writers' Museum, the Museum of Childhood and Dynamic Earth. The Museum on The Mound has exhibits on money and banking. Edinburgh Zoo, on Corstorphine Hill, is the second most visited paid tourist attraction in Scotland, and home to two giant pandas, Tian Tian and Yang Guang, on loan from the People's Republic of China.
Edinburgh is also home to The Royal Yacht Britannia, decommissioned in 1997 and now a five-star visitor attraction and evening events venue permanently berthed at Ocean Terminal. Edinburgh contains Scotland's three National Galleries of Art as well as numerous smaller art galleries. The national collection is housed in the Scottish National Gallery, located on The Mound, comprising the linked National Gallery of Scotland building and the Royal Scottish Academy building. Contemporary collections are shown in the Scottish National Gallery of Modern Art which occupies a split site at Belford. The Scottish National Portrait Gallery on Queen Street focuses on portraits and photography. The council-owned City Art Centre in Market Street mounts regular art exhibitions. Across the road, The Fruitmarket Gallery offers world-class exhibitions of contemporary art, featuring work by British and international artists with both emerging and established international reputations. The city hosts several of Scotland's galleries and organisations dedicated to contemporary visual art. Significant strands of this infrastructure include Creative Scotland, Edinburgh College of Art, Talbot Rice Gallery (University of Edinburgh), Collective Gallery (based at the City Observatory) and the Edinburgh Annuale. There are also many small private shops/galleries that provide space to showcase works from local artists. Shopping The locale around Princes Street is the main shopping area in the city centre, with souvenir shops, chain stores such as Boots the Chemist, Edinburgh Woollen Mill, H&M and Jenners. George Street, north of Princes Street, is the preferred location for some upmarket shops and independent stores. At the east end of Princes Street, the redeveloped St James Quarter opened its doors in June 2021, while next to the Balmoral Hotel and Waverley Station is the Waverley Mall. Multrees Walk, adjacent to the St. James Centre, is a recent addition to the central shopping district, dominated by the presence of Harvey Nichols. Shops here include Louis Vuitton, Mulberry and Calvin Klein. Edinburgh also has substantial retail parks outside the city centre. These include The Gyle Shopping Centre and Hermiston Gait in the west of the city, Cameron Toll Shopping Centre, Straiton Retail Park (actually just outside the city, in Midlothian) and Fort Kinnaird in the south and east, and Ocean Terminal in the north on the Leith waterfront. Governance Local government Following local government reorganisation in 1996, the City of Edinburgh Council constitutes one of the 32 council areas of Scotland. Like all other local authorities of Scotland, the council has powers over most matters of local administration such as housing, planning, local transport, parks, economic development and regeneration. The council comprises 58 elected councillors, returned from 17 multi-member electoral wards in the city. Following the 2007 City of Edinburgh Council election the incumbent Labour Party lost majority control of the council after 23 years to a Liberal Democrat/SNP coalition. The 2012 City of Edinburgh Council election saw a Scottish Labour/SNP coalition. The 2017 City of Edinburgh Council election, saw a continuation of this administration, but with the SNP as the largest party. The city's coat of arms was registered by the Lord Lyon King of Arms in 1732. Scottish Parliament Edinburgh, like all of Scotland, is represented in the Scottish Parliament, situated in the Holyrood area of the city. 
For electoral purposes, the city is divided into six constituencies which, along with 3 seats outside of the city, form part of the Lothian region. Each constituency elects one Member of the Scottish Parliament (MSP) by the first past the post system of election, and the region elects seven additional MSPs to produce a result based on a form of proportional representation. As of the 2016 election, the Scottish National Party have three MSPs: Ash Denham for Edinburgh Eastern, Ben Macpherson for Edinburgh Northern and Leith and Gordon MacDonald for Edinburgh Pentlands constituencies. Alex Cole-Hamilton of the Scottish Liberal Democrats represents Edinburgh Western, Daniel Johnson of the Scottish Labour Party represents the Edinburgh Southern constituency, and the former Leader of the Scottish Conservative Party, Ruth Davidson, represents the Edinburgh Central constituency. In addition, the city is represented by seven regional MSPs for the Lothian electoral region: the Conservatives have three regional MSPs (Jeremy Balfour, Miles Briggs and Gordon Lindhurst), Labour have two (Sarah Boyack and Neil Findlay), the Scottish Greens have one (Alison Johnstone), and there is one independent MSP, Andy Wightman (elected as a Scottish Green). UK Parliament Edinburgh is also represented in the House of Commons of the United Kingdom by five Members of Parliament. The city is divided into Edinburgh North and Leith, Edinburgh East, Edinburgh South, Edinburgh South West, and Edinburgh West, each constituency electing one member by the first past the post system. Edinburgh is represented by three MPs affiliated with the Scottish National Party, one Liberal Democrat MP in Edinburgh West and one Labour MP in Edinburgh South. Transport Edinburgh Airport is Scotland's busiest airport and the principal international gateway to the capital, handling over 14.7 million passengers; it was also the sixth-busiest airport in the United Kingdom by total passengers in 2019. In anticipation of rising passenger numbers, the airport's former operator, BAA, outlined a draft masterplan in 2011 to provide for the expansion of the airfield and the terminal building. In June 2012, Global Infrastructure Partners purchased the airport for £807 million. The possibility of building a second runway to cope with an increased number of aircraft movements has also been mooted. Travel in Edinburgh is undertaken predominantly by bus. Lothian Buses, the successor company to Edinburgh Corporation Transport Department, operate the majority of city bus services within the city and to surrounding suburbs, with most routes running via Princes Street. Services further afield operate from the Edinburgh Bus Station off St Andrew Square and Waterloo Place and are operated mainly by Stagecoach East Scotland, Scottish Citylink, National Express Coaches and Borders Buses. Lothian Buses also operates all of the city's branded public tour buses, night bus service and airport bus link. In 2019, Lothian Buses recorded 124.2 million passenger journeys. Edinburgh Waverley is the second-busiest railway station in Scotland, with only Glasgow Central handling more passengers. On the evidence of passenger
Site, tourists visit historical sites such as Edinburgh Castle, the Palace of Holyroodhouse and the Old and New Towns. Their numbers are augmented in August each year during the Edinburgh Festivals, which attracts 4.4 million visitors, and generates over £100m for the local economy. As the centre of Scotland's government and legal system, the public sector plays a central role in Edinburgh's economy. Many departments of the Scottish Government are in the city. Other major employers include NHS Scotland and local government administration. When the £1.3bn Edinburgh & South East Scotland City Region Deal was signed in 2018, the region's Gross Value Added (GVA) contribution to the Scottish economy was cited as £33bn, or 33% of the country's output. But the Deal's partners noted that prosperity was not evenly spread across the city region, citing 22.4% of children living in poverty and a shortage of affordable housing. Culture Festivals and celebrations Edinburgh festival The city hosts a series of festivals that run between the end of July and early September each year. The best known of these events are the Edinburgh Festival Fringe, the Edinburgh International Festival, the Edinburgh Military Tattoo, the Edinburgh Art Festival and the Edinburgh International Book Festival. The longest established of these festivals is the Edinburgh International Festival, which was first held in 1947 and consists mainly of a programme of high-profile theatre productions and classical music performances, featuring international directors, conductors, theatre companies and orchestras. This has since been overtaken in size by the Edinburgh Fringe which began as a programme of marginal acts alongside the "official" Festival and has become the world's largest performing arts festival. In 2017, nearly 3400 different shows were staged in 300 venues across the city. Comedy has become one of the mainstays of the Fringe, with numerous well-known comedians getting their first 'break' there, often by being chosen to receive the Edinburgh Comedy Award. The Edinburgh Military Tattoo, occupies the Castle Esplanade every night for three weeks each August, with massed pipe bands and military bands drawn from around the world. Performances end with a short fireworks display. As well as the summer festivals, many other festivals are held during the rest of the year, including the Edinburgh International Film Festival and Edinburgh International Science Festival. The summer of 2020 was the first time in its 70-year history that the Edinburgh festival was not run, being cancelled due to the COVID-19 pandemic. This affected many of the tourist-focused businesses in Edinburgh which depend on the various festivals over summer to return an annual profit. Edinburgh's Hogmanay The annual Edinburgh Hogmanay celebration was originally an informal street party focused on the Tron Kirk in the Old Town's High Street. Since 1993, it has been officially organised with the focus moved to Princes Street. In 1996, over 300,000 people attended, leading to ticketing of the main street party in later years up to a limit of 100,000 tickets. Hogmanay now covers four days of processions, concerts and fireworks, with the street party beginning on Hogmanay. Alternative tickets are available for entrance into the Princes Street Gardens concert and Cèilidh, where well-known artists perform and ticket holders can participate in traditional Scottish cèilidh dancing. The event attracts thousands of people from all over the world. 
Beltane and other festivals On the night of 30 April the Beltane Fire Festival takes place on Calton Hill, involving a procession followed by scenes inspired by pagan old spring fertility celebrations. At the beginning of October each year the Dussehra Hindu Festival is also held on Calton Hill. Music, theatre and film Outside the Festival season, Edinburgh supports several theatres and production companies. The Royal Lyceum Theatre has its own company, while the King's Theatre, Edinburgh Festival Theatre and Edinburgh Playhouse stage large touring shows. The Traverse Theatre presents a more contemporary repertoire. Amateur theatre companies productions are staged at the Bedlam Theatre, Church Hill Theatre and King's Theatre among others. The Usher Hall is Edinburgh's premier venue for classical music, as well as occasional popular music concerts. It was the venue for the Eurovision Song Contest 1972. Other halls staging music and theatre include The Hub, the Assembly Rooms and the Queen's Hall. The Scottish Chamber Orchestra is based in Edinburgh. Edinburgh has two repertory cinemas, the Edinburgh Filmhouse and The Cameo, as well as the independent Dominion Cinema and a range of multiplexes. Edinburgh has a healthy popular music scene. Occasionally large concerts are staged at Murrayfield and Meadowbank, while mid-sized events take place at smaller venues such as 'The Corn Exchange', 'The Liquid Rooms' and 'The Bongo Club'. In 2010, PRS for Music listed Edinburgh among the UK's top ten 'most musical' cities. Several city pubs are well known for their live performances of folk music. They include 'Sandy Bell's' in Forrest Road, 'Captain's Bar' in South College Street and 'Whistlebinkies' in South Bridge. Like many other cities in the UK, numerous nightclub venues host Electronic dance music events. Edinburgh is home to a flourishing group of contemporary composers such as Nigel Osborne, Peter Nelson, Lyell Cresswell, Hafliði Hallgrímsson, Edward Harper, Robert Crawford, Robert Dow and John McLeod. McLeod's music is heard regularly on BBC Radio 3 and throughout the UK. Media Newspapers The main local newspaper is the Edinburgh Evening News. It is owned and published alongside its sister titles The Scotsman and Scotland on Sunday by JPIMedia. Radio The city has two commercial radio stations: Forth 1, a station which broadcasts mainstream chart music, and Forth 2 on medium wave which plays classic hits. Capital Radio Scotland and Eklipse Sports Radio also have transmitters covering Edinburgh. Along with the UK national radio stations, Radio Scotland and the Gaelic language service BBC Radio nan Gàidheal are also broadcast. DAB digital radio is broadcast over two local multiplexes. BFBS Radio broadcasts from studios on the base at Dreghorn Barracks across the city on 98.5FM as part of its UK Bases network Television Television, along with most radio services, is broadcast to the city from the Craigkelly transmitting station situated in Fife on the opposite side of the Firth of Forth and the Black Hill transmitting station in North Lanarkshire to the west. There are no television stations based in the city. Edinburgh Television existed in the late 1990s to early 2003 and STV Edinburgh existed from 2015 to 2018. Museums, libraries and galleries Edinburgh has many museums and libraries. These include the National Museum of Scotland, the National Library of Scotland, National War Museum, the Museum of Edinburgh, Surgeons' Hall Museum, the Writers' Museum, the Museum of Childhood and Dynamic Earth. 
The Museum on The Mound has exhibits on money and banking. Edinburgh Zoo, covering on Corstorphine Hill, is the second most visited paid tourist attraction in Scotland, and home to two giant pandas, Tian Tian and Yang Guang, on loan from the People's Republic of China. Edinburgh is also home to The Royal Yacht Britannia, decommissioned in 1997 and now a five-star visitor attraction and evening events venue permanently berthed at Ocean Terminal. Edinburgh contains Scotland's three National Galleries of Art as well as numerous smaller art galleries. The national collection is housed in the Scottish National Gallery, located on The Mound, comprising the linked National Gallery of Scotland building and the Royal Scottish Academy building. Contemporary collections are shown in the Scottish National Gallery of Modern Art which occupies a split site at Belford. The Scottish National Portrait Gallery on Queen Street focuses on portraits and photography. The council-owned City Art Centre in Market Street mounts regular art exhibitions. Across the road, The Fruitmarket Gallery offers world-class exhibitions of contemporary art, featuring work by British and international artists with both emerging and established international reputations. The city hosts several of Scotland's galleries and organisations dedicated to contemporary visual art. Significant strands of this infrastructure include Creative Scotland, Edinburgh College of Art, Talbot Rice Gallery (University of Edinburgh), Collective Gallery (based at the City Observatory) and the Edinburgh Annuale. There are also many small private shops/galleries that provide space to showcase works from local artists. Shopping The locale around Princes Street is the main shopping area in the city centre, with souvenir shops, chain stores such as Boots the Chemist, Edinburgh Woollen Mill, H&M and Jenners. George Street, north of Princes Street, is the preferred location for some upmarket shops and independent stores. At the east end of Princes Street, the redeveloped St James Quarter opened its doors in June 2021, while next to the Balmoral Hotel and Waverley Station is the Waverley Mall. Multrees Walk, adjacent to the St. James Centre, is a recent addition to the central shopping district, dominated by the presence of Harvey Nichols. Shops here include Louis Vuitton, Mulberry and Calvin Klein. Edinburgh also has substantial retail parks outside the city centre. These include The Gyle Shopping Centre and Hermiston Gait in the west of the city, Cameron Toll Shopping Centre, Straiton Retail Park (actually just outside the city, in Midlothian) and Fort Kinnaird in the south and east, and Ocean Terminal in the north on the Leith waterfront. Governance Local government Following local government reorganisation in 1996, the City of Edinburgh Council constitutes one of the 32 council areas of Scotland. Like all other local authorities of Scotland, the council has powers over most matters of local administration such as housing, planning, local transport, parks, economic development and regeneration. The council comprises 58 elected councillors, returned from 17 multi-member electoral wards in the city. Following the 2007 City of Edinburgh Council election the incumbent Labour Party lost majority control of the council after 23 years to a Liberal Democrat/SNP coalition. The 2012 City of Edinburgh Council election saw a Scottish Labour/SNP coalition. 
The 2017 City of Edinburgh Council election saw a continuation of this administration, but with the SNP as the largest party. The city's coat of arms was registered by the Lord Lyon King of Arms in 1732. Scottish Parliament Edinburgh, like all of Scotland, is represented in the Scottish Parliament, situated in the Holyrood area of the city. For electoral purposes, the city is divided into six constituencies which, along with three seats outside the city, form part of the Lothian region. Each constituency elects one Member of the Scottish Parliament (MSP) by the first-past-the-post system of election, and the region elects seven additional MSPs to produce a result based on a form of proportional representation. As of the 2016 election, the Scottish National Party have three MSPs: Ash Denham for Edinburgh Eastern, Ben Macpherson for Edinburgh Northern and Leith, and Gordon MacDonald for Edinburgh Pentlands. Alex Cole-Hamilton of the Scottish Liberal Democrats represents Edinburgh Western, Daniel Johnson of the Scottish Labour Party represents the Edinburgh Southern constituency, and the former leader of the Scottish Conservative Party, Ruth Davidson, represents the Edinburgh Central constituency. In addition, the city is represented by seven regional MSPs for the Lothian electoral region: the Conservatives have three regional MSPs (Jeremy Balfour, Miles Briggs and Gordon Lindhurst), Labour have two (Sarah Boyack and Neil Findlay), the Scottish Greens have one (Alison Johnstone), and there is one independent MSP, Andy Wightman (elected as a Scottish Green). UK Parliament Edinburgh is also represented in the House of Commons of the United Kingdom by five Members of Parliament. The city is divided into Edinburgh North and Leith, Edinburgh East, Edinburgh South, Edinburgh South West, and Edinburgh West, each constituency electing one member by the first-past-the-post system. Edinburgh is represented by three MPs affiliated with the Scottish National Party, one Liberal Democrat MP in Edinburgh West and one Labour MP in Edinburgh South. Transport Edinburgh Airport is Scotland's busiest airport and the principal international gateway to the capital, handling over 14.7 million passengers in 2019, when it was also the sixth-busiest airport in the United Kingdom by total passengers. In anticipation of rising passenger numbers, the airport's former operator, BAA, outlined a draft masterplan in 2011 to provide for the expansion of the airfield and the terminal building. In June 2012, Global Infrastructure Partners purchased the airport for £807 million. The possibility of building a second runway to cope with an increased number of aircraft movements has also been mooted. Travel in Edinburgh is undertaken predominantly by bus. Lothian Buses, the successor company to Edinburgh Corporation Transport Department, operates the majority of city bus services within the city and to surrounding suburbs, with most routes running via Princes Street. Services further afield operate from the Edinburgh Bus Station off St Andrew Square and Waterloo Place and are operated mainly by Stagecoach East Scotland, Scottish Citylink, National Express Coaches and Borders Buses. Lothian Buses also operates all of the city's branded public tour buses, night bus service and airport bus link. In 2019, Lothian Buses recorded 124.2 million passenger journeys. Edinburgh Waverley is the second-busiest railway station in Scotland, with only Glasgow Central handling more passengers.
On the evidence of passenger entries and exits between April 2015 and March 2016, Edinburgh Waverley is the fifth-busiest station outside London; it is also the UK's second biggest station in terms of the number of platforms and area size. Waverley is the terminus for most trains arriving from London King's Cross and the departure point for many rail services within Scotland operated by Abellio ScotRail. To the west of the city centre lies Haymarket Station which is an important commuter stop. Opened in 2003, Edinburgh Park station serves the Gyle business park in the west of the city and the nearby Gogarburn headquarters of the Royal Bank of Scotland. The Edinburgh Crossrail route connects Edinburgh Park with Haymarket, Edinburgh Waverley and the suburban stations of Brunstane and Newcraighall in the east of the city. There are also commuter lines to South Gyle and Dalmeny, the latter serving South Queensferry by the Forth Bridges, and to Wester Hailes and Curriehill in the south-west of the city. To tackle traffic congestion, Edinburgh is now served by six park and ride sites on the periphery of the city at Sheriffhall (in Midlothian), Ingliston, Riccarton, Inverkeithing (in Fife), Newcraighall and Straiton (in Midlothian). A referendum of Edinburgh residents in February 2005 rejected a proposal to introduce congestion charging in the city. Edinburgh Trams became operational on 31 May 2014. The city had been without a tram system since Edinburgh Corporation Tramways ceased on 16 November 1956. Following parliamentary approval in 2007, construction began in early 2008. The first stage of the project was expected to be completed by July 2011 but, following delays caused by extra utility work and a long-running contractual dispute between the council and the main contractor, Bilfinger SE, the project was rescheduled. The cost of the project rose from the original projection of £545 million to £750 million in mid-2011 and some suggest it could eventually exceed £1 billion. The completed line is in length, running from Edinburgh Airport, west of the city, to its terminus at York Place in the city centre's East End. It was originally planned to continue down Leith Walk to Ocean Terminal and terminate at Newhaven. Should the original plan be taken to completion, trams will also run from Haymarket through Ravelston and Craigleith to Granton Square on the Waterfront Edinburgh. Long-term proposals envisage a line running west from the airport to Ratho and Newbridge and another connecting Granton Square to Newhaven via Lower Granton Road, thus completing the Line 1 (North Edinburgh) loop. A further line serving the south of the city has also been suggested. Lothian Buses and Edinburgh Trams are both owned and operated by Transport for Edinburgh. Despite its modern transport links, Edinburgh has been named the most congested city in the UK for the fourth year running. Education There are three universities in Edinburgh, the University of Edinburgh, Heriot-Watt University and Edinburgh Napier University. Established by royal charter in 1583, the University of Edinburgh is one of Scotland's ancient universities and is the fourth oldest in the country after St Andrews, Glasgow and Aberdeen. Originally centred on Old College the university expanded to premises on The Mound, the Royal Mile and George Square. Today, the King's Buildings in the south of the city contain most of the schools within the College of Science and Engineering. 
In 2002, the medical school moved to purpose built accommodation adjacent to the new Royal Infirmary of Edinburgh at Little France. The university is placed 16th in the QS World University Rankings for 2022. Heriot-Watt University is based at the Riccarton campus in the west of Edinburgh. Originally established in 1821 as the world's first mechanics' institute it was granted university status by royal charter in 1966. It has other campuses in the Scottish Borders, Orkney, United Arab Emirates and Putrajaya in Malaysia. It takes the name Heriot-Watt from Scottish inventor James Watt and Scottish philanthropist and goldsmith George Heriot. Heriot-Watt University has been named International University of the Year by The Times and Sunday Times Good University Guide 2018. In the latest Research Excellence Framework, it was ranked overall in the Top 25% of UK universities and 1st in Scotland for research impact. Edinburgh Napier University was originally founded as the Napier College which was renamed Napier Polytechnic in 1986 and gained university status in 1992. Edinburgh Napier University has campuses in the south and west of the city, including the former Merchiston Tower and Craiglockhart Hydropathic. It is home to the Screen Academy Scotland. Queen Margaret University was located in Edinburgh before it moved to a new campus just outside the city boundary on the edge of Musselburgh in 2008. Until 2012 further education colleges in the city included Jewel and Esk College (incorporating Leith Nautical College founded in 1903), Telford College, opened in 1968, and Stevenson College, opened in 1970. These have now been amalgamated to form Edinburgh College. Scotland's Rural College also has a campus in south Edinburgh. Other institutions include the Royal College of Surgeons of Edinburgh and the Royal College of Physicians of Edinburgh which were established by royal charter in 1506 and 1681 respectively. The Trustees Drawing Academy of Edinburgh, founded in 1760, became the Edinburgh College of Art in 1907. There are 18 nursery, 94 primary and 23 secondary schools administered by the City of Edinburgh Council. Edinburgh is home to The Royal High School, one of the oldest schools in the country and the world. The city also has several independent, fee-paying schools including Edinburgh Academy, Fettes College, George Heriot's School, George Watson's College, Merchiston Castle School, Stewart's Melville College and The Mary Erskine School. In 2009, the proportion of pupils attending independent schools was 24.2%, far above the Scottish national average of just over 7% and higher than in any other region of Scotland. In August 2013, the City of Edinburgh Council opened the city's first stand-alone Gaelic primary school, Bun-sgoil Taobh na Pàirce. Healthcare The main NHS Lothian hospitals serving the Edinburgh area are the Royal Infirmary of Edinburgh, which includes the University of Edinburgh Medical School, and the Western General Hospital, which has a large cancer treatment centre and nurse-led Minor Injuries Clinic. The Royal Edinburgh Hospital in Morningside specialises in mental health. The Royal Hospital for Sick Children, colloquially referred to as 'the Sick Kids', is a specialist paediatrics hospital. There are two private hospitals: Murrayfield Hospital in the west of the city and Shawfair Hospital in the south. Both are owned by Spire Healthcare. 
Sport Football Men's Edinburgh has three football clubs that play in the Scottish Professional Football League (SPFL): Heart of Midlothian, founded in 1874, Hibernian, founded in 1875 and Edinburgh City, founded in 1966. Heart of Midlothian and Hibernian are known locally as "Hearts" and "Hibs", respectively. Both play in the Scottish Premiership. They are the oldest city rivals in Scotland and the Edinburgh derby is one of the oldest derby matches in world football. Both clubs have won the Scottish league championship four times. Hearts have won the Scottish Cup eight times and the Scottish League Cup four times. Hibs have won the Scottish Cup and the Scottish League Cup three times each. Edinburgh City were promoted to Scottish League Two in the 2015–16 season, becoming the first club to win promotion to the SPFL via the pyramid system playoffs. Edinburgh was also home to four other former Scottish Football League clubs: the original Edinburgh City (founded in 1928), Leith Athletic, Meadowbank Thistle and St Bernard's. Meadowbank Thistle played at Meadowbank Stadium until 1995,
represent something different from his own alpha and beta rays, due to its very much greater penetrating power. Rutherford therefore gave this third type of radiation the name of gamma ray. All three of Rutherford's terms are in standard use today – other types of radioactive decay have since been discovered, but Rutherford's three types are among the most common. In 1904, Rutherford suggested that radioactivity provides a source of energy sufficient to explain the existence of the Sun for the many millions of years required for the slow biological evolution on Earth proposed by biologists such as Charles Darwin. The physicist Lord Kelvin had argued earlier for a much younger Earth based on the insufficiency of known energy sources, but Rutherford pointed out at a lecture attended by Kelvin that radioactivity could solve this problem. In Manchester, he continued to work with alpha radiation. In conjunction with Hans Geiger, he developed zinc sulfide scintillation screens and ionisation chambers to count alphas. By dividing the total charge they produced by the number counted, Rutherford decided that the charge on the alpha was two. In late 1907, Ernest Rutherford and Thomas Royds allowed alphas to penetrate a very thin window into an evacuated tube. As they sparked the tube into discharge, the spectrum obtained from it changed, as the alphas accumulated in the tube. Eventually, the clear spectrum of helium gas appeared, proving that alphas were at least ionised helium atoms, and probably helium nuclei. A long-standing myth, in circulation from at least 1948 until at least 2017, held that Rutherford was the first scientist to observe and report an artificial transmutation of a stable element into another element: nitrogen into oxygen. It was thought by many people to be one of Rutherford's greatest accomplishments. The New Zealand government even issued a commemorative stamp in the belief that the nitrogen-to-oxygen discovery belonged to Rutherford. Beginning in 2017, many scientific institutions corrected their versions of this history to indicate that the discovery credit for the reaction belongs to Patrick Blackett. Rutherford did detect the ejected proton in 1919 and interpreted it as evidence for disintegration of the nitrogen nucleus (to lighter nuclei). In 1925, Blackett showed that the actual product is oxygen and identified the true reaction as 14N + α → 17O + p. Rutherford therefore recognized "that the nucleus may increase rather than diminish in mass as the result of collisions in which the proton is expelled". Gold foil experiment Rutherford performed his most famous work after receiving the Nobel Prize in 1908. Along with Hans Geiger and Ernest Marsden in 1909, he carried out the Geiger–Marsden experiment, which demonstrated the nuclear nature of atoms by deflecting alpha particles passing through a thin gold foil. Rutherford was inspired to ask Geiger and Marsden in this experiment to look for alpha particles with very high deflection angles, of a type not expected from any theory of matter at that time. Such deflections, though rare, were found, and proved to be a smooth but high-order function of the deflection angle. It was Rutherford's interpretation of this data that led him to formulate the Rutherford model of the atom in 1911: that a very small charged nucleus, containing much of the atom's mass, was orbited by low-mass electrons.
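For illustration of the "smooth but high-order" angular dependence mentioned above (a standard result added here for reference, not a claim from the passage itself): Rutherford's 1911 point-nucleus model predicts, for Coulomb scattering and in Gaussian units, the differential cross-section

\frac{d\sigma}{d\Omega} \;=\; \left(\frac{Z_1 Z_2 e^{2}}{4E}\right)^{2} \frac{1}{\sin^{4}(\theta/2)}

where E is the kinetic energy of the incoming alpha particle, Z_1 and Z_2 are the charge numbers of the alpha particle and the target nucleus, and \theta is the deflection angle. The steep 1/\sin^{4}(\theta/2) fall-off is why large-angle deflections were observed but remained rare.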
In 1919–1920, Rutherford found that nitrogen and other light elements ejected a proton, which he called a "hydrogen atom", when hit with α (alpha) particles. This result showed Rutherford that hydrogen nuclei were a part of nitrogen nuclei (and by inference, probably other nuclei as well). Such a construction had been suspected for many years on the basis of atomic weights that were whole-number multiples of that of hydrogen; see Prout's hypothesis. Hydrogen was known to be the lightest element, and its nuclei presumably the lightest nuclei. Now, because of all these considerations, Rutherford decided that a hydrogen nucleus was possibly a fundamental building block of all nuclei, and possibly a new fundamental particle as well, since nothing lighter than it was known to come from the nucleus. Thus, confirming and extending the work of Wilhelm Wien, who in 1898 discovered the proton in streams of ionized gas, Rutherford postulated the hydrogen nucleus to be a new particle in 1920, which he dubbed the proton. In 1921, while working with Niels Bohr (who postulated that electrons moved in specific orbits), Rutherford theorized about the existence of neutrons (which he had christened in his 1920 Bakerian Lecture), which could somehow compensate for the repelling effect of the positive charges of protons by causing an attractive nuclear force and thus keep the nuclei from flying apart from the repulsion between protons. The only alternative to neutrons was the existence of "nuclear electrons" which would counteract some of the proton charges in the nucleus, since by then it was known that nuclei had about twice the mass that could be accounted for if they were simply assembled from hydrogen nuclei (protons). But how these nuclear electrons could be trapped in the nucleus was a mystery. Rutherford is widely quoted as saying, regarding the results of these experiments: "It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you." Rutherford's theory of neutrons was proved in 1932 by his associate James Chadwick, who recognized neutrons immediately when they were produced by other scientists, and later by himself, in bombarding beryllium with alpha particles. In 1935, Chadwick was awarded the Nobel Prize in Physics for this discovery. Legacy Rutherford is considered to have been among the greatest scientists in history. At the opening session of the 1938 Indian Science Congress, which Rutherford had been expected to preside over before his death, astrophysicist James Jeans spoke in his place and deemed him "one of the greatest scientists of all time". Nuclear physics Rutherford's research, and work done under him as laboratory director, established the nuclear structure of the atom and the essential nature of radioactive decay as a nuclear process. Patrick Blackett, a research fellow working under Rutherford, using natural alpha particles, demonstrated induced nuclear transmutation. Rutherford's team later, using protons from an accelerator, demonstrated artificially-induced nuclear reactions and transmutation. He is known as the father of nuclear physics. Rutherford died too early to see Leó Szilárd's idea of controlled nuclear chain reactions come into being.
However, a speech of Rutherford's about his artificially induced transmutation of lithium, printed in the London paper The Times on 12 September 1933, was reported by Szilárd to have been his inspiration for thinking of the possibility of a controlled, energy-producing nuclear chain reaction. Szilárd had this idea while walking in London, on the same day. Rutherford's speech touched on the 1932 work of his students John Cockcroft and Ernest Walton in "splitting" lithium into alpha particles by bombardment with protons from a particle accelerator they
the concept of radioactive half-life, the radioactive element radon, and differentiated and named alpha and beta radiation. This work was performed at McGill University in Montreal, Quebec, Canada. It is the basis for the Nobel Prize in Chemistry he was awarded in 1908 "for his investigations into the disintegration of the elements, and the chemistry of radioactive substances", for which he was the first Oceanian Nobel laureate, and the first to perform the awarded work in Canada. In 1904, he was elected as a member to the American Philosophical Society. Rutherford moved in 1907 to the Victoria University of Manchester (today University of Manchester) in the UK, where he and Thomas Royds proved that alpha radiation is helium nuclei. Rutherford performed his most famous work after he became a Nobel laureate. In 1911, although he could not prove that it was positive or negative, he theorized that atoms have their charge concentrated in a very small nucleus, and thereby pioneered the Rutherford model of the atom, through his discovery and interpretation of Rutherford scattering by the gold foil experiment of Hans Geiger and Ernest Marsden. He performed the first artificially induced nuclear reaction in 1917 in experiments where nitrogen nuclei were bombarded with alpha particles. As a result, he discovered the emission of a subatomic particle which, in 1919, he called the "hydrogen atom" but, in 1920, he more accurately named the proton. Rutherford became Director of the Cavendish Laboratory at the University of Cambridge in 1919. Under his leadership the neutron was discovered by James Chadwick in 1932 and in the same year the first experiment to split the nucleus in a fully controlled manner was performed by students working under his direction, John Cockcroft and Ernest Walton. After his death in 1937, he was buried in Westminster Abbey near Sir Isaac Newton. The chemical element rutherfordium (element 104) was named after him in 1997. Biography Early life and education Ernest Rutherford was the son of James Rutherford, a farmer, and his wife Martha Thompson, originally from Hornchurch, Essex, England. James had emigrated to New Zealand from Perth, Scotland, "to raise a little flax and a lot of children". Ernest was born at Brightwater, near Nelson, New Zealand. His first name was mistakenly spelled 'Earnest' when his birth was registered. Rutherford's mother Martha Thompson was a schoolteacher. He studied at Havelock School and then Nelson College and won a scholarship to study at Canterbury College, University of New Zealand, where he participated in the debating society and played rugby. After gaining his BA, MA and BSc, and doing two years of research during which he invented a new form of radio receiver, in 1895 Rutherford was awarded an 1851 Research Fellowship from the Royal Commission for the Exhibition of 1851, to travel to England for postgraduate study at the Cavendish Laboratory, University of Cambridge. He was among the first of the 'aliens' (those without a Cambridge degree) allowed to do research at the university, under the leadership of J. J. Thomson, which aroused jealousies from the more conservative members of the Cavendish fraternity. 
With Thomson's encouragement, he managed to detect radio waves at half a mile and briefly held the world record for the distance over which electromagnetic waves could be detected, though when he presented his results at the British Association meeting in 1896, he discovered he had been outdone by Guglielmo Marconi, who was also lecturing. In 1898, Thomson recommended Rutherford for a position at McGill University in Montreal, Canada. He was to replace Hugh Longbourne Callendar who held the chair of Macdonald Professor of physics and was coming to Cambridge. Rutherford was accepted, which meant that in 1900 he could marry Mary Georgina Newton (1876–1954) to whom he had become engaged before leaving New Zealand; they married at St Paul's Anglican Church, Papanui in Christchurch, they had one daughter, Eileen Mary (1901–1930), who married the physicist Ralph Fowler. In 1901, Rutherford gained a DSc from the University of New Zealand. In 1907, he returned to Britain to take the chair of physics at the Victoria University of Manchester. Later years and honours Rutherford was knighted in 1914. During World War I, he worked on a top secret project to solve the practical problems of submarine detection by sonar. In 1916, he was awarded the Hector Memorial Medal. In 1919, he returned to the Cavendish succeeding J. J. Thomson as the Cavendish professor and Director. Under him, Nobel Prizes were awarded to James Chadwick for discovering the neutron (in 1932), John Cockcroft and Ernest Walton for an experiment which was to be known as splitting the atom using a particle accelerator, and Edward Appleton for demonstrating the existence of the ionosphere. In 1925, Rutherford pushed calls to the New Zealand Government to support education and research, which led to the formation of the Department of Scientific and Industrial Research (DSIR) in the following year. Between 1925 and 1930, he served as President of the Royal Society, and later as president of the Academic Assistance Council which helped almost 1,000 university refugees from Germany. He was appointed to the Order of Merit in the 1925 New Year Honours and raised to the peerage as Baron Rutherford of Nelson, of Cambridge in the County of Cambridge in 1931, a title that became extinct upon his unexpected death in 1937. In 1933, Rutherford was one of the two inaugural recipients of the T. K. Sidey Medal, set up by the Royal Society of New Zealand as an award for outstanding scientific research. For some time before his death, Rutherford had a small hernia, which he had neglected to have fixed, and it became strangulated, causing him to be violently ill. Despite an emergency operation in London, he died four days afterwards of what physicians termed "intestinal paralysis", at Cambridge. After cremation at Golders Green Crematorium, he was given the high honour of burial in Westminster Abbey, near Isaac Newton and other illustrious British scientists. Scientific research At Cambridge, Rutherford started to work with J. J. Thomson on the conductive effects of X-rays on gases, work which led to the discovery of the electron which Thomson presented to the world in 1897. Hearing of Becquerel's experience with uranium, Rutherford started to explore its radioactivity, discovering two types that differed from X-rays in their penetrating power. Continuing his research in Canada, he coined the terms alpha ray and beta ray in 1899 to describe the two distinct types of radiation. 
He then discovered that thorium gave off a gas which produced an emanation which was itself radioactive and would coat other substances. He found that a sample of this radioactive material of any size invariably took the same amount of time for half the sample to decay – its "half-life" (11½ minutes in this case). From 1900 to 1903, he was joined at McGill by the young chemist Frederick Soddy (Nobel Prize in Chemistry, 1921) for whom he set the problem of identifying the thorium emanations. Once he had eliminated all the normal chemical reactions, Soddy suggested that it must be one of the inert gases, which they named thoron (later found to be an isotope of radon). They also found another type of thorium they called Thorium X, and kept on finding traces of helium. They also worked with samples of "Uranium X" from William Crookes and radium from Marie Curie. In 1903, they published their "Law of Radioactive Change", to account for
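For illustration of the half-life behaviour just described (a standard decay relation added here for reference, with the 11½-minute figure from the passage used as a worked example): exponential decay gives

N(t) = N_{0}\, e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda},

so a half-life of roughly 11.5 minutes corresponds to a decay constant \lambda = \ln 2 / 11.5 \approx 0.060 per minute, and after two half-lives (about 23 minutes) roughly a quarter of the original sample remains, whatever its initial size.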
observation. Wavefunction collapse was widely regarded as artificial and ad hoc, so an alternative interpretation in which the behavior of measurement could be understood from more fundamental physical principles was considered desirable. Everett's Ph.D. work provided such an interpretation. He argued that for a composite system—such as a subject (the "observer" or measuring apparatus) observing an object (the "observed" system, such as a particle)—the claim that either the observer or the observed has a well-defined state is meaningless; in modern parlance, the observer and the observed have become entangled: we can only specify the state of one relative to the other, i.e., the state of the observer and the observed are correlated after the observation is made. This led Everett to derive from the unitary, deterministic dynamics alone (i.e., without assuming wavefunction collapse) the notion of a relativity of states. Everett noticed that the unitary, deterministic dynamics alone entailed that after an observation is made each element of the quantum superposition of the combined subject–object wavefunction contains two "relative states": a "collapsed" object state and an associated observer who has observed the same collapsed outcome; what the observer sees and the state of the object have become correlated by the act of measurement or observation. The subsequent evolution of each pair of relative subject–object states proceeds with complete indifference as to the presence or absence of the other elements, as if wavefunction collapse has occurred, which has the consequence that later observations are always consistent with the earlier observations. Thus the appearance of the object's wavefunction's collapse has emerged from the unitary, deterministic theory itself. (This answered Einstein's early criticism of quantum theory, that the theory should define what is observed, not for the observables to define the theory.) Since the wavefunction merely appears to have collapsed then, Everett reasoned, there was no need to actually assume that it had collapsed. And so, invoking Occam's razor, he removed the postulate of wavefunction collapse from the theory. Testability In 1985, David Deutsch proposed a variant of the Wigner's friend thought experiment as a test of many-worlds versus the Copenhagen interpretation. It consists of an experimenter (Wigner's friend) making a measurement on a quantum system in an isolated laboratory, and another experimenter (Wigner) who would make a measurement on the first one. According to the many-worlds theory, the first experimenter would end up in a macroscopic superposition of seeing one result of the measurement in one branch, and another result in another branch. The second experimenter could then interfere these two branches in order to test whether it is in fact in a macroscopic superposition or has collapsed into a single branch, as predicted by the Copenhagen interpretation. Since then Lockwood (1989), Vaidman and others have made similar proposals. These proposals require placing macroscopic objects in a coherent superposition and interfering them, a task now beyond experimental capability. Probability and the Born rule Since the many-worlds interpretation's inception, physicists have been puzzled about the role of probability in it. 
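For reference (a standard statement added here for illustration, not drawn from the passage): if a state is expanded in the basis of measurement outcomes as |\psi\rangle = \sum_i c_i |i\rangle, the Born rule assigns outcome i the probability

P(i) = |c_i|^{2} = |\langle i|\psi\rangle|^{2},

with the probabilities summing to one for a normalised state. The debate described below concerns why, in a many-worlds picture where every outcome occurs on some branch, these particular numbers should play the role of probabilities.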
As put by Wallace, there are two facets to the question: the incoherence problem, which asks why we should assign probabilities at all to outcomes that are certain to occur in some worlds, and the quantitative problem, which asks why the probabilities should be given by the Born rule. Everett tried to answer these questions in the paper that introduced many-worlds. To address the incoherence problem, he argued that an observer who makes a sequence of measurements on a quantum system will in general have an apparently random sequence of results in their memory, which justifies the use of probabilities to describe the measurement process. To address the quantitative problem, Everett proposed a derivation of the Born rule based on the properties that a measure on the branches of the wavefunction should have. His derivation has been criticized as relying on unmotivated assumptions. Since then several other derivations of the Born rule in the many-worlds framework have been proposed. There is no consensus on whether this has been successful. Frequentism DeWitt and Graham and Farhi et al., among others, have proposed derivations of the Born rule based on a frequentist interpretation of probability. They try to show that in the limit of infinitely many measurements no worlds would have relative frequencies that didn't match the probabilities given by the Born rule, but these derivations have been shown to be mathematically incorrect. Decision theory A decision-theoretic derivation of the Born rule was produced by David Deutsch (1999) and refined by Wallace (2002–2009) and Saunders (2004). They consider an agent who takes part in a quantum gamble: the agent makes a measurement on a quantum system, branches as a consequence, and each of the agent's future selves receives a reward that depends on the measurement result. The agent uses decision theory to evaluate the price they would pay to take part in such a gamble, and concludes that the price is given by the utility of the rewards weighted according to the Born rule. Some reviews have been positive, although these arguments remain highly controversial; some theoretical physicists have taken them as supporting the case for parallel universes. For example, a New Scientist story on a 2007 conference about Everettian interpretations quoted physicist Andy Albrecht as saying, "This work will go down as one of the most important developments in the history of science." In contrast, the philosopher Huw Price, also attending the conference, found the Deutsch–Wallace–Saunders approach fundamentally flawed. Symmetries and invariance Zurek (2005) has produced a derivation of the Born rule based on the symmetries of entangled states; Schlosshauer and Fine argue that Zurek's derivation is not rigorous, as it does not define what probability is and has several unstated assumptions about how it should behave. Charles Sebens and Sean M. Carroll, building on work by Lev Vaidman, proposed a similar approach based on self-locating uncertainty. In this approach, decoherence creates multiple identical copies of observers, who can assign credences to being on different branches using the Born rule. The Sebens–Carroll approach has been criticized by Adrian Kent, and Vaidman himself does not find it satisfactory. The preferred basis problem As originally formulated by Everett and DeWitt, the many-worlds interpretation had a privileged role for measurements: they determined which basis of a quantum system would give rise to the eponymous worlds. 
Without this the theory was ambiguous, as a quantum state can equally well be described (e.g.) as having a well-defined position or as being a superposition of two delocalised states. The assumption that the preferred basis to use is the one from a measurement of position results in worlds having objects in well-defined positions, instead of worlds with delocalised objects (which would be grossly incompatible with experiment). This special role for measurements is problematic for the theory, as it contradicts Everett and DeWitt's goal of having a reductionist theory and undermines their criticism of the ill-defined measurement postulate of the Copenhagen interpretation. This is known today as the preferred basis problem. The preferred basis problem has been solved, according to Saunders and Wallace, among others, by incorporating decoherence in the many-worlds theory. In this approach, the preferred basis does not have to be postulated, but rather is identified as the basis stable under environmental decoherence. In this way measurements no longer play a special role; rather, any interaction that causes decoherence causes the world to split. Since decoherence is never complete, there will always remain some infinitesimal overlap between two worlds, making it arbitrary whether a pair of worlds has split or not. Wallace argues that this is not problematic: it only shows that worlds are not a part of the fundamental ontology, but rather of the emergent ontology, where these approximate, effective descriptions are routine in the physical sciences. Since in this approach the worlds are derived, it follows that they must be present in any other interpretation of quantum mechanics that does not have a collapse mechanism, such as Bohmian mechanics. This approach to deriving the preferred basis has been criticized as creating a circularity with derivations of probability in the many-worlds interpretation, as decoherence theory depends on probability, and probability depends on the ontology derived from decoherence. Wallace contends that decoherence theory depends not on probability but only on the notion that one is allowed to do approximations in physics. History MWI originated in Everett's Princeton Ph.D. thesis "The Theory of the Universal Wavefunction", developed under his thesis advisor John Archibald Wheeler, a shorter summary of which was published in 1957 under the title "Relative State Formulation of Quantum Mechanics" (Wheeler contributed the title "relative state"; Everett originally called his approach the "Correlation Interpretation", where "correlation" refers to quantum entanglement). The phrase "many-worlds" is due to Bryce DeWitt, who was responsible for the wider popularisation of Everett's theory, which had been largely ignored for a decade after publication in 1957. Everett's proposal was not without precedent. In 1952, Erwin Schrödinger gave a lecture in Dublin in which at one point he jocularly warned his audience that what he was about to say might "seem lunatic". He went on to assert that while the Schrödinger equation seemed to be describing several different histories, they were "not alternatives but all really happen simultaneously". According to David Deutsch, this is the earliest known reference to many-worlds; Jeffrey A. Barrett describes it as indicating the similarity of "general views" between Everett and Schrödinger. Schrödinger's writings from the period also contain elements resembling the modal interpretation originated by Bas van Fraassen. 
Because Schrödinger subscribed to a kind of post-Machian neutral monism, in which "matter" and "mind" are only different aspects or arrangements of the same common elements, treating the wavefunction as physical and treating it as information became interchangeable. Reception MWI's initial reception was overwhelmingly negative, in the sense that it was ignored, with the notable exception of DeWitt. Wheeler made considerable efforts to formulate the theory in a way that would be palatable to Bohr, visited Copenhagen in 1956 to discuss it with him, and convinced Everett to visit as well, which happened in 1959. Nevertheless, Bohr and his collaborators completely rejected the theory. Everett left academia in 1956, never to return, and Wheeler eventually disavowed the theory. Support One of MWI's strongest advocates is David Deutsch. According to Deutsch, the single photon interference pattern observed in the double slit experiment can be explained by interference of photons in multiple universes. Viewed this way, the single photon interference experiment is indistinguishable from the multiple photon interference experiment. In a more practical vein, in one of the earliest papers on quantum computing, he suggested that parallelism that results from MWI could lead to "a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it". Deutsch has also proposed that MWI will be testable (at least against "naive" Copenhagenism) when reversible computers become conscious via the reversible observation of spin. Equivocal Philosophers of science James Ladyman and Don Ross say that the MWI could be true, but that they do not embrace it. They note that no quantum theory is yet empirically adequate for describing all of reality, given its lack of unification with general relativity, and so they do not see a reason to regard any interpretation of quantum mechanics as the final word in metaphysics. They also suggest that the multiple branches may be an artifact of incomplete descriptions and of using quantum mechanics to represent the states of macroscopic objects. They argue that macroscopic objects are significantly different from microscopic objects in not being isolated from the environment, and that using quantum formalism to describe them lacks explanatory and descriptive power and accuracy. Victor J. Stenger remarked that Murray Gell-Mann's published work explicitly rejects the existence of simultaneous parallel universes. Collaborating with James Hartle, Gell-Mann worked toward the development a more "palatable" post-Everett quantum mechanics. Stenger thought it fair to say that most physicists find the MWI too extreme, while noting it "has merit in finding a place for the observer inside the system being analyzed and doing away with the troublesome notion of wave function collapse". Richard Feynman, described as an Everettian in some sources, said of the MWI in 1982, "It's possible, but I'm not very happy with it." Rejection Some scientists consider MWI unfalsifiable and hence unscientific because the multiple parallel universes are non-communicating, in the sense that no information can be passed between them. Others claim MWI is directly testable. Roger Penrose argues that the idea is flawed because it is based on an oversimple version of quantum mechanics that does not account for gravity. 
In his view, applying conventional quantum mechanics to the universe implies the MWI, but the lack of a successful theory of quantum gravity negates the claimed universality of conventional quantum mechanics. According to Penrose, "the rules must change when gravity is involved". He further asserts that gravity helps anchor reality and "blurry" events have only one allowable outcome: "electrons, atoms, molecules, etc., are so minute that they require almost no amount of energy to maintain their gravity, and therefore their overlapping states. They can stay in that state forever, as described in standard quantum theory". On the other hand, "in the case of large objects, the duplicate states disappear in an instant due to the fact that these objects create a large gravitational field". Philosopher of science Robert P. Crease says that the MWI is "one of the most implausible and unrealistic ideas in the history of science" because it means that everything conceivable happens. Science writer Philip Ball describes the MWI's implications as fantasies, since "beneath their apparel of scientific equations or symbolic logic, they are acts of imagination, of 'just supposing'". Theoretical physicist Gerard 't
down unnecessary intermediate links, thereby reducing the cost price, and can benefit from one-on-one analysis of large volumes of customer data to build highly personalised, customised strategies, fully enhancing the core competitiveness of the company's products. Modern 3D graphics technologies, such as Facebook 3D Posts, are considered by some social media marketers and advertisers to be preferable to static photos as a way to promote consumer goods, and some brands like Sony are already paving the way for augmented reality commerce. Wayfair now lets you inspect a 3D version of its furniture in a home setting before buying. Logistics Logistics in e-commerce mainly concerns fulfillment. Online markets and retailers have to find the best possible way to fill orders and deliver products. Small companies usually control their own logistic operation because they do not have the ability to hire an outside company. Most large companies hire a fulfillment service that takes care of a company's logistic needs. Contrary to common misconception, there are significant barriers to entry in e-commerce. Impacts Impact on markets and retailers E-commerce markets are growing at noticeable rates. The online market is expected to grow by 56% in 2015–2020. In 2017, retail e-commerce sales worldwide amounted to 2.3 trillion US dollars and e-retail revenues are projected to grow to 4.891 trillion US dollars in 2021. Traditional markets are expected to see only 2% growth during the same time. Brick-and-mortar retailers are struggling because of online retailers' ability to offer lower prices and higher efficiency. Many larger retailers are able to maintain a presence offline and online by linking physical and online offerings. E-commerce allows customers to overcome geographical barriers and allows them to purchase products anytime and from anywhere. Online and traditional markets have different strategies for conducting business. Traditional retailers offer a smaller assortment of products because of limited shelf space, whereas online retailers often hold no inventory but send customer orders directly to the manufacturer. The pricing strategies are also different for traditional and online retailers. Traditional retailers base their prices on store traffic and the cost to keep inventory. Online retailers base prices on the speed of delivery. There are two ways for marketers to conduct business through e-commerce: fully online, or online along with a brick-and-mortar store. Online marketers can offer lower prices, greater product selection, and high efficiency rates. Many customers prefer online markets if the products can be delivered quickly at relatively low price. However, online retailers cannot offer the physical experience that traditional retailers can. It can be difficult to judge the quality of a product without the physical experience, which may cause customers to experience product or seller uncertainty. Another issue regarding the online market is concerns about the security of online transactions. Many customers remain loyal to well-known retailers because of this issue. Security is a primary problem for e-commerce in developed and developing countries. E-commerce security is protecting businesses' websites and customers from unauthorized access, use, alteration, or destruction. The types of threats include malicious code, unwanted programs (adware, spyware), phishing, hacking, and cyber vandalism. E-commerce websites use different tools to avert security threats.
Impact on supply chain management For a long time, companies had been troubled by the gap between the benefits that supply chain technology promises and the solutions able to deliver those benefits. The emergence of e-commerce has provided a more practical and effective way of delivering the benefits of the new supply chain technologies. E-commerce can integrate all inter-company and intra-company functions, meaning that the three flows of the supply chain (physical flow, financial flow and information flow) can all be affected by e-commerce. The effect on physical flows has improved the way companies move products and manage inventory levels; for information flows, e-commerce has expanded companies' information-processing capacity; and for financial flows, it allows more efficient payment and settlement solutions. In addition, e-commerce has a more sophisticated level of impact on supply chains. Firstly, the performance gap can be reduced, since companies can identify gaps between different levels of the supply chain by electronic means. Secondly, as a result of the emergence of e-commerce, new capabilities such as implementing ERP systems, like SAP ERP, Xero, or Megaventory, have helped companies to manage operations with customers and suppliers, although these new capabilities are still not fully exploited. Thirdly, technology companies keep investing in new e-commerce software solutions because they expect a return on the investment. Fourthly, e-commerce helps to address issues that companies may find difficult to cope with on their own, such as political barriers or cross-country differences. Finally, e-commerce provides companies with a more efficient and effective way to collaborate with each other within the supply chain. Impact on employment E-commerce helps create new job opportunities in information-related services, software applications and digital products. It also causes job losses; the areas with the greatest predicted job losses are retail, postal services, and travel agencies. The development of e-commerce will create jobs that require highly skilled workers to manage large amounts of information, customer demands, and production processes, whereas people with poor technical skills may not share in the resulting wage gains. On the other hand, because e-commerce requires sufficient stock that can be delivered to customers in time, the warehouse becomes an important element, and warehouses need more staff to manage, supervise and organize them, so warehouse working conditions become a concern for employees. Impact on customers E-commerce brings convenience for customers, as they do not have to leave home and only need to browse websites online, especially when buying products that are not sold in nearby shops. It can help customers buy a wider range of products and save time. Consumers also gain power through online shopping: they are able to research products and compare prices among retailers. Online shopping also often provides sales promotions or discount codes, making it more cost-effective for customers. Moreover, e-commerce provides detailed product information that even in-store staff cannot always offer. Customers can also review and track their order history online.
E-commerce technologies cut transaction costs by allowing both manufacturers and consumers to bypass intermediaries. This is achieved by extending the search area for the best price deals and by group purchasing. The success of e-commerce at urban and regional levels depends on how local firms and consumers have adapted to e-commerce. However, e-commerce lacks human interaction, especially for customers who prefer face-to-face contact. Customers are also concerned with the security of online transactions and tend to remain loyal to well-known retailers. In recent years, clothing retailers such as Tommy Hilfiger have started adding Virtual Fit platforms to their e-commerce sites to reduce the risk of customers buying the wrong-sized clothes, although these vary greatly in their fitness for purpose. When a customer regrets the purchase of a product, a return and refund process follows. This process is inconvenient, as customers need to pack and post the goods, and if the products are expensive, large or fragile it also raises safety concerns. Impact on the environment In 2018, e-commerce generated 1.3 million tons of container cardboard in North America, an increase from 1.1 million tons in 2017. Only 35 percent of North American cardboard manufacturing capacity is from recycled content; the recycling rate in Europe is 80 percent and in Asia 93 percent. Amazon, the largest user of boxes, has a strategy to cut back on packing material and has reduced the packaging material it uses by 19 percent by weight since 2016. Amazon requires retailers to manufacture their product packaging in a way that does not require additional shipping packaging, and it also has an 85-person team researching ways to reduce and improve its packaging and shipping materials. Impact on traditional retail E-commerce has been cited as a major force in the failure of major U.S. retailers, in a trend frequently referred to as a "retail apocalypse." The rise of e-commerce outlets like Amazon has made it harder for traditional retailers to attract customers to their stores and has forced companies to change their sales strategies. Many companies have turned to sales promotions and increased digital efforts to lure shoppers while shutting down brick-and-mortar locations, and the trend has forced some traditional retailers to shutter their brick-and-mortar operations entirely. Distribution channels E-commerce has grown in importance as companies have adopted pure-click and brick-and-click channel systems. Pure-click or pure-play companies are those that have launched a website without any previous existence as a firm. Bricks-and-clicks companies are existing companies that have added an online site for e-commerce. Click-to-brick companies are online retailers that later open physical locations to supplement their online efforts. E-commerce may take place on retailers' websites or mobile apps, or on e-commerce marketplaces such as Amazon or Alibaba's Tmall. Those channels may also be supported by conversational commerce, e.g. live chat or chatbots on websites; conversational commerce may also be standalone, such as live chat or chatbots on messaging apps and via voice assistants. Recommendation The contemporary e-commerce trend recommends that companies shift from a traditional business model focused on "standardized products, homogeneous market and long product life cycle" to a new business model focused on "varied and customized products".
E-commerce requires the company to be able to satisfy the varied needs of different customers and provide them with a wider range of products. With more products to choose from, the product information that lets customers select what meets their needs becomes crucial. To put the mass customization principle into practice, the use of a recommender system is suggested. Such a system helps recommend suitable products to each customer based on their preferences and behaviour.
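As a minimal sketch of the idea, and not a description of any particular retailer's system, the following Python snippet implements a simple user-based collaborative filter: it scores items bought by customers with similar purchase histories and recommends the highest-scoring items the target customer does not yet own. The customer names and products are invented for illustration.

```python
from math import sqrt

# Toy purchase history: which customers bought which products (illustrative data).
purchases = {
    "alice": {"running shoes", "water bottle", "yoga mat"},
    "bob":   {"running shoes", "gps watch"},
    "carol": {"yoga mat", "water bottle", "resistance bands"},
}

def cosine(a: set, b: set) -> float:
    """Cosine similarity between two sets of purchased items."""
    if not a or not b:
        return 0.0
    return len(a & b) / sqrt(len(a) * len(b))

def recommend(customer: str, k: int = 3) -> list:
    """Rank items bought by similar customers that this customer lacks."""
    own = purchases[customer]
    scores = {}
    for other, items in purchases.items():
        if other == customer:
            continue
        sim = cosine(own, items)
        for item in items - own:
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # -> ['resistance bands', 'gps watch']
```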
Moore (D-L.A.) and enacted in 1984. A timeline for the development of e-commerce: 1971 or 1972: The ARPANET is used to arrange a cannabis sale between students at the Stanford Artificial Intelligence Laboratory and the Massachusetts Institute of Technology, later described as "the seminal act of e-commerce" in John Markoff's book What the Dormouse Said. 1976: Atalla Technovation (founded by Mohamed Atalla) and Bunker Ramo Corporation (founded by George Bunker and Simon Ramo) introduce products designed for secure online transaction processing, intended for financial institutions. 1979: Michael Aldrich demonstrates the first online shopping system. 1981: Thomson Holidays UK installs the first business-to-business (B2B) online shopping system. 1982: Minitel is introduced nationwide in France by France Télécom and used for online ordering. 1983: The California State Assembly holds its first hearing on "electronic commerce" in Volcano, California. Testifying are CPUC, MCI Mail, Prodigy, CompuServe, Volcano Telephone, and Pacific Telesis. (Not permitted to testify is Quantum Technology, later to become AOL.) California's Electronic Commerce Act was passed in 1984. 1983: Karen Earle Lile (AKA Karen Bean) and Kendall Ross Bean create an e-commerce service in the San Francisco Bay Area. Buyers and sellers of pianos connect through a database created by Piano Finders on a Kaypro personal computer using a DOS interface. Pianos for sale are listed on a bulletin board system, and buyers print a list of pianos for sale on a dot matrix printer. Customer service is handled through a Piano Advice Hotline listed in the San Francisco Chronicle classified ads, and money is transferred by bank wire when a sale is completed. 1984: Gateshead SIS/Tesco is the first B2C online shopping system, and Mrs Snowball, 72, is the first online home shopper. 1984: In April 1984, CompuServe launches the Electronic Mall in the US and Canada. It is the first comprehensive electronic commerce service. 1989: In May 1989, Sequoia Data Corp. introduces Compumarket, the first internet-based system for e-commerce. Sellers and buyers can post items for sale, and buyers can search the database and make purchases with a credit card. 1990: Tim Berners-Lee writes the first web browser, WorldWideWeb, using a NeXT computer. 1992: Book Stacks Unlimited in Cleveland opens a commercial sales website (www.books.com) selling books online with credit card processing. 1993: Paget Press releases edition No. 3 of the first app store, The Electronic AppWrapper. 1994: Netscape releases the Navigator browser in October under the code name Mozilla. Netscape 1.0 is introduced in late 1994 with SSL encryption that makes transactions secure. 1994: Ipswitch IMail Server becomes the first software available online for sale and immediate download via a partnership between Ipswitch, Inc. and OpenMarket. 1994: "Ten Summoner's Tales" by Sting becomes the first item bought in a secure online purchase, through NetMarket. 1995: The US National Science Foundation lifts its former strict prohibition of commercial enterprise on the Internet. 1995: On Thursday 27 April 1995, the purchase of a book by Paul Stanfield, product manager for CompuServe UK, from W H Smith's shop within CompuServe's UK Shopping Centre is the UK's first secure transaction on a national online shopping service. The shopping service at launch features W H Smith, Tesco, Virgin Megastores/Our Price, Great Universal Stores (GUS), Interflora, Dixons Retail, Past Times, PC World (retailer) and Innovations.
1995: Amazon is launched by Jeff Bezos. 1995: eBay is founded by computer programmer Pierre Omidyar as AuctionWeb. It is the first online auction site supporting person-to-person transactions. 1995: The first commercial-free 24-hour, internet-only radio stations, Radio HK and NetRadio, start broadcasting. 1996: The use of Excalibur BBS with replicated "storefronts" is an early implementation of electronic commerce, started by a group of SysOps in Australia and replicated to global partner sites. 1998: Electronic postal stamps can be purchased and downloaded for printing from the Web. 1999: Alibaba Group is established in China. Business.com is sold for US$7.5 million to eCompanies; it had been purchased in 1997 for US$149,000. The peer-to-peer filesharing software Napster launches. ATG Stores launches to sell decorative items for the home online. 1999: Global e-commerce reaches $150 billion. 2000: The dot-com bust. 2001: eBay has the largest userbase of any e-commerce site. 2001: Alibaba.com achieves profitability in December 2001. 2002: eBay acquires PayPal for $1.5 billion. Niche retail companies Wayfair and NetShops are founded with the concept of selling products through several targeted domains, rather than a central portal. 2003: Amazon posts its first yearly profit. 2004: DHgate.com, China's first online B2B transaction platform, is established, forcing other B2B sites to move away from the "yellow pages" model. 2007: Business.com is acquired by R.H. Donnelley for $345 million. 2014: US e-commerce and online retail sales are projected to reach $294 billion, an increase of 12 percent over 2013 and 9 percent of all retail sales. Alibaba Group has the largest initial public offering ever, worth $25 billion. 2015: Amazon accounts for more than half of all e-commerce growth, selling almost 500 million SKUs in the US. 2017: Retail e-commerce sales across the world reach $2.304 trillion, a 24.8 percent increase over the previous year. 2017: Global e-commerce transactions generate , including for business-to-business (B2B) transactions and for business-to-consumer (B2C) sales. Business application Some common applications related to electronic commerce are: Governmental regulation In the United States, California's Electronic Commerce Act (1984), enacted by the Legislature, and the more recent California Privacy Act (2020), enacted through a popular election proposition, control specifically how electronic commerce may be conducted in California. In the US as a whole, electronic commerce activities are regulated more broadly by the Federal Trade Commission (FTC). These activities include the use of commercial e-mails, online advertising and consumer privacy. The CAN-SPAM Act of 2003 establishes national standards for direct marketing over e-mail. The Federal Trade Commission Act regulates all forms of advertising, including online advertising, and states that advertising must be truthful and non-deceptive. Using its authority under Section 5 of the FTC Act, which prohibits unfair or deceptive practices, the FTC has brought a number of cases to enforce the promises in corporate privacy statements, including promises about the security of consumers' personal information. As a result, any corporate privacy policy related to e-commerce activity may be subject to enforcement by the FTC. The Ryan Haight Online Pharmacy Consumer Protection Act of 2008 amends the Controlled Substances Act to address online pharmacies.
Conflict of laws in cyberspace is a major hurdle for the harmonization of the legal framework for e-commerce around the world. To give uniformity to e-commerce law around the world, many countries have adopted the UNCITRAL Model Law on Electronic Commerce (1996). Internationally there is the International Consumer Protection and Enforcement Network (ICPEN), which was formed in 1991 from an informal network of government customer fair trade organisations. Its stated purpose is to find ways of co-operating in tackling consumer problems connected with cross-border transactions in both goods and services, and to help ensure exchanges of information among the participants for mutual benefit and understanding. From this came Econsumer.gov, an ICPEN initiative since April 2001, a portal for reporting complaints about online and related transactions with foreign companies. There is also the Asia-Pacific Economic Cooperation (APEC) forum, established in 1989 with the vision of achieving stability, security and prosperity for the region through free and open trade and investment. APEC has an Electronic Commerce Steering Group and also works on common privacy regulations throughout the APEC region. In Australia, trade is covered under the Australian Treasury Guidelines for electronic commerce, and the Australian Competition and Consumer Commission regulates and offers advice on how to deal with businesses online, including specific advice on what happens if things go wrong. In the United Kingdom, the Financial Services Authority (FSA) was formerly the regulating authority for most aspects of the EU's Payment Services Directive (PSD), until its replacement in 2013 by the Prudential Regulation Authority and the Financial Conduct Authority. The UK implemented the PSD through the Payment Services Regulations 2009 (PSRs), which came into effect on 1 November 2009. The PSRs affect firms providing payment services and their customers. These firms include banks, non-bank credit card issuers, non-bank merchant acquirers, e-money issuers, etc. The PSRs created a new class of regulated firms known as payment institutions (PIs), which are subject to prudential requirements. Article 87 of the PSD requires the European Commission to report on the implementation and impact of the PSD by 1 November 2012. In India, the Information Technology Act 2000 governs the basic applicability of e-commerce. In China, the Telecommunications Regulations of the People's Republic of China (promulgated on 25 September 2000) designated the Ministry of Industry and Information Technology (MIIT) as the government department regulating all telecommunications-related activities, including electronic commerce. The Administrative Measures on Internet Information Services, released on the same day, were the first administrative regulation to address profit-generating activities conducted through the Internet and laid the foundation for future regulations governing e-commerce in China. On 28 August 2004, the eleventh session of the tenth NPC Standing Committee adopted the Electronic Signature Law, which regulates data messages, electronic signature authentication and legal liability issues. It is considered the first law in China's e-commerce legislation, a milestone in improving China's electronic commerce legislation, and the start of a stage of rapid development for such legislation. Forms Contemporary electronic commerce can be classified into two categories.
The first category is based on the types of goods sold (involving everything from ordering "digital" content for immediate online consumption, to ordering conventional goods and services, to "meta" services that facilitate other types of electronic commerce). The second category is based on the nature of the participants (B2B, B2C, C2B and C2C). On the institutional level, big corporations and financial institutions use the internet to exchange financial data to facilitate domestic and international business. Data integrity and security are pressing issues for electronic commerce. Aside from traditional e-commerce, the terms m-commerce (mobile commerce) and, from around 2013, t-commerce have also been used. Global trends In 2010, the United Kingdom had the highest per capita e-commerce spending in the world. As of 2013, the Czech Republic was the European country where e-commerce delivered the biggest contribution to enterprises' total revenue: almost a quarter (24%) of the country's total turnover is generated via the online channel. Among emerging economies, China's e-commerce presence continues to expand every year. With 668 million Internet users, China's online shopping sales reached $253 billion
the formula. Applications Applications in complex number theory Interpretation of the formula This formula can be interpreted as saying that the function $e^{i\varphi}$ is a unit complex number, i.e., it traces out the unit circle in the complex plane as $\varphi$ ranges through the real numbers. Here $\varphi$ is the angle that a line connecting the origin with a point on the unit circle makes with the positive real axis, measured counterclockwise and in radians. The original proof is based on the Taylor series expansions of the exponential function $e^z$ (where $z$ is a complex number) and of $\cos x$ and $\sin x$ for real numbers $x$ (see below). In fact, the same proof shows that Euler's formula is even valid for all complex numbers $x$. A point in the complex plane can be represented by a complex number written in cartesian coordinates. Euler's formula provides a means of conversion between cartesian coordinates and polar coordinates. The polar form simplifies the mathematics when used in multiplication or powers of complex numbers. Any complex number $z = x + iy$, and its complex conjugate $\bar z = x - iy$, can be written as $z = r(\cos\varphi + i\sin\varphi) = re^{i\varphi}$ and $\bar z = r(\cos\varphi - i\sin\varphi) = re^{-i\varphi}$, where $x = \operatorname{Re} z$ is the real part, $y = \operatorname{Im} z$ is the imaginary part, $r = |z| = \sqrt{x^2 + y^2}$ is the magnitude of $z$, and $\varphi = \arg z = \operatorname{atan2}(y, x)$ is the argument of $z$, i.e., the angle between the x axis and the vector $z$ measured counterclockwise in radians, which is defined up to addition of $2\pi$. Many texts write $\varphi = \tan^{-1}(y/x)$ instead of $\varphi = \operatorname{atan2}(y, x)$, but the first equation needs adjustment when $x \le 0$. This is because for any real $x$ and $y$, not both zero, the angles of the vectors $(x, y)$ and $(-x, -y)$ differ by $\pi$ radians, but have the identical value of $\tan\varphi = y/x$. Use of the formula to define the logarithm of complex numbers Now, taking this derived formula, we can use Euler's formula to define the logarithm of a complex number. To do this, we also use the definition of the logarithm (as the inverse operator of exponentiation), $a = e^{\ln a}$, and the fact that $e^a e^b = e^{a+b}$, both valid for any complex numbers $a$ and $b$. Therefore, one can write $z = |z|e^{i\varphi} = e^{\ln|z|}e^{i\varphi} = e^{\ln|z| + i\varphi}$ for any $z \ne 0$. Taking the logarithm of both sides shows that $\ln z = \ln|z| + i\varphi$, and in fact, this can be used as the definition for the complex logarithm. The logarithm of a complex number is thus a multi-valued function, because $\varphi$ is multi-valued. Finally, the other exponential law $(e^a)^k = e^{ak}$, which can be seen to hold for all integers $k$, together with Euler's formula, implies several trigonometric identities, as well as de Moivre's formula. Relationship to trigonometry Euler's formula provides a powerful connection between analysis and trigonometry, and provides an interpretation of the sine and cosine functions as weighted sums of the exponential function: $\cos x = (e^{ix} + e^{-ix})/2$ and $\sin x = (e^{ix} - e^{-ix})/(2i)$. The two equations above can be derived by adding or subtracting Euler's formulas $e^{ix} = \cos x + i\sin x$ and $e^{-ix} = \cos x - i\sin x$ and solving for either cosine or sine. These formulas can even serve as the definition of the trigonometric functions for complex arguments $x$. For example, letting $x = iy$, we have $\cos(iy) = (e^{-y} + e^{y})/2 = \cosh y$ and $\sin(iy) = (e^{-y} - e^{y})/(2i) = i\sinh y$. Complex exponentials can simplify trigonometry, because they are easier to manipulate than their sinusoidal components. One technique is simply to convert sinusoids into equivalent expressions in terms of exponentials. After the manipulations, the simplified result is still real-valued. For example, $\cos x\cos y = \tfrac{1}{4}(e^{ix} + e^{-ix})(e^{iy} + e^{-iy}) = \tfrac{1}{2}\big(\cos(x - y) + \cos(x + y)\big)$. Another technique is to represent the sinusoids in terms of the real part of a complex expression and perform the manipulations on the complex expression. For example, $\cos nx = \operatorname{Re}(e^{inx}) = \operatorname{Re}\big(e^{i(n-1)x}(e^{ix} + e^{-ix} - e^{-ix})\big) = \operatorname{Re}\big(e^{i(n-1)x}\,2\cos x - e^{i(n-2)x}\big) = 2\cos x\cos((n-1)x) - \cos((n-2)x)$. This formula is used for recursive generation of $\cos nx$ for integer values of $n$ and arbitrary $x$ (in radians). See also Phasor arithmetic. Topological interpretation In the language of topology, Euler's formula states that the imaginary exponential function $x \mapsto e^{ix}$ is a (surjective) morphism of topological groups from the real line $\mathbb{R}$ to the unit circle $\mathbb{S}^1$.
In fact, this exhibits $\mathbb{R}$ as a covering space of $\mathbb{S}^1$. Similarly, Euler's identity says that the kernel of this map is $\tau\mathbb{Z}$, where $\tau = 2\pi$. These observations may be combined and summarized in the commutative diagram below: Other applications In differential equations, the function $e^{ix}$ is often used to simplify solutions, even if the final answer is a real function involving sine and cosine. The
and $\theta$ depending on $x$, $e^{ix} = r(\cos\theta + i\sin\theta)$. No assumptions are being made about $r$ and $\theta$; they will be determined in the course of the proof. From any of the definitions of the exponential function it can be shown that the derivative of $e^{ix}$ is $ie^{ix}$. Therefore, differentiating both sides gives $ie^{ix} = \frac{dr}{dx}(\cos\theta + i\sin\theta) + r(-\sin\theta + i\cos\theta)\frac{d\theta}{dx}$. Substituting $r(\cos\theta + i\sin\theta)$ for $e^{ix}$ and equating real and imaginary parts in this formula gives $\frac{dr}{dx} = 0$ and $\frac{d\theta}{dx} = 1$. Thus, $r$ is a constant, and $\theta$ is $x + C$ for some constant $C$. The initial values $r(0) = 1$ and $\theta(0) = 0$ come from $e^{0i} = 1$, giving $r = 1$ and $\theta = x$. This proves the formula $e^{ix} = \cos x + i\sin x$.
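The formula, and the polar-form and logarithm identities above, can also be spot-checked numerically with Python's standard cmath module; this is only an illustrative sanity check, not part of the proof.

```python
import cmath
import math

# Check e^(i*x) == cos(x) + i*sin(x) at a few sample angles.
for x in (0.0, 0.5, math.pi / 3, 2.0, -4.75):
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert cmath.isclose(lhs, rhs, rel_tol=1e-12), (x, lhs, rhs)

# Polar form: z = r * e^(i*phi) with r = |z| and phi = atan2(y, x).
z = 3 - 4j
r, phi = cmath.polar(z)
assert math.isclose(r, abs(z))
assert cmath.isclose(cmath.rect(r, phi), z)

# Complex logarithm: ln(z) = ln|z| + i*phi (principal branch).
assert cmath.isclose(cmath.log(z), complex(math.log(r), phi))
print("Euler's formula checks pass")
```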
to pursue a career in law. His uncle, Edmond Fournier, encouraged him to pursue painting and took young Manet to the Louvre. In 1841 he enrolled at secondary school, the Collège Rollin. In 1845, on the advice of his uncle, Manet enrolled in a special course of drawing where he met Antonin Proust, future Minister of Fine Arts and subsequent lifelong friend. At his father's suggestion, in 1848 he sailed on a training vessel to Rio de Janeiro. After he twice failed the examination to join the Navy, his father relented to his wishes to pursue an art education. From 1850 to 1856, Manet studied under the academic painter Thomas Couture. In his spare time, Manet copied the Old Masters in the Louvre. From 1853 to 1856, Manet visited Germany, Italy, and the Netherlands, during which time he was influenced by the Dutch painter Frans Hals, and the Spanish artists Diego Velázquez and Francisco José de Goya. Career In 1856, Manet opened a studio. His style in this period was characterized by loose brush strokes, simplification of details and the suppression of transitional tones. Adopting the current style of realism initiated by Gustave Courbet, he painted The Absinthe Drinker (1858–59) and other contemporary subjects such as beggars, singers, Gypsies, people in cafés, and bullfights. After his early career, he rarely painted religious, mythological, or historical subjects; examples include his Christ Mocked, now in the Art Institute of Chicago, and Christ with Angels, in the Metropolitan Museum of Art, New York. Manet had two canvases accepted at the Salon in 1861. A portrait of his mother and father, who at the time was paralysed and robbed of speech by a stroke, was ill-received by critics. The other, The Spanish Singer, was admired by Théophile Gautier, and placed in a more conspicuous location as a result of its popularity with Salon-goers. Manet's work, which appeared "slightly slapdash" when compared with the meticulous style of so many other Salon paintings, intrigued some young artists. The Spanish Singer, painted in a "strange new fashion[,] caused many painters' eyes to open and their jaws to drop." Music in the Tuileries Music in the Tuileries is an early example of Manet's painterly style. Inspired by Hals and Velázquez, it is a harbinger of his lifelong interest in the subject of leisure. While the picture was regarded as unfinished by some, the suggested atmosphere imparts a sense of what the Tuileries gardens were like at the time; one may imagine the music and conversation. Here, Manet has depicted his friends, artists, authors, and musicians who take part, and he has included a self-portrait among the subjects. Luncheon on the Grass (Le déjeuner sur l'herbe) A major early work is The Luncheon on the Grass (Le Déjeuner sur l'herbe), originally Le Bain. The Paris Salon rejected it for exhibition in 1863, but Manet agreed to exhibit it at the Salon des Refusés (Salon of the Rejected), a parallel, alternative exhibition to the official Salon held in the Palais des Champs-Élysées. The Salon des Refusés was initiated by Emperor Napoleon III as a solution to a problematic situation which came about when the Selection Committee of the Salon that year rejected 2,783 of the roughly 5,000 paintings submitted. Each painter could decide whether to take the opportunity to exhibit at the Salon des Refusés, although fewer than 500 of the rejected painters chose to do so.
Manet employed model Victorine Meurent, his wife Suzanne, future brother-in-law Ferdinand Leenhoff, and one of his brothers to pose. Meurent also posed for several more of Manet's important paintings including Olympia; and by the mid-1870s she became an accomplished painter in her own right. The painting's juxtaposition of fully dressed men and a nude woman was controversial, as was its abbreviated, sketch-like handling, an innovation that distinguished Manet from Courbet. At the same time, Manet's composition reveals his study of the old masters, as the disposition of the main figures is derived from Marcantonio Raimondi's engraving of the Judgement of Paris (c. 1515) based on a drawing by Raphael. Two additional works cited by scholars as important precedents for Le déjeuner sur l'herbe are Pastoral Concert (c. 1510, The Louvre) and The Tempest (Gallerie dell'Accademia, Venice), both of which are attributed variously to Italian Renaissance masters Giorgione or Titian. The Tempest is an enigmatic painting featuring a fully dressed man and a nude woman in a rural setting. The man is standing to the left and gazing to the side, apparently at the woman, who is seated and breastfeeding a baby; the relationship between the two figures is unclear. In Pastoral Concert, two clothed men and a nude woman are seated on the grass, engaged in music making, while a second nude woman stands beside them. Olympia As he had in Luncheon on the Grass, Manet again paraphrased a respected work by a Renaissance artist in the painting Olympia (1863), a nude portrayed in a style reminiscent of early studio photographs, but whose pose was based on Titian's Venus of Urbino (1538). The painting is also reminiscent of Francisco Goya's painting The Nude Maja (1800). Manet embarked on the canvas after being challenged to give the Salon a nude painting to display. His uniquely frank depiction of a self-assured prostitute was accepted by the Paris Salon in 1865, where it created a scandal. According to Antonin Proust, "only the precautions taken by the administration prevented the painting being punctured and torn" by offended viewers. The painting was controversial partly because the nude is wearing some small items of clothing such as an orchid in her hair, a bracelet, a ribbon around her neck, and mule slippers, all of which accentuated her nakedness, sexuality, and comfortable courtesan lifestyle. The orchid, upswept hair, black cat, and bouquet of flowers were all recognized symbols of sexuality at the time. This modern Venus' body is thin, counter to prevailing standards; the painting's lack of idealism rankled viewers. The painting's flatness, inspired by Japanese wood block art, serves to make the nude more human and less voluptuous. A fully dressed black servant is featured, exploiting the then-current theory that black people were hyper-sexed. That she is wearing the clothing of a servant to a courtesan here furthers the sexual tension of the piece. Olympia's body as well as her gaze is unabashedly confrontational. She defiantly looks out as her servant offers flowers from one of her male suitors. Although her hand rests on her leg, hiding her pubic area, the reference to traditional female virtue is ironic; a notion of modesty is notoriously absent in this work. A contemporary critic denounced Olympia's "shamelessly flexed" left hand, which seemed to him a mockery of the relaxed, shielding hand of Titian's Venus. 
Likewise, the alert black cat at the foot of the bed strikes a sexually rebellious note in contrast to that of the sleeping dog in Titian's portrayal of the goddess in his Venus of Urbino. Olympia was the subject of caricatures in the popular press, but was championed by the French avant-garde community, and the painting's significance was appreciated by artists such as Gustave Courbet, Paul Cézanne, Claude Monet, and later Paul Gauguin. As with Luncheon on the Grass, the painting raised the issue of prostitution within contemporary France and the roles of women within society. Life and times After the death of his father in 1862, Manet married Suzanne Leenhoff in 1863. Leenhoff was a Dutch-born piano teacher two years Manet's senior with whom he had been romantically involved for approximately ten years. Leenhoff initially had been employed by Manet's father, Auguste, to teach Manet and his younger brother piano. She also may have been Auguste's mistress. In 1852, Leenhoff gave birth, out of wedlock, to a son, Leon Koella Leenhoff. Manet painted his wife in The Reading, among other paintings. Her son, Leon Leenhoff, whose father may have been either of the Manets, posed often for Manet. Most famously, he is the subject of the Boy Carrying a Sword of 1861 (Metropolitan Museum of Art, New York). He also appears as the boy carrying a tray in the background of The Balcony (1868–69). Manet became friends with the Impressionists Edgar Degas, Claude Monet, Pierre-Auguste Renoir, Alfred Sisley, Paul Cézanne, and Camille Pissarro through another painter, Berthe Morisot, who was a member of the group and drew him into their activities. They later became widely known as the Batignolles group (Le groupe des Batignolles). The supposed grand-niece of the painter Jean-Honoré Fragonard, Morisot had her first painting accepted in the Salon de Paris in 1864, and she continued to show in the salon for the next ten years. Manet became the friend and colleague of Morisot in 1868. She is credited with convincing Manet to attempt plein air painting, which she had been practicing since she was introduced to it by another friend of hers, Camille Corot. They had a reciprocating relationship and Manet incorporated some of her techniques into his paintings. In 1874, she became his sister-in-law when she married his brother, Eugène. Unlike the core Impressionist group, Manet maintained that modern artists should seek to exhibit at the Paris Salon rather than abandon it in favor of independent exhibitions. Nevertheless, when Manet was excluded from the International Exhibition of 1867, he set up his own exhibition. His mother worried that he would waste all his inheritance on this project, which was enormously expensive. While the exhibition earned poor reviews from the major critics, it also provided his first contacts with several future Impressionist painters, including Degas. Although his own work influenced and anticipated the Impressionist style, Manet resisted involvement in Impressionist exhibitions, partly because he did not wish to be seen as the representative of a group identity, and partly because he preferred to exhibit at the Salon. Eva Gonzalès, a daughter of the novelist Emmanuel Gonzalès,
points for the young painters who would create Impressionism. Today, these are considered watershed paintings that mark the start of modern art. The last 20 years of Manet's life saw him form bonds with other great artists of the time; he developed his own simple and direct style that would be heralded as innovative and serve as a major influence for future painters. Early life Édouard Manet was born in Paris on 23 January 1832, in the ancestral hôtel particulier (mansion) on the Rue des Petits Augustins (now Rue Bonaparte) to an affluent and well-connected family. His mother, Eugénie-Desirée Fournier, was the daughter of a diplomat and goddaughter of the Swedish crown prince Charles Bernadotte, from whom the Swedish monarchs are descended. His father, Auguste Manet, was a French judge who expected Édouard to pursue a career in law.
game-theoretical terms, an ESS is an equilibrium refinement of the Nash equilibrium, being a Nash equilibrium that is also "evolutionarily stable." Thus, once fixed in a population, natural selection alone is sufficient to prevent alternative (mutant) strategies from replacing it (although this does not preclude the possibility that a better strategy, or set of strategies, will emerge in response to selective pressures resulting from environmental change). History Evolutionarily stable strategies were defined and introduced by John Maynard Smith and George R. Price in a 1973 Nature paper. Such was the time taken in peer-reviewing the paper for Nature that this was preceded by a 1972 essay by Maynard Smith in a book of essays titled On Evolution. The 1972 essay is sometimes cited instead of the 1973 paper, but university libraries are much more likely to have copies of Nature. Papers in Nature are usually short; in 1974, Maynard Smith published a longer paper in the Journal of Theoretical Biology. Maynard Smith explains further in his 1982 book Evolution and the Theory of Games. Sometimes these are cited instead. In fact, the ESS has become so central to game theory that often no citation is given, as the reader is assumed to be familiar with it. Maynard Smith mathematically formalised a verbal argument made by Price, which he read while peer-reviewing Price's paper. When Maynard Smith realized that the somewhat disorganised Price was not ready to revise his article for publication, he offered to add Price as co-author. The concept was derived from R. H. MacArthur and W. D. Hamilton's work on sex ratios, derived from Fisher's principle, especially Hamilton's (1967) concept of an unbeatable strategy. Maynard Smith was jointly awarded the 1999 Crafoord Prize for his development of the concept of evolutionarily stable strategies and the application of game theory to the evolution of behaviour. Uses of ESS: The ESS was a major element used to analyze evolution in Richard Dawkins' bestselling 1976 book The Selfish Gene. The ESS was first used in the social sciences by Robert Axelrod in his 1984 book The Evolution of Cooperation. Since then, it has been widely used in the social sciences, including anthropology, economics, philosophy, and political science. In the social sciences, the primary interest is not in an ESS as the end of biological evolution, but as an end point in cultural evolution or individual learning. In evolutionary psychology, ESS is used primarily as a model for human biological evolution. Motivation The Nash equilibrium is the traditional solution concept in game theory. It depends on the cognitive abilities of the players. It is assumed that players are aware of the structure of the game and consciously try to predict the moves of their opponents and to maximize their own payoffs. In addition, it is presumed that all the players know this (see common knowledge). These assumptions are then used to explain why players choose Nash equilibrium strategies. Evolutionarily stable strategies are motivated entirely differently. Here, it is presumed that the players' strategies are biologically encoded and heritable. Individuals have no control over their strategy and need not be aware of the game. They reproduce and are subject to the forces of natural selection, with the payoffs of the game representing reproductive success (biological fitness). It is imagined that alternative strategies of the game occasionally occur, via a process like mutation. 
To be an ESS, a strategy must be resistant to these alternatives. Given the radically different motivating assumptions, it may come as a surprise that ESSes and Nash equilibria often coincide. In fact, every ESS corresponds to a Nash equilibrium, but some Nash equilibria are not ESSes. Nash equilibrium An ESS is a refined or modified form of a Nash equilibrium. (See the next section for examples which contrast the two.) In a Nash equilibrium, if all players adopt their respective parts, no player can benefit by switching to any alternative strategy. In a two player game, it is a strategy pair. Let E(S,T) represent the payoff for playing strategy S against strategy T. The strategy pair (S, S) is a Nash equilibrium in a two player game if and only if for both players, for any strategy T: E(S,S) ≥ E(T,S) In this definition, a strategy T≠S can be a neutral alternative to S (scoring equally well, but not better). A Nash equilibrium is presumed to be stable even if T scores equally, on the assumption that there is no long-term incentive for players to adopt T instead of S. This fact represents the point of departure of the ESS. Maynard Smith and Price specify two conditions for a strategy S to be an ESS. For all T≠S, either E(S,S) > E(T,S), or E(S,S) = E(T,S) and E(S,T) > E(T,T) The first condition is sometimes called a strict Nash equilibrium. The second is sometimes called "Maynard Smith's second condition". The second condition means that although strategy T is neutral with respect to the payoff against strategy S, the population of players who continue to play strategy S has an advantage when playing against T. There is also an alternative, stronger definition of ESS, due to Thomas. This places a different emphasis on the role of the Nash equilibrium concept in the ESS concept. Following the terminology given in the first definition above, this definition requires that for all T≠S E(S,S) ≥ E(T,S), and E(S,T) > E(T,T) In this formulation, the first condition specifies that the strategy is a Nash equilibrium, and the second specifies that Maynard Smith's second condition is met. Note that the two definitions are not precisely equivalent: for example, each pure strategy in the coordination game below is an ESS by the first definition but not the second. In words, this definition looks like this: The payoff of the first player when both players play strategy S is higher than (or equal to) the payoff of the first player when he changes to another strategy T and the second player keeps his strategy S and the payoff of the first player when only his opponent changes his strategy to T is higher than his payoff in case that both of players change their strategies to T. This formulation more clearly highlights the role of the Nash equilibrium condition in the ESS. It also allows for a natural definition of related concepts such as a weak ESS or an evolutionarily stable set. Examples of differences between Nash equilibria and ESSes In most simple games, the ESSes and Nash equilibria coincide perfectly. For instance, in the prisoner's dilemma there is only one Nash equilibrium, and its strategy (Defect) is also an ESS. Some games may have Nash equilibria that are not ESSes. For example, in harm thy neighbor (whose payoff matrix is shown here) both (A, A) and (B, B) are Nash equilibria, since players cannot do better by switching away from either. However, only B is an ESS (and a strong Nash). 
A is not an ESS, so B can neutrally invade a population of A strategists and predominate, because B scores higher against B than A does against B. This dynamic is captured by Maynard Smith's second condition, since E(A, A) = E(B, A), but it is not the case that E(A, B) > E(B, B).
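These two conditions are easy to state as code. The sketch below checks each pure strategy of a symmetric two-player game against Maynard Smith and Price's conditions; the payoff values are invented for illustration, chosen so that, as in the harm-thy-neighbor example, E(A, A) = E(B, A) while E(B, B) > E(A, B), making B the only ESS.

```python
# Payoff to the row player in a symmetric 2-player game.
# Illustrative values matching the harm-thy-neighbor description:
# E(A,A) = E(B,A) but E(B,B) > E(A,B), so only B should come out as an ESS.
E = {
    ("A", "A"): 2, ("A", "B"): 1,
    ("B", "A"): 2, ("B", "B"): 2,
}
strategies = ["A", "B"]

def is_ess(s: str) -> bool:
    """Maynard Smith & Price conditions: for every mutant T != S, either
    E(S,S) > E(T,S), or E(S,S) == E(T,S) and E(S,T) > E(T,T)."""
    for t in strategies:
        if t == s:
            continue
        if E[s, s] > E[t, s]:
            continue                      # strict Nash against this mutant
        if E[s, s] == E[t, s] and E[s, t] > E[t, t]:
            continue                      # second condition restores stability
        return False
    return True

for s in strategies:
    print(s, "is an ESS" if is_ess(s) else "is not an ESS")
# Prints: A is not an ESS, B is an ESS.
```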
US Element by Westin, a brand of Starwood Hotels and Resorts Worldwide Element Electronics, an American electronics company Element Skateboards, a skateboard manufacturer Elements (restaurant), in Princeton, New Jersey Elements, Hong Kong, a shopping mall in Hong Kong Entertainment Music "Element" (song), a 2017 song by Kendrick Lamar "Element", a song by Deerhunter from their 2019 album Why Hasn't Everything Already Disappeared? "Elements", a song by Stratovarius from the 2003 album Elements Pt. 1 Element (production team), a Norwegian production and songwriting team Elements (band), American jazz band 1980s–1990s Elements (Atheist album), 1993 Elements (B.o.B album), 2016 Elements (Ludovico Einaudi album), 2015 Elements (Roger Glover album), 1978 Elements (Steve Howe album), 2003 Elements, debut EP by Elaine, 2019 Elements Box by Mike Oldfield, four CD edition Elements – The Best of Mike Oldfield, single CD edition Elements – The Best of Mike Oldfield (video) The Elements (Joe Henderson album), 1973 The Elements (Second Person album), 2007 The Elements (TobyMac album), 2018 "The Elements" (song), by Tom Lehrer "Element", a song by the American rapper Pop Smoke from his mixtape Meet the Woo 2 Other entertainment Elements (miniseries), a Cartoon Network miniseries Elements trilogy, three films written and directed by Deepa Mehta Elements (esports), a team in the European League of Legends Championship Series Other Element (criminal law), a basic set of common law principles regarding criminal liability Elements (journal), a scientific publication about mineralogy, geochemistry, and petrology Element Magazine, Asian men's magazine The elements, a term used to refer to natural perils such as erosion, rough terrain, rust, cold, heat,
air, fire, water)
The elements, a religious term referring to the bread and wine of the Eucharist
Five elements (Japanese philosophy), the basis of the universe according to Japanese philosophy
Mahābhūta, the four great elements in Buddhism, five in Hinduism
Tattva, an elemental basis of the universe according to Hindu Samkhya philosophy
Wuxing (Chinese philosophy), sometimes translated as five elements, the basis of the universe according to Chinese Taoism
Technology
Element (UML), part of the Unified Modeling Language superstructure
Data element, a unit of data
Electrical element, an abstract part of a circuit
HTML element, a standard part of an HTML document
Markup element, a part of a document defined by a markup language
Structural element, in construction and engineering
Adobe Photoshop Elements, a bitmap graphics program
Adobe Premiere Elements, a video editing computer program
Honda Element, a car
Element (software),
Latin meaning "extreme" and Greek () meaning "love") is an organism that is able to live (or in some cases thrive) in extreme environments, i.e. environment that make survival challenging such as due to extreme temperature, radiation, salinity, or pH level. These organisms are ecologically dominant in the evolutionary history of the planet. Dating back to more than 40 million years ago, extremophiles have continued to thrive in the most extreme conditions, making them one of the most abundant lifeforms. Characteristics In the 1980s and 1990s, biologists found that microbial life has great flexibility for surviving in extreme environments—niches that are acidic, extraordinarily hot or within irregular air pressure for example—that would be completely inhospitable to complex organisms. Some scientists even concluded that life may have begun on Earth in hydrothermal vents far under the ocean's surface. According to astrophysicist Steinn Sigurdsson, "There are viable bacterial spores that have been found that are 40 million years old on Earth—and we know they're very hardened to radiation." Some bacteria were found living in the cold and dark in a lake buried a half-mile deep under the ice in Antarctica, and in the Marianas Trench, the deepest place in Earth's oceans. Expeditions of the International Ocean Discovery Program found microorganisms in 120 °C sediment that is 1.2 km below seafloor in the Nankai Trough subduction zone. Some microorganisms have been found thriving inside rocks up to below the sea floor under of ocean off the coast of the northwestern United States. According to one of the researchers, "You can find microbes everywhere—they're extremely adaptable to conditions, and survive wherever they are." A key to extremophile adaptation is their amino acid composition, affecting their protein folding ability under particular conditions. Studying extreme environments on Earth can help researchers understand the limits of habitability on other worlds. Tom Gheysens from Ghent University in Belgium and some of his colleagues have presented research findings that show spores from a species of Bacillus bacteria survived and were still viable after being heated to temperatures of . Classifications There are many classes of extremophiles that range all around the globe; each corresponding to the way its environmental niche differs from mesophilic conditions. These classifications are not exclusive. Many extremophiles fall under multiple categories and are classified as polyextremophiles. For example, organisms living inside hot rocks deep under Earth's surface are thermophilic and piezophilic such as Thermococcus barophilus. A polyextremophile living at the summit of a mountain in the Atacama Desert might be a radioresistant xerophile, a psychrophile, and an oligotroph. Polyextremophiles are well known for their ability to tolerate both high and low pH levels. Terms AcidophileAn organism with optimal growth at pH levels of 3.0 or below. AlkaliphileAn organism with optimal growth at pH levels of 9.0 or above. AnaerobeAn organism with optimal growth in the absence of molecular oxygen. Two sub-types exist: facultative anaerobe and obligate anaerobe. A facultative anaerobe can tolerate anoxic and oxic conditions whilst an obligate anaerobe will die in the presence of even low levels of molecular oxygen.: Capnophile An organism with optimal growth conditions in high concentrations of carbon dioxide. 
An example would be Mannheimia succiniciproducens, a bacterium that inhabits a ruminant animal's digestive system.
Cryptoendolith: An organism that lives in microscopic spaces within rocks, such as pores between aggregate grains. These may also be called endoliths, a term that also includes organisms populating fissures, aquifers, and faults filled with groundwater in the deep subsurface.
Halophile: An organism with optimal growth at a concentration of dissolved salts of 50 g/L (= 5% m/v) or above.
Hyperpiezophile: An organism with optimal growth at hydrostatic pressures above 50 MPa (= 493 atm = 7,252 psi).
Hyperthermophile: An organism with optimal growth at temperatures above .
Hypolith: An organism that lives underneath rocks in cold deserts.
Metallotolerant: Capable of tolerating high levels of dissolved heavy metals in solution, such as copper, cadmium, arsenic, and zinc. Examples include Ferroplasma sp., Cupriavidus metallidurans and GFAJ-1.
Oligotroph: An organism with optimal growth in nutritionally limited environments.
Osmophile: An organism with optimal growth in environments with a high sugar concentration.
Piezophile: An organism with optimal growth at hydrostatic pressures above 10 MPa (= 99 atm = 1,450 psi). Also referred to as a barophile.
Polyextremophile: A polyextremophile (faux Ancient Latin/Greek for 'affection for many extremes') is an organism that qualifies as an extremophile under more than one category.
Psychrophile/Cryophile: An organism with optimal growth at temperatures of or lower.
Radioresistant: Organisms resistant to high levels of ionizing radiation, most commonly ultraviolet radiation. This category also includes organisms capable of resisting nuclear radiation.
Sulphophile: An organism with optimal growth conditions in high concentrations of sulfur. An example would be Sulfurovum Epsilonproteobacteria, a sulfur-oxidizing bacterium that inhabits deep-water sulfur vents.
Thermophile: An organism with optimal growth at temperatures above .
Xerophile: An organism with optimal growth at water activity below 0.8.
In astrobiology
Astrobiology is the multidisciplinary field that investigates the deterministic conditions and contingent events with which life arises, distributes, and evolves in the universe. Astrobiology makes use of physics, chemistry, astronomy, solar physics, biology, molecular biology, ecology, planetary science, geography, and geology to investigate the possibility of life on other worlds and help recognize biospheres that might be different from that on Earth. Astrobiologists are particularly interested in studying extremophiles, as it allows them to map what is known about the limits of life on Earth to potential extraterrestrial environments. For example, the analogous deserts of Antarctica are exposed to harmful UV radiation, low temperature, high salt concentration and low mineral concentration. These conditions are similar to those on Mars. Therefore, finding viable microbes in the subsurface of Antarctica suggests that there may be microbes surviving in endolithic communities and living under the Martian surface. Research indicates it is unlikely that Martian microbes exist on the surface or at shallow depths, but that they may be found at subsurface depths of around 100 meters. Recent research carried out on extremophiles in Japan involved a variety of bacteria, including Escherichia coli and Paracoccus denitrificans, being subjected to conditions of extreme gravity.
The bacteria were cultivated while being rotated in an ultracentrifuge at high speeds corresponding to 403,627 g (i.e. 403,627 times the gravity experienced on Earth). Paracoccus denitrificans was one of the bacteria which displayed not only survival but also robust cellular growth
under these conditions of hyperacceleration, which are usually found only in cosmic environments such as on very massive stars or in the shock waves of supernovas. Analysis showed that the small size of prokaryotic cells is essential for successful growth under hypergravity. The research has implications for the feasibility of panspermia. On 26 April 2012, scientists reported that lichen survived, and showed remarkable adaptation of its photosynthetic activity, over 34 days under simulated Martian conditions in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR). On 29 April 2013, scientists at Rensselaer Polytechnic Institute, funded by NASA, reported that, during spaceflight on the International Space Station, microbes seem to adapt to the space environment in ways "not observed on Earth" and in ways that "can lead to increases in growth and virulence". On 19 May 2014, scientists announced that numerous microbes, like Tersicoccus phoenicis, may be resistant to the methods usually used in spacecraft assembly clean rooms. It is not currently known whether such resistant microbes could have withstood space travel and are present on the Curiosity rover now on the planet Mars. On 20 August 2014, scientists confirmed the existence of microorganisms living half a mile below the ice of Antarctica. In September 2015, scientists from the CNR-National Research Council of Italy reported that S. solfataricus was able to survive Martian radiation at a wavelength that was considered extremely lethal to most bacteria. This discovery is significant because it indicates that not only bacterial spores, but also growing cells, can be remarkably resistant to strong UV radiation.
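For a sense of scale of the hyperacceleration figure reported in the hypergravity experiment above, the following back-of-the-envelope sketch (hypothetical Python, not taken from the cited study; the 5 cm rotor radius is an assumed value) relates rotor speed and radius to relative centrifugal force using the standard formula RCF = (2πn/60)²·r/g.

import math

G = 9.81  # standard gravity, m/s^2

def rcf(rpm, radius_m):
    """Relative centrifugal force, in multiples of g, at a given rotor speed and radius."""
    omega = 2 * math.pi * rpm / 60.0   # angular velocity in rad/s
    return omega ** 2 * radius_m / G

def rpm_for_rcf(target_g, radius_m):
    """Rotor speed (rpm) needed to reach target_g at the given radius."""
    omega = math.sqrt(target_g * G / radius_m)
    return omega * 60.0 / (2 * math.pi)

# Assumed 5 cm effective rotor radius (an illustration, not a figure from the study):
print(round(rpm_for_rcf(403_627, 0.05)))   # roughly 85,000 rpm

Under that assumed radius, reaching roughly 400,000 g requires a rotor speed on the order of 85,000 rpm, which is why such experiments rely on specialised ultracentrifuges.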
In June 2016, scientists from Brigham Young University reported that endospores of Bacillus subtilis were able to survive high-speed impacts up to 299±28 m/s, extreme shock, and extreme deceleration. They pointed out that this feature might allow endospores to survive and to be transferred between planets by traveling within meteorites or by experiencing atmospheric disruption. Moreover, they suggested that the landing of spacecraft may also result in interplanetary spore transfer, given that spores can survive high-velocity impact when ejected from the spacecraft onto the planet's surface. This was the first study to report that bacteria can survive impacts at such high velocities. However, the lethal impact speed is unknown, and further experiments should subject bacterial endospores to even higher-velocity impacts. In August 2020, scientists reported that bacteria that feed on air, discovered in 2017 in Antarctica, are likely not limited to Antarctica after discovering the
more commonly known as HBCUs."The Higher Education Act of 1965, as amended, defines an HBCU as: "…any historically black college or university that was established prior to 1964, whose principal mission was, and is, the education of black Americans, and that is accredited by a nationally recognized accrediting agency or association determined by the Secretary [of Education] to be a reliable authority as to the quality of training offered or is, according to such an agency or association, making reasonable progress toward accreditation."Known as the Bilingual Education Act, Title VII of ESEA (Public Law 90-247), offered federal aid to school districts to provide bilingual instruction for students with limited English speaking ability. The Education Amendments of 1972 (Public Law 92-318, 86 Stat. 327) establishes the Education Division in the U.S. Department of Health, Education, and Welfare and the National Institute of Education. Title IX of the Education Amendments of 1972 states, "No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any education program or activity receiving Federal financial assistance." Equal Educational Opportunities Act of 1974 (Public Law 93-380) - Civil Rights Amendments to the Elementary and Secondary Education Act of 1965:"Title I: Bilingual Education Act - Authorizes appropriations for carrying out the provisions of this Act. Establishes, in the Office of Education, an Office of Bilingual Education through which the Commissioner of Education shall carry out his functions relating to bilingual education. Authorizes appropriations for school nutrition and health services, correction education services, and ethnic heritage studies centers. Title II: Equal Educational Opportunities and the Transportation of Students: Equal Educational Opportunities Act - Provides that no state shall deny equal educational opportunity to an individual on account of his or her race, color, sex, or national origin by means of specified practices... Title IV: Consolidation of Certain Education Programs: Authorizes appropriations for use in various education programs including libraries and learning resources, education for use of the metric system of measurement, gifted and talented children programs, community schools, career education, consumers' education, women's equity in education programs, and arts in education programs. Community Schools Act - Authorizes the Commissioner to make grants to local educational agencies to assist in planning, establishing, expanding, and operating community education programs Women's Educational Equity Act - Establishes the Advisory Council on Women's Educational Programs and sets forth the composition of such Council. Authorizes the Commissioner of Education to make grants to, and enter into contracts with, public agencies, private nonprofit organizations, and individuals for activities designed to provide educational equity for women in the United States. Title V: Education Administration: Family Educational Rights and Privacy Act (FERPA)- Provides that no funds shall be made available under the General Education Provisions Act to any State or local educational agency or educational institution which denies or prevents the parents of students to inspect and review all records and files regarding their children. 
Title VII: National Reading Improvement Program: Authorizes the Commissioner to contract with State or local educational agencies for the carrying out by such agencies, in schools having large numbers of children with reading deficiencies, of demonstration projects involving the use of innovative methods, systems, materials, or programs which show promise of overcoming such reading deficiencies."
In 1975, the Education for All Handicapped Children Act (Public Law 94-142) ensured that all handicapped children (ages 3-21) received a "free, appropriate public education" designed to meet their special needs.
1980-1989: A Nation at Risk
During the 1980s, some of the momentum of education reform moved from the left to the right, with the release of A Nation at Risk and Ronald Reagan's efforts to reduce or eliminate the United States Department of Education. "[T]he federal government and virtually all state governments, teacher training institutions, teachers' unions, major foundations, and the mass media have all pushed strenuously for higher standards, greater accountability, more "time on task," and more impressive academic results". Per the shift in educational motivation, families sought institutional alternatives, including "charter schools, progressive schools, Montessori schools, Waldorf schools, Afrocentric schools, religious schools - or home school instruction in their communities." In 1984, President Reagan enacted the Education for Economic Security Act (Public Law 98-377). In 1989, the Child Development and Education Act of 1989 (Public Law 101-239) authorized funds for Head Start programs to include child care services. In the latter half of the decade, E. D. Hirsch put forth an influential attack on one or more versions of progressive education, advocating an emphasis on "cultural literacy"—the facts, phrases, and texts he regarded as essential shared knowledge. See also Uncommon Schools.
1990-2000: Standards-Based Education Model
In 1994, the land grant system was expanded via the Elementary and Secondary Education Act to include tribal colleges. Most states and districts in the 1990s adopted Outcome-Based Education (OBE) in some form or another. A state would create a committee to adopt standards and choose a quantitative instrument to assess whether the students knew the required content or could perform the required tasks. In 1992, the National Commission on Time and Learning, Extension (Public Law 102–359) revised funding for civic education programs and for educationally disadvantaged children. In 1994, the Improving America's Schools Act (IASA) (Public Law 103-382) reauthorized the Elementary and Secondary Education Act of 1965, amended as the Eisenhower Professional Development Program; IASA designated Title I funds for low-income and otherwise marginalized groups, i.e., females, minorities, individuals with disabilities, and individuals with limited English proficiency (LEP). By tethering federal funding distributions to student achievement, IASA sought to use high-stakes testing and curriculum standards to hold schools accountable for the results of these groups at the same level as other students. The Act significantly increased impact aid for the establishment of the Charter School Program, drug awareness campaigns, bilingual education, and technology. In 1998, the Charter School Expansion Act (Public Law 105-278) amended the Charter School Program, which had been enacted in 1994.
2001-2015: No Child Left Behind
The Consolidated Appropriations Act of 2001 (Public Law 106-554) appropriated funding to repair educational institutions' buildings as well as to repair and renovate charter school facilities, reauthorized the Even Start program, and enacted the Children's Internet Protection Act. The standards-based National Education Goals 2000, set by the U.S. Congress in the 1990s, were based on the principles of outcomes-based education. In 2002, the standards-based reform movement culminated in the No Child Left Behind Act of 2001 (Public Law 107-110), under which achievement standards were set by each individual state. This federal policy was active in the United States until 2015. An article released by CNBC.com said a principal Senate committee would take into account legislation that reauthorizes and modernizes the Carl D. Perkins Act. President George W. Bush approved this statute on August 12, 2006. The new bill emphasized the importance of federal funding for various Career and Technical Education (CTE) programs that better provide learners with in-demand skills. Pell Grants are a specific amount of money given by the government each school year to disadvantaged students who need help paying college tuition. At present, there are many initiatives aimed at dealing with these concerns, such as innovative cooperation between federal and state governments, educators, and the business sector. One of these efforts is the Pathways in Technology Early College High School (P-TECH). This six-year program was launched in cooperation with IBM, educators from three cities in New York, Chicago, and Connecticut, and over 400 businesses. The program offers students high school and associate degree programs focusing on the STEM curriculum. The High School Involvement Partnership, a private and public venture, was established with the help of Northrop Grumman, a global security firm. It has given assistance to some 7,000 high school students (juniors and seniors) since 1971 by means of one-on-one coaching as well as exposure to STEM areas and careers.
2016-2021: Every Student Succeeds Act
The American Recovery and Reinvestment Act, enacted in 2009, reserved more than $85 billion in public funds to be used for education. In 2009, the Council of Chief State School Officers and the National Governors Association launched the Common Core State Standards Initiative. In 2012, the Obama administration launched the Race to the Top competition aimed at spurring K–12 education reform through higher standards. "The Race to the Top – District competition will encourage transformative change within schools, targeted toward leveraging, enhancing, and improving classroom practices and resources. The four key areas of reform include:
Development of rigorous standards and better assessments
Adoption of better data systems to provide schools, teachers, and parents with information about student progress
Support for teachers and school leaders to become more effective
Increased emphasis and resources for the rigorous interventions needed to turn around the lowest-performing schools"
In 2015, under the Obama administration, many of the more restrictive elements that were enacted under No Child Left Behind (NCLB, 2001) were removed in the Every Student Succeeds Act (ESSA, 2015), which limits the role of the federal government in school liability.
The Every Student Succeeds Act (Public Law 114-95) reformed educational standards by "moving away from such high stakes and assessment based accountability models" and focused on assessing student achievement through a holistic approach that utilizes qualitative measures. Some argue that giving states more authority can help prevent considerable discrepancies in educational performance across different states. ESSA was approved by President Obama in 2015; it amended and reauthorized the Elementary and Secondary Education Act of 1965. The Department of Education may carry out measures to draw attention to such differences by pinpointing the lowest-performing state governments and supplying information on the condition and progress of each state on different educational parameters. It can also provide reasonable funding along with technical aid to help states with similar demographics collaborate in improving their public education programs.
Social and Emotional Learning: Strengths-Based Education Model
This model uses a methodology that values purposeful engagement in activities that turn students into self-reliant and efficient learners. Holding to the view that everyone possesses natural gifts that are unique to one's personality (e.g. computational aptitude, musical talent, visual arts abilities), it likewise upholds the idea that children, despite their inexperience and tender age, are capable of coping with anguish, able to survive hardships, and can rise above difficult times.
Trump Administration
In 2017, Betsy DeVos was installed as the 11th Secretary of Education. A strong proponent of school choice, school voucher programs, and charter schools, DeVos was a much-contested choice, as her own education and career had little to do with formal experience in the US education system. In a Republican-dominated Senate, she received a 50–50 vote, a tie that was broken by Vice President Mike Pence. Prior to her appointment, DeVos received a BA degree in business economics from Calvin College in Grand Rapids, Michigan, and she served as chairman of an investment management firm, The Windquest Group. She supported the idea of leaving education to state governments under the new K-12 legislation. DeVos cited the interventionist approach the federal government had taken to education policy following the signing of the ESSA; the primary approach of that law has not changed significantly. In her opinion, the populist politics of the education reform movement encouraged reformers to make promises that were not very realistic and were therefore difficult to deliver. On July 31, 2018, President Donald Trump signed the Strengthening Career and Technical Education for the 21st Century Act (HR 2353). The Act reauthorized the Carl D. Perkins Career and Technical Education Act, a $1.2 billion program modified by the United States Congress in 2006. A move to change the Higher Education Act was also deferred. The legislation, which took effect on July 1, 2019, replaced the Carl D. Perkins Career and Technical Education Act of 2006 (Perkins IV). Stipulations in Perkins V enable school districts to make use of federal subsidies for all students' career search and development activities in the middle grades, as well as comprehensive guidance and academic mentoring in the upper grades. At the same time, this law revised the meaning of "special populations" to include homeless persons, foster youth, those who left the foster care system, and children with parents on active duty in the United States armed forces.
Barriers to Reform Education Inequalities Facing Students of Color Another factor to consider in education reform is that of equity and access. Contemporary issues in the United States regarding education faces a history of inequalities that come with consequences for education attainment across different social groups. Racial and Socioeconomic Class Segregation A history of racial, and subsequently class, segregation in the U.S. resulted from practices of law. Residential segregation is a direct result of twentieth century policies that separated by race using zoning and redlining practices, in addition to other housing policies, whose effects continue to endure in the United States. These neighborhoods that have been segregated de jure—by force of purposeful public policy at the federal, state, and local levels—disadvantage people of color as students must attend school near their homes. With the inception of the New Deal between 1933 and 1939, and during and following World War II, federally funded public housing was explicitly racially segregated by the local government in conjunction with federal policies through projects that were designated for Whites or Black Americans in the South, Northeast, Midwest, and West. Following an ease on the housing shortage post-World War II, the federal government subsidized the relocation of Whites to suburbs. The Federal Housing and Veterans Administration constructed such developments on the East Coast in towns like Levittown on Long Island, New Jersey, Pennsylvania, and Delaware. On the West Coast, there was Panorama City, Lakewood, Westlake, and Seattle suburbs developed by Bertha and William Boeing. As White families left for the suburbs, Black families remained in public housing and were explicitly placed in Black neighborhoods. Policies such as public housing director, Harold Ickes', "neighborhood composition rule" maintained this segregation by establishing that public housing must not interfere with pre-existing racial compositions of neighborhoods. Federal loan guarantees were given to builders who adhered to the condition that no sales were made to Black families and each deed prohibited re-sales to Black families, what the Federal Housing Administration (FHA) described as an "incompatible racial element". In addition, banks and savings intuitions refused loans to Black families in White suburbs and Black families in Black neighborhoods. In the mid-twentieth century, urban renewal programs forced low-income black residents to reside in places farther from universities, hospitals, or business districts and relocation options consisted of public housing high-rises and ghettos. This history of de jure segregation has impacted resource allocation for public education in the United States, with schools continuing to be segregated by race and class. Low-income White students are more likely than Black students to be integrated into middle-class neighborhoods and less likely to attend schools with other predominantly disadvantaged students. Students of color disproportionately attend underfunded schools and Title I schools in environments entrenched in environmental pollution and stagnant economic mobility with limited access to college readiness resources. 
According to research, schools attended by primarily Hispanic or African American students often have high turnover of teaching staff and are labeled high-poverty schools, in addition to having limited educational specialists, less available extracurricular opportunities, greater numbers of provisionally licensed teachers, little access to technology, and buildings that are not well maintained. With this segregation, more local property tax is allocated to wealthier communities and public schools' dependence on local property taxes has led to large disparities in funding between neighboring districts. The top 10% of wealthiest school districts spend approximately ten times more per student than the poorest 10% of school districts. The Racial Wealth Gap This history of racial and socioeconomic class segregation in the U.S. has manifested into a racial wealth divide. With this history of geographic and economic segregation, trends illustrate a racial wealth gap that has impacted educational outcomes and its concomitant economic gains for minorities. Wealth or net worth—the difference between gross assets and debt—is a stock of financial resources and a significant indicator of financial security that offers a more complete measure of household capability and functioning than income. Within the same income bracket, the chance of completing college differs for White and Black students. Nationally, White students are at least 11% more likely to complete college across all four income groups. Intergenerational wealth is another result of this history, with White college-educated families three times as likely as Black families to get an inheritance of $10,000 or more. 10.6% of White children from low-income backgrounds and 2.5% of Black children from low-income backgrounds reach the top 20% of income distribution as adults. Less than 10% of Black children from low-income backgrounds reach the top 40%. Access to Early Childhood Education These disadvantages facing students of color are apparent early on in early childhood education. By the age of five, children of color are impacted by opportunity gaps indicated by poverty, school readiness gap, segregated low-income neighborhoods, implicit bias, and inequalities within the justice system as Hispanic and African American boys account for as much as 60% of total prisoners within the incarceration population. These populations are also more likely to experience adverse childhood experiences (ACEs). High-quality early care and education are less accessible to
class to work, the social hygiene of a lower or immigrant class, the preparation of citizens in a democracy or republic, etc. The idea that all children should be provided with a high level of education is relatively recent, and has arisen largely in the context of Western democracy in the 20th century. The "beliefs" of school districts are optimistic that, quite literally, "all students will succeed", which in the context of high school graduation examinations in the United States means that all students in all groups, regardless of heritage or income, will pass tests that on their introduction typically fall beyond the ability of all but the top 20 to 30 percent of students. Such claims disregard historical research showing that all ethnic and income groups score differently on standardized tests and standards-based assessments, and that student achievement follows a bell curve. Instead, education officials across the world believe that by setting clear, achievable, higher standards, aligning the curriculum, and assessing outcomes, learning can be increased for all students, and more students can succeed than the 50 percent who are defined to be above or below grade level by norm-referenced standards. States have tried to use state schools to increase state power, especially to make better soldiers and workers. This strategy was first adopted to unify related linguistic groups in Europe, including France, Germany and Italy. Exact mechanisms are unclear, but it often fails in areas where populations are culturally segregated, as when the U.S. Indian school service failed to suppress Lakota and Navajo, or when a culture has widely respected autonomous cultural institutions, as when the Spanish failed to suppress Catalan. Many students of democracy have desired to improve education in order to improve the quality of governance in democratic societies; the necessity of good public education follows logically if one believes that the quality of democratic governance depends on the ability of citizens to make informed, intelligent choices, and that education can improve these abilities. Politically motivated educational reforms of the democratic type are recorded as far back as Plato in The Republic. In the United States, this lineage of democratic education reform was continued by Thomas Jefferson, who advocated ambitious reforms partly along Platonic lines for public schooling in Virginia. Another motivation for reform is the desire to address socio-economic problems, which many people see as having significant roots in lack of education. Starting in the 20th century, people have attempted to argue that small improvements in education can have large returns in such areas as health, wealth and well-being. For example, in Kerala, India in the 1950s, increases in women's health were correlated with increases in female literacy rates. In Iran, increased primary education was correlated with increased farming efficiencies and income. In both cases some researchers have interpreted these correlations as representing an underlying causal relationship: education causes socio-economic benefits. In the case of Iran, researchers concluded that the improvements were due to farmers gaining reliable access to national crop prices and scientific farming information.
History of Education Reform
Classical Education
As taught from the 18th to the 19th century, Western classical education curricula focused on concrete details like "Who?", "What?", "When?", "Where?".
Unless carefully taught, large group instruction naturally neglects the theoretical "Why?" and "Which?" questions that can be discussed in smaller groups. Classical education in this period also did not teach local (vernacular) languages and culture. Instead, it taught high-status ancient languages (Greek and Latin) and their cultures. This produced odd social effects in which an intellectual class might be more loyal to ancient cultures and institutions than to their native vernacular languages and their actual governing authorities.
18th Century Reform
Child-Study
Jean-Jacques Rousseau, father of the Child Study Movement, centered the child as an object of study. Emile: Or, On Education, Rousseau's principal work on education, lays out an educational program for a hypothetical newborn's education through adulthood. Rousseau provided a dual critique of the educational vision outlined in Plato's Republic and that of his society in contemporary Europe. He regarded educational methods as contributing to the child's development; he held that a person could be either a man or a citizen. While Plato's plan could have brought the latter at the expense of the former, contemporary education failed at both tasks. He advocated a radical withdrawal of the child from society and an educational process that utilized the child's natural potential and curiosity, teaching the child by confronting them with simulated real-life obstacles and conditioning the child through experience rather than intellectual instruction. Rousseau's ideas were rarely implemented directly, but influenced later thinkers, particularly Johann Heinrich Pestalozzi and Friedrich Wilhelm August Fröbel, the inventor of the kindergarten.
National Identity
European and Asian nations regard education as essential to maintaining national, cultural, and linguistic unity. In the late 18th century (~1779), Prussia instituted primary school reforms expressly to teach a unified version of the national language, "Hochdeutsch". One significant reform was kindergarten, whose purpose was to have the children participate in supervised activities taught by instructors who spoke the national language. The concept embraced the idea that children absorb new language skills more easily and quickly when they are young. The current model of kindergarten is reflective of the Prussian model. In other countries, such as the Soviet Union, France, Spain, and Germany, the Prussian model has dramatically improved reading and math test scores for linguistic minorities.
19th Century – England
In the 19th century, before the advent of government-funded public schools, Protestant organizations established Charity Schools to educate the lower social classes. The Roman Catholic Church and governments later adopted the model. Designed to be inexpensive, Charity Schools operated on minimal budgets and strived to serve as many needy children as possible. This led to the development of grammar schools, which primarily focused on teaching literacy, grammar, and bookkeeping skills so that the students could use books as an inexpensive resource to continue their education. Grammar was the first third of the then-prevalent system of classical education. Educators Joseph Lancaster and Andrew Bell developed the monitorial system, also known as "mutual instruction" or the "Bell–Lancaster method". Their contemporary, educationalist and writer Elizabeth Hamilton, suggested that in some important aspects the method had been "anticipated" by the Belfast schoolmaster David Manson.
In the 1760s Manson had developed a peer-teaching and monitoring system within the context of what he called a "play school" that dispensed with "the discipline of the rod". (More radically, Manson proposed the "liberty of each [child] to take the quantity [of lessons] agreeable to his inclination"). Lancaster, an impoverished Quaker during the early 19th century in London and Bell at the Madras School of India developed this model independent of one another. However, by design, their model utilizes more advanced students as a resource to teach the less advanced students; achieving student-teacher ratios as small as 1:2 and educating more than 1000 students per adult. The lack of adult supervision at the Lancaster school resulted in the older children acting as disciplinary monitors and taskmasters. To provide order and promote discipline the school implemented a unique internal economic system, inventing a currency called a Scrip. Although the currency was worthless in the outside world, it was created at a fixed exchange rate from a student's tuition and student's could use scrip to buy food, school supplies, books, and other items from the school store. Students could earn scrip through tutoring. To promote discipline, the school adopted a work-study model. Every job of the school was bid-for by students, with the largest bid winning. However, any student tutor could auction positions in his or her classes to earn scrip. The bids for student jobs paid for the adult supervision. Lancaster promoted his system in a piece called Improvements in Education that spread widely throughout the English-speaking world. Lancaster schools provided a grammar-school education with fully developed internal economies for a cost per student near $40 per year in 1999 U.S. dollars. To reduce cost and motivated to save up scrip, Lancaster students rented individual pages of textbooks from the school library instead of purchasing the textbook. Student's would read aloud their pages to groups. Students commonly exchanged tutoring and paid for items and services with receipts from down tutoring. The schools did not teach submission to orthodox Christian beliefs or government authorities. As a result, most English-speaking countries developed mandatory publicly paid education explicitly to keep public education in "responsible" hands. These elites said that Lancaster schools might become dishonest, provide poor education, and were not accountable to established authorities. Lancaster's supporters responded that any child could cheat given the opportunity, and that the government was not paying for the education and thus deserved no say in their composition. Though motivated by charity, Lancaster claimed in his pamphlets to be surprised to find that he lived well on the income of his school, even while the low costs made it available to the most impoverished street children. Ironically, Lancaster lived on the charity of friends in his later life. Modern Reformist Although educational reform occurred on a local level at various points throughout history, the modern notion of education reform is tied with the spread of compulsory education. Economic growth and the spread of democracy raised the value of education and increased the importance of ensuring that all children and adults have access to free, high-quality, effective education. Modern education reforms are increasingly driven by a growing understanding of what works in education and how to go about successfully improving teaching and learning in schools. 
However, in some cases, the reformers' goals of "high-quality education" has meant "high-intensity education", with a narrow emphasis on teaching individual, test-friendly subskills quickly, regardless of long-term outcomes, developmental appropriateness, or broader educational goals. Horace Mann In the United States, Horace Mann (1796 – 1859) of Massachusetts used his political base and role as Secretary of the Massachusetts State Board of Education to promote public education in his home state and nationwide. Advocating a substantial public investment be made in education, Mann and his proponents developed a strong system of state supported common schools.. His crusading style attracted wide middle class support. Historian Ellwood P. Cubberley asserts: No one did more than he to establish in the minds of the American people the conception that education should be universal, non-sectarian, free, and that its aims should be social efficiency, civic virtue, and character, rather than mere learning or the advancement of sectarian ends. In 1852, Massachusetts passed a law making education mandatory. This model of free, accessible education spread throughout the country and in 1917 Mississippi was the final state to adopt the law. John Dewey John Dewey, a philosopher and educator based in Chicago and New York, helped conceptualize the role of American and international education during the first four decades of the 20th century. An important member of the American Pragmatist movement, he carried the subordination of knowledge to action into the educational world by arguing for experiential education that would enable children to learn theory and practice simultaneously; a well-known example is the practice of teaching elementary physics and biology to students while preparing a meal. He was a harsh critic of "dead" knowledge disconnected from practical human life. Dewey criticized the rigidity and volume of humanistic education, and the emotional idealizations of education based on the child-study movement that had been inspired by Rousseau and those who followed him. Dewey understood that children are naturally active and curious and learn by doing. Dewey's understanding of logic is presented in his work "Logic, the Theory of Inquiry" (1938). His educational philosophies were presented in "My Pedagogic Creed", The School and Society, The Child and Curriculum, and Democracy and Education (1916). Bertrand Russell criticized Dewey's conception of logic, saying "What he calls "logic" does not seem to me to be part of logic at all; I should call it part of psychology." Dewey left the University of Chicago in 1904 over issues relating to the Dewey School. Dewey's influence began to decline in the time after the Second World War and particularly in the Cold War era, as more conservative educational policies came to the fore. Administrative Progressives The form of educational progressivism which was most successful in having its policies implemented has been dubbed "administrative progressivism" by historians. This began to be implemented in the early 20th century. While influenced particularly in its rhetoric by Dewey and even more by his popularizers, administrative progressivism was in its practice much more influenced by the Industrial Revolution and the concept economies of scale. 
The administrative progressives are responsible for many features of modern American education, especially American high schools: counseling programs, the move from many small local high schools to large centralized high schools, curricular differentiation in the form of electives and tracking, curricular, professional, and other forms of standardization, and an increase in state and federal regulation and bureaucracy, with a corresponding reduction of local control at the school board level. (Cf. "State, federal, and local control of education in the United States", below) (Tyack and Cuban, pp. 17–26) These reforms have since become heavily entrenched, and many today who identify themselves as progressives are opposed to many of them, while conservative education reform during the Cold War embraced them as a framework for strengthening traditional curriculum and standards. More recent methods, instituted by groups such as the think tank Reform's education division, and S.E.R. have attempted to pressure the government of the U.K. into more modernist educational reform, though this has met with limited success. History of Public School Reform - United States In the United States, public education is characterized as "any federally funded primary or secondary school, administered to some extent by the government, and charged with educating all citizens. Although there is typically a cost to attend some public higher education institutions, they are still considered part of public education." Colonial America In what would become the United States, the first public school was established in Boston, Massachusetts, on April 23, 1635. Puritan schoolmaster Philemon Pormont led instruction at the Boston Latin School. During this time, post-secondary education was a commonly utilized tool to distinguish one's social class and social status. Access to education was the "privilege of white, upper-class, Christian male children" in preparation for university education in ministry. In colonial America, to maintain Puritan religious traditions, formal and informal education instruction focused on teaching literacy. All colonists needed to understand the written language on some fundamental level in order to read the Bible and the colony's written secular laws. Religious leaders recognized that each person should be "educated enough to meet the individual needs of their station in life and social harmony." The first compulsory education laws were passed in Massachusetts between 1642 and 1648 when religious leaders noticed not all parents were providing their children with proper education. These laws stated that all towns with 50 or more families were obligated to hire a schoolmaster to teach children reading, writing, and basic arithmetic."In 1642 the General Court passed a law that required heads of households to teach all their dependents — apprentices and servants as well as their own children — to read English or face a fine. Parents could provide the instruction themselves or hire someone else to do it. Selectmen were to keep 'a vigilant eye over their brethren and neighbors,' young people whose education was neglected could be removed from their parents or masters."The 1647 law eventually led to establishing publicly funded district schools in all Massachusetts towns, although, despite the threat of fines, compliance and quality of public schools were less than satisfactory."Many towns were 'shamefully neglectful' of children's education. 
In 1718 '...by sad experience, it is found that many towns that not only are obliged by law, but are very able to support a grammar school, yet choose rather to incur and pay the fine or penalty than maintain a grammar school."When John Adams drafted the Massachusetts Constitution in 1780, he included provisions for a comprehensive education law that guaranteed public education to "all" citizens. However, access to formal education in secondary schools and colleges was reserved for free, white males. During the 17th and 18th centuries, females received little or no formal education except for home learning or attending Dame Schools. Likewise, many educational institutions maintained a policy of refusing to admit Black applicants. The Virginia Code of 1819 outlawed teaching enslaved people to read or write. Post Revolution Soon after the American Revolution, early leaders, like Thomas Jefferson and John Adams, proposed the creation of a more "formal and unified system of publicly funded schools" to satiate the need to "build and maintain commerce, agriculture and shipping interests". Their concept of free public education was not well received and did not begin to take hold on until the 1830s. However, in 1790, evolving socio-cultural ideals in the Commonwealth of Pennsylvania led to the first significant and systematic reform in education legislation that mandated economic conditions would not inhibit a child's access to education:"Constitution of the Commonwealth of Pennsylvania – 1790 ARTICLE VII Section I. The legislature shall, as soon as conveniently may be, provide, by law, for the establishment of schools throughout the state, in such manner that the poor may be taught gratis." Reconstruction and the American Industrial Revolution During Reconstruction, from 1865 to 1877, African Americans worked to encourage public education in the South. With the U.S. Supreme Court decision in Plessy v. Ferguson, which held that "segregated public facilities were constitutional so long as the black and white facilities were equal to each other", this meant that African American children were legally allowed to attend public schools, although these schools were still segregated based on race. However, by the mid-twentieth century, civil rights groups would challenge racial segregation. During the second half of the nineteenth century (1870 and 1914), America's Industrial Revolution refocused the nation's attention on the need for a universally accessible public school system. Inventions, innovations, and improved production methods were critical to the continued growth of American manufacturing. To compete in the global economy, an overwhelming demand for literate workers that possessed practical training emerged. Citizens argued, "educating children of the poor and middle classes would prepare them to obtain good jobs, thereby strengthen the nation's economic position." Institutions became an essential tool in yielding ideal factory workers with sought-after attitudes and desired traits such as dependability, obedience,
Ellensburg is home to a number of local art museums and galleries: the Kittitas County Historical Museum, The Goodey Gallery, the Clymer Museum and Gallery, Gallery-One Visual Arts Center, the 420 Loft Art Gallery, the Sarah Spurgeon Gallery in the Central Washington University (CWU) Department of Art, and the Museum of Culture & Environment at Central Washington University. On the first Friday of each month, Ellensburg hosts the First Friday Art Walk from 5:00 to 7:00 pm. Events The Ellensburg Farmers Market is held every Saturday from May to October in the heart of downtown Ellensburg. Ellensburg hosts the annual Winterhop Brewfest in January, at which over 21 microbreweries from around the Pacific Northwest serve their products at various venues in the historic downtown buildings. Every June, Ellensburg hosts Dachshunds on Parade, an event that draws Dachshund owners from all over the Northwest; events include a parade, Dachshund races, pet tricks, and a dog costume contest. Ellensburg hosts the annual Jazz in the Valley music festival on the last weekend in July. Ellensburg is a stop on the PRCA professional rodeo circuit, occurring each year on Labor Day weekend. The Ellensburg Rodeo has been a town tradition since 1923 and is the largest rodeo in Washington state. The rodeo arena is surrounded by the popular Kittitas County Fair, also held during Labor Day weekend. The Kittitas County Fair officially began in 1885 and has been held at its current location since 1923. Downtown Ellensburg hosts Buskers in the Burg on the last Saturday in September, featuring a variety of street performers (buskers), a giant puppet art parade, tasting halls, children's activities, and an outdoor evening concert. Geography According to the United States Census Bureau, the city has a total area of , of which is land and is water. Climate Owing to the strong Cascade rain shadow, Ellensburg experiences a typical Intermountain cool semi-arid climate (Köppen BSk). Demographics 2010 census As of the census of 2010, there were 18,174 people, 7,301 households, and 2,889 families living in the city. The population density was . There were 7,867 housing units at an average density of . The racial makeup of the city was 85.7% White, 1.5% African American, 1.0% Native American, 3.2% Asian, 0.2% Pacific Islander, 4.6% from other races, and 3.7% from two or more races. Hispanic or Latino of any race were 9.7% of the population. There were 7,301 households, of which 19.3% had children under the age of 18 living with them, 28.2% were married couples living together, 8.2% had a female householder with no husband present, 3.1% had a male householder with no wife present, and 60.4% were non-families. 35.1% of all households were made up of individuals, and 9.6% had someone living alone who was 65 years of age or older. The average household size was 2.16 and the average family size was 2.86. The median age in the city was 23.5 years. 14.2% of residents were under the age of 18; 41.2% were between the ages of 18 and 24; 21.8% were from 25 to 44; 13.9% were from 45 to 64; and 8.9% were 65 years of age or older. The gender makeup of the city was 50.1% male and 49.9% female. 2000 census As of the census of 2000, there were 15,414 people, 6,249 households, and 2,649 families living in the city. The population density was 2,338.9 people per square mile (903.1/km2). There were 6,732 housing units at an average density of 1,021.5 per square mile (394.4/km2). 
The racial makeup of the city was 88.07% White, 1.17% Black or African American, 0.95% Native American, 4.09% Asian, 0.16% Pacific Islander
south or west slope of what the Kalapuyans called Ya-po-ah. The "isolated hill" is now known as Skinner's Butte. The cabin was used as a trading post and was registered as an official post office on January 8, 1850. At this time the settlement was known by Anglos as Skinner's Mudhole. The settlement was relocated and renamed Eugene City in 1853. Formally incorporated as a city in 1862, it was named simply Eugene in 1889. Skinner ran a ferry service across the Willamette River where the Ferry Street Bridge now stands. Educational institutions The first major educational institution in the area was Columbia College, founded a few years earlier than the University of Oregon. It fell victim to two major fires in four years, and after the second fire the college decided not to rebuild. The part of south Eugene known as College Hill was the former location of Columbia College; there is no college there today. The town raised the initial funding to start a public university, which later became the University of Oregon, with the hope of turning the small town into a center of learning. In 1872, the Legislative Assembly passed a bill creating the University of Oregon as a state institution. Eugene bested the nearby town of Albany in the competition for the state university. In 1873, community member J.H.D. Henderson donated the hilltop land for the campus, overlooking the city. The university first opened in 1876, with the regents electing the first faculty and naming John Wesley Johnson as president. The first students registered on October 16, 1876. The first building was completed in 1877; it was named Deady Hall in honor of the first Board of Regents president and community leader Judge Matthew P. Deady. Twentieth century Eugene grew rapidly throughout most of the twentieth century, the exception being the early 1980s, when a downturn in the timber industry caused high unemployment. By 1985, the industry had recovered and Eugene began to attract more high-tech industries, earning it the moniker the "Emerald Shire". In 2012, Eugene and the surrounding metro area were dubbed the Silicon Shire. The first Nike shoe was used in 1972 during the US Olympic trials held in Eugene. Activism The 1970s saw an increase in community activism. Local activists stopped a proposed freeway and lobbied for the construction of the Washington Jefferson Park beneath the Washington-Jefferson Street Bridge. Community councils soon began to form as a result of these efforts. A notable impact of the turn to community-organized politics came with Eugene Local Measure 51, a 1978 ballot measure that repealed a gay rights ordinance, approved by the Eugene City Council in 1977, that had prohibited discrimination based on sexual orientation. Eugene is also home to Beyond Toxics, a nonprofit environmental justice organization founded in 2000. One hotspot for protest activity since the 1990s has been the Whiteaker district, located northwest of downtown Eugene. Whiteaker is primarily a working-class neighborhood that has become a vibrant cultural hub, a center of community and activism, and home to alternative artists. It saw an increase in activity in the 1990s after many young people drawn to Eugene's political climate relocated there. Animal rights groups have had a heavy presence in the Whiteaker, and several vegan restaurants are located there. According to David Samuels, the Animal Liberation Front and the Earth Liberation Front have had an underground presence in the neighborhood. 
The neighborhood is home to a number of communal apartment buildings, which are often organized by anarchist or environmentalist groups. Local activists have also produced independent films and started art galleries, community gardens, and independent media outlets. Copwatch, Food Not Bombs, and Critical Mass are also active in the neighborhood. Geography According to the United States Census Bureau, the city has a total area of , of which is land and is water. Eugene is at an elevation of . To the north of downtown is Skinner Butte. Northeast of the city are the Coburg Hills. Spencer Butte is a prominent landmark south of the city. Mount Pisgah is southeast of Eugene and includes Mount Pisgah Arboretum and Howard Buford Recreation Area, a Lane County park. Eugene is surrounded by foothills and forests to the south, east, and west, while to the north the land levels out into the Willamette Valley and consists mostly of farmland. The Willamette and McKenzie Rivers run through Eugene and its neighboring city, Springfield. Another important stream is Amazon Creek, whose headwaters are near Spencer Butte. The creek discharges into the Long Tom River north of Fern Ridge Reservoir, which is maintained for winter flood control by the Army Corps of Engineers. The Eugene Yacht Club hosts a sailing school and sailing regattas at Fern Ridge during summer months. Neighborhoods Eugene has 23 neighborhood associations: Active Bethel Citizens, Amazon Neighbors Association, Cal Young Neighborhood Association, Churchill Area Neighbors, Downtown Neighborhood Association, Fairmount Neighbors Association, Far West Neighborhood Association, Friendly Area Neighbors, Goodpasture Island Neighbors, Harlow, Industrial Corridor Community Organization, Jefferson Westside Neighbors, Laurel Hill Valley Citizens, Northeast Neighbors, River Road Community, Santa Clara Community (including Irving), South University Neighborhood Association, Southeast Neighbors, Southwest Hills Neighborhood Association, Trainsong Neighbors, West Eugene Community, West University Neighbors, and Whiteaker Community Council. Climate Like the rest of the Willamette Valley, Eugene lies in the Marine West Coast climate zone, with Mediterranean characteristics. Under the Köppen climate classification scheme, Eugene has a warm-summer Mediterranean climate (Köppen Csb). Temperatures can vary from cool to warm, with warm, dry summers and cool, wet winters. Spring and fall are also moist seasons, with light rain falling for long periods. The average rainfall is , with the wettest "rain year" being from July 1973 to June 1974 with and the driest from July 2000 to June 2001 with . Winter snowfall does occur, but it is sporadic and rarely accumulates in large amounts: the normal seasonal amount is , but the median is zero. The record snowfall was of accumulation due to a pineapple express on January 25–29, 1969. Ice storms, like snowfall, are rare, but occur sporadically. The hottest months are July and August, with a normal monthly mean temperature of , with an average of 16 days per year reaching . The coolest month is December, with a mean temperature of , and there are 53 mornings per year with a low at or below freezing, and 2.7 afternoons with highs not exceeding the freezing mark. Eugene's average annual temperature is , and annual precipitation is . Eugene is wetter and slightly cooler on average than Portland. 
Despite being about south and having only a slightly higher elevation, Eugene has a more continental climate, less subject to the maritime air that blows inland from the Pacific Ocean via the Columbia River. Eugene's normal annual mean minimum is , compared to in Portland; in August, the gap in the normal mean minimum widens to and for Eugene and Portland, respectively. Average winter temperatures (and summer high temperatures) are similar for the two cities. This disparity may be additionally caused by Portland's urban heat island, where the combination of black pavement and urban energy use raises nighttime temperatures. Extreme temperatures range from , recorded on December 8, 1972, to on June 27, 2021; the record cold daily maximum is , recorded on December 13, 1919, while, conversely, the record warm daily minimum is on July 22, 2006. Air quality and allergies Eugene is downwind of Willamette Valley grass seed farms. The combination of summer grass pollen and the confining shape of the hills around Eugene make it "the area of the highest grass pollen counts in the USA (>1,500 pollen grains/m3 of air)." These high pollen counts have led to difficulties for some track athletes who compete in Eugene. In the Olympic trials in 1972, "Jim Ryun won the 1,500 after being flown in by helicopter because he was allergic to Eugene's grass seed pollen." Further, six-time Olympian Maria Mutola abandoned Eugene as a training area "in part to avoid allergies". Demographics 2010 census According to the 2010 census, Eugene's population was 156,185. The population density was 3,572.2 people per square mile. There were 69,951 housing units at an average density of 1,600 per square mile. Those age 18 and over accounted for 81.8% of the total population. The racial makeup of the city was 85.8% White, 4.0% Asian, 1.4% Black or African American, 1.0% Native American, 0.2% Pacific Islander, and 4.7% from other races. Hispanics and Latinos of any race accounted for 7.8% of the total population. Of the non-Hispanics, 82% were White, 1.3% Black or African American, 0.8% Native American, 4% Asian, 0.2% Pacific Islander, 0.2% some other race alone, and 3.4% were of two or more races. Females represented 51.1% of the total population, and males represented 48.9%. The median age in the city was 33.8 years. 2000 census The census of 2000 showed there were 137,893 people, 58,110 households, and 31,321 families residing in the city of Eugene. The population density was 3,404.8 people per square mile (1,314.5/km). There were 61,444 housing units at an average density of 1,516.4 per square mile (585.5/km). The racial makeup of the city was 88.15% White, down from 99.5% in 1950, 3.57% Asian, 1.25% Black or African American, 0.93% Native American, 0.21% Pacific Islander, 2.18% from other races, and 3.72% from two or more races. 4.96% of the population were Hispanic or Latino of any race. There were 58,110 households, of which 25.8% had children under the age of 18 living with them, 40.6% were married couples living together, 9.7% had a female householder with no husband present, and 46.1% were non-families. 31.7% of all households were made up of individuals, and 9.4% had someone living alone who was 65 years of age or older. The average household size was 2.27 and the average family size was 2.87. In the city, the population was 20.3% under the age of 18, 17.3% from 18 to 24, 28.5% from 25 to 44, 21.8% from 45 to 64, and 12.1% who were 65 years of age or older. The median age was 33 years. 
For every 100 females, there were 96.0 males. For every 100 females age 18 and over, there were 94.0 males. The median income for a household in the city was $35,850, and the median income for a family was $48,527. Males had a median income of $35,549 versus $26,721 for females. The per capita income for the city was $21,315. About 8.7% of families and 17.1% of the population were below the poverty line, including 14.8% of those under age 18 and 7.1% of those age 65 or over. Economy Eugene's largest employers are PeaceHealth Medical Group, the University of Oregon, and the Eugene School District. Eugene's largest industries are wood products manufacturing and recreational vehicle manufacturing. Luckey's Club Cigar Store is one of the oldest bars in Oregon. Tad Luckey, Sr., purchased it in 1911, making it one of the oldest businesses in Eugene. The "Club Cigar", as it was called in the late 19th century, was for many years a men-only salon. It survived both the Great Depression and Prohibition, partly because Eugene was a "dry town" before the end of Prohibition. Corporate headquarters for the employee-owned Bi-Mart corporation and family-owned supermarket Market of Choice remain in Eugene. The city has over 25 breweries and offers a variety of dining options with a local focus; it is surrounded by wineries. The most notable fungus here is the truffle; Eugene hosts the annual Oregon Truffle Festival in January. Organically Grown Company, the largest distributor of organic fruits and vegetables in the northwest, started in Eugene in 1978 as a non-profit co-op for organic farmers. Notable local food processors, many of whom manufacture certified organic products, include Golden Temple (Yogi Tea), the Merry Hempsters, Springfield Creamery (Nancy's Yogurt, owned by the Kesey family), and Mountain Rose Herbs. Until July 2008, Hynix Semiconductor America had operated a large semiconductor plant in west Eugene. In late September 2009, Uni-Chem of South Korea announced its intention to purchase the Hynix site for solar cell manufacturing. However, the deal fell through and, as of late 2012, was no longer planned. In 2015, semiconductor manufacturer Broadcom purchased the plant with plans to upgrade and reopen it. The company abandoned these plans and put it up for sale in November 2016. The footwear repair product Shoe Goo is manufactured by Eclectic Products, based in Eugene. Run Gum, an energy gum created for runners, also began its life in Eugene; it was created by track athlete Nick Symmonds and track and field coach Sam Lapray in 2014. Burley Design LLC, which produces bicycle trailers, was founded in Eugene by Alan Scholz out of a Saturday Market business in 1978. Eugene is also the birthplace and home of Bike Friday bicycle manufacturer Green Gear Cycling. Many multinational businesses were launched in Eugene. Some of the most famous include Nike, Taco Time, and Brøderbund Software. In 2012, the Eugene metro region was dubbed the Silicon Shire for its growing tech industry. Top employers According to Eugene's 2017 Comprehensive Annual Financial Report, the city's top employers are: Homelessness Eugene has a growing problem with homelessness. The problem has been referenced in popular culture, including in the Futurama episode "The 30% Iron Chef". During the COVID-19 pandemic, the city experienced controversy over its continuing policy of homeless removal, despite CDC guidelines against such removals. 
Arts and culture Eugene has a significant population of people in pursuit of alternative ideas and a large original hippie population. Beginning in the 1960s, the countercultural ideas and viewpoints espoused by Ken Kesey became established as the seminal elements of the vibrant social tapestry that continue to define Eugene. The Merry Prankster, as Kesey was known, has arguably left the most indelible imprint of any cultural icon in his hometown. He is best known as the author of One Flew Over the Cuckoo's Nest and as the male protagonist in Tom Wolfe's The Electric Kool-Aid Acid Test. In 2005, the city council unanimously approved a new slogan for the city: "World's Greatest City for the Arts & Outdoors". While Eugene has a vibrant arts community for a city its size, and is well situated near many outdoor opportunities, this slogan was frequently criticized by locals as embarrassing and ludicrous. In early 2010, the slogan was changed to "A Great City for the Arts & Outdoors." Eugene's Saturday Market, open every Saturday from April through November, was founded in 1970 as the first "Saturday Market" in the United States. It is adjacent to the Lane County Farmer's Market in downtown Eugene. All vendors must create or grow all their own products. The market reappears as the "Holiday Market" between Thanksgiving and New Year's in the Lane County Events Center at the fairgrounds. Community Eugene is noted for its "community inventiveness." Many U.S. trends in community development originated in Eugene. The University of Oregon's participatory planning process, known as The Oregon Experiment, was the result of student protests in the early 1970s. The book of the same name is a major document in modern enlightenment thinking in planning and architectural circles. The process, still used by the university in modified form, was created by Christopher Alexander, whose works also directly inspired the creation of the Wiki. Some research for the book A Pattern Language, which inspired the Design Patterns movement and Extreme Programming, was done by Alexander in Eugene. Not coincidentally, those engineering movements also had origins here. Decades after its publication, A Pattern Language is still one of the best-selling books on urban design. In the 1970s, Eugene was packed with cooperative and community projects. It still has small natural food stores in many neighborhoods, some of the oldest student cooperatives in the country, and alternative schools have been part of the school district since 1971. The old Grower's Market, downtown near the Amtrak depot, is the only food cooperative in the U.S. with no employees. It is possible to see Eugene's trend-setting non-profit tendencies in much newer projects, such as the Tango Center and the Center for Appropriate Transport. In 2006, an initiative began to create a tenant-run development process for downtown Eugene. In the fall of 2003, neighbors noticed "an unassuming two-acre remnant orchard tucked into the Friendly Area Neighborhood" had been put up for sale by its owner, a resident of New York City. Learning a prospective buyer had plans to build several houses on the property, they formed a nonprofit organization called Madison Meadow in June 2004 in order to buy the property and "preserve it as undeveloped space in perpetuity." In 2007 their effort was named Third Best Community Effort by the Eugene Weekly, and by the end of 2008 they had raised enough money to purchase the property. The City of Eugene has an active Neighborhood Program. 
Several neighborhoods are known for their green activism. The Friendly neighborhood has a highly popular neighborhood garden established on the right-of-way of a street that was never built. There are a number of community gardens on public property. The Amazon neighborhood has a former church that has been turned into a community center. Whiteaker hosts a housing co-op dating from the early 1970s that has repurposed both of its parking lots into food production and play space. An unusual eco-village with natural building techniques and a large shared garden can be found in the Jefferson Westside neighborhood. A several-block area in the River Road neighborhood is known as a permaculture hotspot, with an increasing number of suburban homes trading grass for garden and installing rainwater catchment systems, food-producing landscapes, and solar retrofits. Several sites have planted gardens by removing driveways. Citizen volunteers are working with the City of Eugene to restore a 65-tree filbert grove on public property. There are deepening social and economic networks in the neighborhood. Annual cultural events The Asian Celebration, presented by the Asian Council of Eugene and Springfield, takes place in February at the Lane County Fairgrounds. The KLCC Microbrew Festival is held in February at the Lane County Fairgrounds; it provides participants with an introduction to a large range of microbrewery and craft beers, which play an important role in Pacific Northwest culture and the economy. Mount Pisgah Arboretum, at the base of Mount Pisgah, holds a Wildflower Festival in May and a Mushroom Festival and Plant Sale in October. The Oregon Festival of American Music, or OFAM, is held annually in the early summer. The Art and the Vineyard festival, held around the Fourth of July at Alton Baker Park, is the principal fundraiser for the Maude Kerns Art Center. The Oregon Bach Festival, hosted by the University of Oregon, is a major international festival in July. The nonprofit Oregon Country Fair takes place in July in nearby Veneta. The Lane County Fair occurs in July at the Lane County Fairgrounds. The Eugene/Springfield Pride Festival is held annually on the second Saturday in August from noon to 7:00 p.m. at Alton Baker Park. A part of Eugene LGBT culture since 1993, it provides a lighthearted and supportive social venue for the LGBT community, families, and friends. The Eugene Celebration is a three-day block party that usually takes place in the downtown area in August or September. The SLUG Queen coronation in August, a pageant with a campy spin, crowns a new SLUG Queen who "rains" over the Eugene Celebration Parade and serves as an unofficial ambassador of Eugene. Museums Eugene museums include the University of Oregon's Jordan Schnitzer Museum of Art and Museum of Natural and Cultural History,
the Oregon Air and Space Museum, Lane County History Museum, Maude Kerns Art Center, Shelton McMurphey Johnson House, and the Eugene Science Center. Performing arts Eugene is home to numerous cultural organizations, including the Eugene Symphony, the Eugene Ballet, the Eugene Opera, the Eugene Concert Choir, the Bushnell University Community Choir, the Oregon Mozart Players, the Oregon Bach Festival, the Oregon Children's Choir, the Eugene-Springfield Youth Orchestras, Ballet Fantastique, and the Oregon Festival of American Music. Principal performing arts venues include the Hult Center for the Performing Arts, The John G. Shedd Institute for the Arts ("The Shedd"), Matthew Knight Arena, Beall Concert Hall and the Erb Memorial Union ballroom on the University of Oregon campus, the McDonald Theatre, and W.O.W. Hall. A number of live theater groups are based in Eugene, including Free Shakespeare in the Park, Oregon Contemporary Theatre, The Very Little Theatre, Actors Cabaret, LCC Theatre, Rose Children's Theatre, and University Theatre. Each has its own performance venue. Music Because of its status as a college town, Eugene has been home to many music genres, musicians, and bands, ranging from electronic dance music such as dubstep and drum and bass to garage rock, hip hop, folk, and heavy metal. Eugene also has a growing reggae and street-performing bluegrass and jug band scene. Multi-genre act the Cherry Poppin' Daddies became a prominent figure in Eugene's music scene, serving as the house band at Eugene's W.O.W. Hall. In the late 1990s, their contributions to the swing revival movement propelled them to national stardom. Rock band Floater originated in Eugene, as did the Robert Cray blues band. Doom metal band YOB is among the leaders of the Eugene heavy music scene. Eugene is home to "Classical Gas" composer and two-time Grammy Award winner Mason Williams, who spent his youth living between his parents' homes in Oakridge, Oregon, and Oklahoma. Williams puts on a yearly Christmas show at the Hult Center for the Performing Arts with a full orchestra, produced by author, audio engineer, and University of Oregon professor Don Latarski. Dick Hyman, noted jazz pianist and musical director for many of Woody Allen's films, designs and hosts the annual Now Hear This! 
jazz festival at the Oregon Festival of American Music (OFAM). OFAM and the Hult Center routinely draw major jazz talent for concerts. Eugene is also home to a large Zimbabwean music community. Kutsinhira Cultural Arts Center, which is "dedicated to the music and people of Zimbabwe," is based in Eugene. Visual arts Eugene's visual arts community is supported by over 20 private art galleries and several organizations, including Maude Kerns Art Center, Lane Arts Council, DIVA (the Downtown Initiative for the Visual Arts) and the Eugene Glass School. In 2015, installations from a group of Eugene-based artists known as Light At Play were showcased in several events around the world as part of the International Year of Light, including displays at the Smithsonian and the National Academy of Sciences. Film The Eugene area has been used as a filming location for several Hollywood films, most famously for 1978's National Lampoon's Animal House, which was also filmed in nearby Cottage Grove. John Belushi had the idea for the film The Blues Brothers during filming of Animal House, when he happened to meet Curtis Salgado at what was then the Eugene Hotel. Getting Straight, starring Elliott Gould and Candice Bergen, was filmed at Lane Community College in 1969. As the campus was still under construction at the time, the "occupation scenes" were easier to shoot. The "Chicken Salad on Toast" scene in the 1970 Jack Nicholson movie Five Easy Pieces was filmed at the Denny's restaurant at the southern I-5 freeway interchange near Glenwood. Nicholson directed the 1971 film Drive, He Said in Eugene. How to Beat the High Co$t of Living, starring Jane Curtin, Jessica Lange and Susan St. James, was filmed in Eugene in the fall of 1979. Locations visible in the film include Valley River Center (which is a driving force in the plot), Skinner Butte and Ya-Po-Ah Terrace, the Willamette River, and River Road Hardware. Several track and field movies have used Eugene as a setting and/or a filming location. Personal Best, starring Mariel Hemingway, was filmed in Eugene in 1982. The film centered on a group of women trying to qualify for the Olympic track and field team. Two track and field movies about the life of Steve Prefontaine, Prefontaine and Without Limits, were released within a year of each other in 1997–1998. Kenny Moore, Eugene-trained Olympic runner and co-star in Prefontaine, co-wrote the screenplay for Without Limits. Prefontaine was filmed in Washington because the Without Limits production bought out Hayward Field for the summer to prevent its competition from shooting there. Kenny Moore also wrote a biography of Bill Bowerman, who was played in Without Limits by Donald Sutherland, who returned to Eugene 20 years after appearing in Animal House. Moore also had a role in Personal Best. Stealing Time, a 2003 independent film, was partially filmed in Eugene. When the film premiered in June 2001 at the Seattle International Film Festival, it was titled Rennie's Landing, after a popular bar near the University of Oregon campus. The title was changed for its DVD release. Zerophilia was filmed in Eugene in 2006. The 2016 film Tracktown is about a distance runner training for the Olympics in Eugene. Religion Religious institutions of higher learning in Eugene include Bushnell University and New Hope Christian College. Bushnell University (formerly Northwest Christian University), founded in 1895, has ties with the Christian Church (Disciples of Christ). 
New Hope Christian College (formerly Eugene Bible College) originated with the Bible Standard Conference in 1915, which joined with the Open Bible Evangelistic Association to create the Open Bible Standard Churches in 1932. Eugene Bible College was started from this movement by Fred Hornshuh in 1925. There are two Eastern Orthodox Church parishes in Eugene: St John the Wonderworker Orthodox Christian Church in the historic Whiteaker neighborhood and Saint George Greek Orthodox Church. There are six Roman Catholic parishes in Eugene as well: St. Mary Catholic Church, St. Jude Catholic Church, St. Mark Catholic Church, St. Peter Catholic Church, St. Paul Catholic Church, and St. Thomas More Catholic Church. Eugene also has a Ukrainian Catholic Church named Nativity of the Mother of God. There is a mainline Protestant contingent in the city as well, including the largest of the Lutheran churches, Central Lutheran near the University of Oregon campus, and the Episcopal Church of the Resurrection. The Eugene area has a sizeable LDS Church presence, with three stakes consisting of 23 congregations (wards and branches). The church announced plans in April 2020 to build a temple in Eugene. The greater Eugene-Springfield area also has a Jehovah's Witnesses presence, with five Kingdom Halls, several housing multiple congregations in one Kingdom Hall. The Reconstructionist Temple Beth Israel is Eugene's largest Jewish congregation. It was also, for many decades, Eugene's only synagogue, until Orthodox members broke away in 1992 and formed "Congregation Ahavas Torah". Eugene has a community of some 140 Sikhs, who have established a Sikh temple. The 340-member congregation of the Unitarian Universalist Church in Eugene (UUCE) purchased the former Eugene Scottish Rite Temple in May 2010, renovated it, and began services there in September 2012. Saraha Nyingma Buddhist Temple in Eugene opened in 2012 in the former site of the Unitarian Universalist Church. The First Congregational Church, UCC, is a large progressive Christian church with a long history of justice-focused ministries and a very active membership. Three years ago, the congregation coordinated with the Connections Program of the St. Vincent de Paul organization to provide transitional homes for two unhoused families on the church's property. Through life-skills support and training and a more stable housing situation, these families are then able to make their way into independent living. Sports Eugene's Oregon Ducks are part of the Pac-12 Conference. American football is especially popular, with intense rivalries between the Ducks and both the Oregon State University Beavers and the University of Washington Huskies. Autzen Stadium, home to Duck football, has a seating capacity of 54,000 but has held over 60,000 with standing room only. The basketball arena, McArthur Court, was built in 1926. The arena was replaced by the Matthew Knight Arena in late 2010. For nearly 40 years, Eugene has been the "Track and Field Capital of the World." Oregon's most famous track icon is the late world-class distance runner Steve Prefontaine, who was killed in a car crash in 1975. Eugene's jogging trails include Pre's Trail in Alton Baker Park, Rexius Trail, the Adidas Oregon Trail, and the Ridgeline Trail. Jogging was introduced to the U.S. through Eugene, brought from New Zealand by Bill Bowerman, who wrote the best-selling book "Jogging" and coached the champion University of Oregon track and cross country teams. 
During Bowerman's tenure, his "Men of Oregon" won 24 individual NCAA titles, including titles in 15 out of the 19 events contested. During Bowerman's 24 years at Oregon, his track teams finished in the top ten at the NCAA championships 16 times, including four team titles (1962, '64, '65, '70), and two second-place trophies. His teams also posted a dual meet record of 114–20. Bowerman also invented the waffle sole for running shoes in Eugene, and with Oregon alumnus Phil Knight founded shoe giant Nike. Eugene's miles of running trails, through its unusually large park system, are the most extensive in the U.S. The city has dozens of running clubs. The climate is cool and temperate, good both for jogging and record-setting. Eugene is home to the University of Oregon's Hayward Field track, which hosts numerous collegiate and amateur track and field meets throughout the year, most notably the Prefontaine Classic. Hayward Field was host to the 2004 AAU Junior Olympic Games, the 1989 World Masters Athletics Championships, the track and field events of the 1998 World Masters Games, the 2006 Pacific-10 track and field championships, the 1971, 1975, 1986, 1993, 1999, 2001, 2009, and 2011 USA Track & Field Outdoor Championships and the 1972, 1976, 1980, 2008, 2012, and 2016 U.S. Olympic trials. On April 16, 2015, it was announced by the IAAF that Eugene had been awarded the right to host the 2021 World Athletics Championships. The city bid for the 2019 event but lost narrowly to Doha, Qatar. Eugene is also home to the Eugene Emeralds, a short-season Class A minor-league baseball team. The "Ems" play their home games in PK Park, also the home of the University of Oregon baseball team. The Nationwide Tour's golfing event Oregon Classic takes place at Shadow Hills Country Club, just north of Eugene. The event has been played every year since 1998, except in 2001 when it was slated to begin the day after the 9/11 terrorist attacks. The top 20 players from the Nationwide Tour are promoted to the PGA Tour for the following year. The Eugene Jr. Generals, a Tier III Junior "A" ice hockey team belonging to the Northern Pacific Hockey League (NPHL) consisting of 8 teams throughout Oregon and Washington, plays at the Lane County Ice Center. Lane United FC, a soccer club that participates in the Northwest Division of USL League Two, was founded in 2013 and plays its home games at Civic Park. The following table lists some sports clubs in Eugene and their usual home venue: Parks and recreation Spencer Butte Park at the southern edge of town provides access to Spencer Butte, a dominant feature of Eugene's skyline. Hendricks Park, situated on a knoll to the east of downtown, is known for its rhododendron garden and nearby memorial to Steve Prefontaine, known as Pre's Rock, where the legendary University of Oregon runner was killed in an auto accident. Alton Baker Park, next to the Willamette River, contains Pre's Trail. Also next to the Willamette are Skinner Butte Park and the Owen Memorial Rose Garden, which contains more than 4,500 roses of over 400 varieties, as well as the 150-year-old Black Tartarian Cherry tree, an Oregon Heritage Tree. The city of Eugene maintains an urban forest. The University of Oregon campus is an arboretum, with over 500 species of trees. The city operates and maintains scenic hiking trails that pass through and across the ridges of a cluster of hills in the southern portion of the city, on the fringe of residential neighborhoods. 
Some trails allow biking, and others are for hikers and runners only. The nearest ski resort, Willamette Pass, is one hour from Eugene by car. On the way, along Oregon Route 58, are several reservoirs and lakes, the Oakridge mountain bike trails, hot springs, and waterfalls within Willamette National Forest. Eugene residents also frequent the Hoodoo and Mount Bachelor ski resorts. The Three Sisters Wilderness, the Oregon Dunes National Recreation Area, and Smith Rock are just a short drive away. Government In 1944, Eugene adopted a council–manager form of government, replacing the day-to-day management of city affairs by the part-time mayor and volunteer city council with a full-time professional city manager. The subsequent history of Eugene city government has largely been one of the often contentious dynamics between the city manager, the mayor, and the city council. According to statute, all Eugene and Lane County elections are officially non-partisan, with a primary containing all candidates in May. If a candidate gets more than 50% of the vote in the primary, they win the election outright; otherwise the top two candidates face off in a November runoff. This allows candidates to win seats during the lower-turnout primary election. The mayor of Eugene is Lucy Vinis, who has been in office since winning the popular vote in May 2016, and who was re-elected in May 2020. Recent mayors include Edwin Cone (1958–69), Les Anderson (1969–77), Gus Keller (1977–84), Brian Obie (1985–88), Jeff Miller (1989–92), Ruth Bascom (1993–96), Jim Torrey (1997–2004), and Kitty Piercy (2005–2017). Eugene City Council: Mayor, Lucy Vinis; Ward 1 – Emily Semple; Ward 2 – Matt Keating; Ward 3 – Alan Zelenka; Ward 4 – Jennifer Yeh; Ward 5 – Mike Clark; Ward 6 – Greg Evans; Ward 7 – Claire Syrett; Ward 8 – Randy Groves. Public safety The Eugene Police Department is the city's law enforcement and public safety agency. The Lane County Sheriff's Office also has its headquarters in Eugene. The University of Oregon is served by the University of Oregon Police Department, and the Eugene Police Department also has a station in the West University District near campus. Lane Community College is served by the Lane Community College Public Safety Department. The Oregon State Police have a presence in the rural areas and highways around the Eugene metro area. The LTD downtown station and the EmX lines are patrolled by LTD transit officers. Since 1989, the crisis-intervention non-governmental agency CAHOOTS has responded to Eugene's mental health 911 calls. Eugene-Springfield Fire Department is the agency responsible for emergency medical services, fire suppression, HAZMAT operations, and water and confined-space rescues in the combined Eugene-Springfield metropolitan area. Eugene used to have an ordinance which prohibited car horn usage for non-driving purposes. After several residents were cited for this offense during the anti-Gulf War demonstrations in January 1991, the city was taken to court, and in 1992 the Oregon Court of Appeals overturned the ordinance, finding it unconstitutionally vague. Eugene City Hall was abandoned in 2012 for reasons of structural integrity, energy efficiency, and obsolete size. Various offices of city government became tenants in eight other buildings. Politics Because Eugene is by far the biggest city in Lane County, its voters almost always single-handedly decide the county's partisan tilt and margins. 
While Eugene has always been a counterculture-heavy, liberal college town, the last two decades have cemented it as a reliable Democratic voting bloc as Oregon's electorate has polarized along an urban-Democratic/rural-Republican split. Lane County voted for Bernie Sanders over eventual 2016 nominee Hillary Clinton by an overall margin of 60.6% to 38.1%, with an even higher Sanders margin within Eugene city limits. Education Eugene is home to the University of Oregon. Other institutions of higher learning include Northwest Christian University, Lane Community College, New Hope Christian College, Gutenberg College, and Pacific University's Eugene campus. Schools The Eugene School District includes four full-service high schools (Churchill, North Eugene, Sheldon, and South Eugene) and many alternative education programs, such as international schools and charter schools. Foreign language immersion programs in the district are available in Spanish, French, Chinese, and Japanese. The Bethel School District serves children in the Bethel neighborhood on the northwest edge of Eugene. The district is home to the traditional Willamette High School and the alternative Kalapuya High School. There are 11 schools in this district. Eugene also has several private schools, including the Eugene Waldorf School, the Outdoor High School, Eugene Montessori, Far Horizon Montessori, Eugene Sudbury School, Wellsprings Friends School, Oak Hill School, and The Little French School. Parochial schools in Eugene include Marist Catholic High School, O'Hara Catholic Elementary School, Eugene Christian School, and St. Paul Parish School. Libraries The largest library in Oregon is the University of Oregon's Knight Library, with collections totaling more than 3 million volumes and over 100,000 audio and video items. The Eugene Public Library moved into a new, larger building downtown in 2002. The four-story library is an increase from . There are also two branches of the Eugene Public Library: the Sheldon Branch Library in the Cal Young/Sheldon neighborhood and the Bethel Branch Library in the Bethel neighborhood. Eugene also has the Lane County Law Library. Media Print The largest newspaper serving the area is The Register-Guard, a daily newspaper with a circulation of about 70,000, published independently by the Baker family of Eugene until 2018, when it was acquired by GateHouse Media. Other newspapers serving the area include the Eugene Weekly; the Emerald, the student-run independent newspaper at the University of Oregon, now published on Mondays and Thursdays; The Torch, the student-run newspaper at Lane Community College; the Ignite, the newspaper at New Hope Christian College; and The Beacon Bolt, the student-run newspaper at Bushnell University. Eugene Magazine, Lifestyle Quarterly, Eugene Living, and Sustainable Home and Garden magazines also serve the area. Adelante Latino is a Spanish-language newspaper in Eugene that serves all of Lane County. Television Local television stations include KMTR (NBC), KVAL (CBS), KLSR-TV (Fox), KEVU-CD, KEZI (ABC), KEPB (PBS), and KTVC (independent). Stations by channel number: KEZI (Channel 9, ABC); KVAL (Channel 13, CBS); KMTR (Channel 16, NBC); KEVU-CD (Channel 23); KEPB (Channel 28, PBS); KLSR (Channel 34, Fox); KTVC (Channel 36, independent); KHWB-LD (Channel 38, TBN). Radio The local NPR affiliates are KOPB and KLCC. Radio station KRVM-AM is an affiliate of Jefferson Public Radio, based at Southern Oregon University. 
The Pacifica Radio affiliate is the University of Oregon student-run radio station, KWVA. Additionally, the community supports two other radio stations: KWAX (classical) and KRVM-FM (alternative). AM stations: KOAC 550 Corvallis – NPR News/Talk (Oregon Public Broadcasting); KUGN 590 Eugene – News/Talk (Cumulus); KXOR 660 Junction City – Spanish Religious (Zion Media); KKNX 840 Eugene – Classic Hits (Mielke Broadcasting); KORE 1050 Springfield – FOX Sports Radio; KPNW 1120 Eugene – News/Talk (Bicoastal Media); KRVM 1280 Eugene – NPR News/Talk (Eugene School District, JPR affiliate); KNND 1400 Cottage Grove – Classic Country (Reiten Communications Inc); KEED 1450 Eugene – Classic Country (Mielke Broadcasting); KOPB 1600 Eugene – NPR News/Talk (Oregon Public Broadcasting).
Greek literature, including Homer, Pindar and Aristophanes. Elizabeth opposed slavery and published two poems highlighting the barbarity of the institution and her support for the abolitionist cause: "The Runaway Slave at Pilgrim's Point"; and "A Curse for a Nation". In "Runaway" she describes an enslaved woman who is whipped, raped, and made pregnant as she curses her enslavers. Elizabeth declared herself glad that the slaves were "virtually free" when the Slavery Abolition Act passed in the British Parliament, despite the fact that her father believed that abolition would ruin his business. The date of publication of these poems is in dispute, but her position on slavery in the poems is clear and may have led to a rift between Elizabeth and her father. She wrote to John Ruskin in 1855 "I belong to a family of West Indian slaveholders, and if I believed in curses, I should be afraid". Her father and uncle were unaffected by the Baptist War (1831–1832) and continued to own slaves until passage of the Slavery Abolition Act. In London, John Kenyon, a distant cousin, introduced Elizabeth to literary figures including William Wordsworth, Mary Russell Mitford, Samuel Taylor Coleridge, Alfred Tennyson and Thomas Carlyle. Elizabeth continued to write, contributing "The Romaunt of Margaret", "The Romaunt of the Page", "The Poet's Vow" and other pieces to various periodicals. She corresponded with other writers, including Mary Russell Mitford, who would become a close friend and who would support Elizabeth's literary ambitions. In 1838 The Seraphim and Other Poems appeared, the first volume of Elizabeth's mature poetry to appear under her own name. Sonnets from the Portuguese was published in 1850. There is debate about the origin of the title. Some say it refers to the series of sonnets of the 16th-century Portuguese poet Luís de Camões. However, "my little Portuguese" was a pet name that Browning had adopted for Elizabeth and this may have some connection. The verse-novel Aurora Leigh, her most ambitious and perhaps the most popular of her longer poems, appeared in 1856. It is the story of a female writer making her way in life, balancing work and love, and based on Elizabeth's own experiences. Aurora Leigh was an important influence on Susan B. Anthony's thinking about the traditional roles of women, with regard to marriage versus independent individuality. The North American Review praised Elizabeth's poem: "Mrs. Browning's poems are, in all respects, the utterance of a woman — of a woman of great learning, rich experience, and powerful genius, uniting to her woman's nature the strength which is sometimes thought peculiar to a man." Spiritual influence Much of Barrett Browning's work carries a religious theme. She had read and studied such works as Milton's Paradise Lost and Dante's Inferno. She says in her writing, "We want the sense of the saturation of Christ's blood upon the souls of our poets, that it may cry through them in answer to the ceaseless wail of the Sphinx of our humanity, expounding agony into renovation. Something of this has been perceived in art when its glory was at the fullest. Something of a yearning after this may be seen among the Greek Christian poets, something which would have been much with a stronger faculty". She believed that "Christ's religion is essentially poetry – poetry glorified". She explored the religious aspect in many of her poems, especially in her early work, such as the sonnets. 
She was interested in theological debate, had learned Hebrew and read the Hebrew Bible. Her seminal Aurora Leigh, for example, features religious imagery and allusion to the apocalypse. The critic Cynthia Scheinberg notes that female characters in Aurora Leigh and her earlier work "The Virgin Mary to the Child Jesus" allude to the female character Miriam from the Hebrew Bible. These allusions to Miriam in both poems mirror the way in which Barrett Browning herself drew from Jewish history, while distancing herself from it, in order to maintain the cultural norms of a Christian woman poet of the Victorian Age. In the correspondence Barrett Browning kept with the Reverend William Merry from 1843 to 1844 on predestination and salvation by works, she identifies herself as a Congregationalist: "I am not a Baptist — but a Congregational Christian, — in the holding of my private opinions." Barrett Browning Institute In 1892, Ledbury, Herefordshire, held a design competition to build an Institute in honour of Barrett Browning. Brightwen Binyon beat 44 other designs. It was based on the timber-framed Market House, which was opposite the site. It was completed in 1896. However, Nikolaus Pevsner was not impressed by its style. In 1938, it became a public library. It has been Grade II-listed since 2007. Critical reception Barrett Browning was widely popular in the United Kingdom and the United States during her lifetime. Edgar Allan Poe was inspired by her poem Lady Geraldine's Courtship and specifically borrowed the poem's metre for his poem The Raven. Poe had reviewed Barrett Browning's work in the January 1845 issue of the Broadway Journal, saying that "her poetic inspiration is the highest – we can conceive of nothing more august. Her sense of Art is pure in itself." In return, she praised The Raven, and Poe dedicated his 1845 collection The Raven and Other Poems to her, referring to her as "the noblest of her sex". Barrett Browning's poetry greatly influenced Emily Dickinson, who admired her as a woman of achievement. Her popularity in the United States and Britain was further advanced by her stands against social injustice, including slavery in the United States, injustice toward Italians from their foreign rulers, and child labour. Lilian Whiting published a biography of Barrett Browning (1899) which describes her as "the most philosophical poet" and depicts her life as "a Gospel of applied Christianity". To Whiting, the term "art for art's sake" did not apply to Barrett Browning's work, as each poem, distinctively purposeful, was borne of a more "honest vision". In this critical analysis, Whiting portrays Barrett Browning as a poet who uses knowledge of Classical literature with an "intuitive gift of spiritual divination". In Elizabeth Barrett Browning, Angela Leighton suggests that the portrayal of Barrett Browning as the "pious iconography of womanhood" has distracted us from her poetic achievements. Leighton cites the 1931 play by Rudolf Besier The Barretts of Wimpole Street as evidence that 20th-century literary criticism of Barrett Browning's work has suffered more as a result of her popularity than poetic ineptitude. The play was popularized by actress Katharine Cornell, for whom it became a signature role. It was an enormous success, both artistically and commercially, and was revived several times and adapted twice into movies. Throughout the 20th century, literary criticism of Barrett Browning's poetry remained sparse until her poems were discovered by the women's movement. 
She once described herself as being inclined to reject several women's rights principles, suggesting in letters to Mary Russell Mitford and her husband that she believed that there was an inferiority of intellect in women. In Aurora Leigh, however, she created a strong and independent woman who embraces both work and love. Leighton writes that because Elizabeth participates in the literary world, where voice and diction are dominated by perceived masculine superiority, she "is defined only in mysterious opposition to everything that distinguishes the male subject who writes..." A five-volume scholarly edition of her works was published in 2010, the first in over a century. Works (collections)
1820: The Battle of Marathon: A Poem. Privately printed
1826: An Essay on Mind, with Other Poems. London: James Duncan
1833: Prometheus Bound, Translated from the Greek of Aeschylus, and Miscellaneous Poems. London: A.J. Valpy
1838: The Seraphim, and Other Poems. London: Saunders and Otley
The critic Cynthia Scheinberg notes that female characters in Aurora Leigh and her earlier work "The Virgin Mary to the Child Jesus" allude to the female character Miriam from the Hebrew Bible. These allusions to Miriam in both poems mirror the way in which Barrett Browning herself drew from Jewish history, while distancing herself from it, in order to maintain the cultural norms of a Christian woman poet of the Victorian Age. In the correspondence Barrett Browning kept with the Reverend William Merry from 1843 to 1844 on predestination and salvation by works, she identifies herself as a Congregationalist: "I am not a Baptist — but a Congregational Christian, — in the holding of my private opinions." Barrett Browning Institute In 1892, Ledbury, Herefordshire, held a design competition to build an Institute in honour of Barrett Browning. Brightwen Binyon beat 44 other designs. It was based on the timber-framed Market House, which was opposite the site. It was completed in 1896. However, Nikolaus Pevsner was not impressed by its style. In 1938, it became a public library. It has been Grade II-listed since 2007. Critical reception Barrett Browning was widely popular in the United Kingdom and the United States during her lifetime. Edgar Allan Poe was inspired by her poem Lady Geraldine's Courtship and specifically borrowed the poem's metre for his poem The Raven. Poe had reviewed Barrett Browning's work in the January 1845 issue of the Broadway Journal, saying that "her poetic inspiration is the highest – we can conceive of nothing more august. Her sense of Art is pure in itself." In return, she praised The Raven, and Poe dedicated his 1845 collection The Raven and Other Poems to her, referring to her as "the noblest of her sex". Barrett Browning's poetry greatly influenced Emily Dickinson, who admired her as a woman of achievement. Her popularity in the United States and Britain was further advanced by her stands against social injustice, including slavery in the United States, injustice toward Italians from their foreign rulers, and child labour. Lilian Whiting published a biography of Barrett Browning (1899) which describes her as "the most philosophical poet" and depicts her life as "a Gospel of applied Christianity". To Whiting, the term "art for art's sake" did not apply to Barrett Browning's work, as each poem, distinctively purposeful, was borne of a more "honest vision". In this critical analysis, Whiting portrays Barrett Browning as a poet who uses knowledge of Classical literature with an "intuitive gift of spiritual divination". In Elizabeth Barrett Browning, Angela Leighton suggests that the portrayal of Barrett Browning as the "pious iconography of womanhood" has distracted us from her poetic achievements. Leighton cites the 1931 play by Rudolf Besier The Barretts of Wimpole Street as evidence that 20th-century literary criticism of Barrett Browning's work has suffered more as a result of her popularity than poetic ineptitude. The play was popularized by actress Katharine Cornell, for whom it became a signature role. It was an enormous success, both artistically and commercially, and was revived several times and adapted twice into movies. Throughout the 20th century, literary criticism of Barrett Browning's poetry remained sparse until her poems were discovered by the women's movement. 
She once described herself as being inclined to reject several women's rights principles, suggesting in letters to Mary Russell Mitford and her husband that she believed that there was an inferiority of intellect in women. In Aurora Leigh, however, she created a strong and independent woman who embraces both work and love. Leighton writes that because Elizabeth participates in the literary world, where voice and diction are dominated by perceived masculine superiority, she "is defined only in mysterious opposition to everything that distinguishes the male subject who writes..." A five-volume scholarly edition of her works was published in 2010, the first in over a century.
Works (collections)
1820: The Battle of Marathon: A Poem. Privately printed
1826: An Essay on Mind, with Other Poems. London: James Duncan
1833: Prometheus Bound, Translated from the Greek of Aeschylus, and Miscellaneous Poems. London: A.J. Valpy
1838: The Seraphim, and Other Poems. London: Saunders and Otley
1844: Poems (UK) / A Drama of Exile, and other Poems (US). London: Edward Moxon. New York: Henry G. Langley
1850: Poems ("New Edition", 2 vols.). Revision of the 1844 edition, adding Sonnets from the Portuguese and others. London: Chapman & Hall
1851: Casa Guidi Windows. London: Chapman & Hall
1853: Poems (3rd ed.). London: Chapman & Hall
1854: Two Poems: "A Plea for the Ragged Schools of London" and "The Twins". London: Bradbury & Evans
1856: Poems (4th ed.). London: Chapman & Hall
1856: Aurora Leigh. London: Chapman & Hall
1860: Poems Before Congress. London: Chapman & Hall
1862: Last Poems. London: Chapman & Hall
Posthumous publications
1863: The Greek Christian Poets and the English Poets. London: Chapman & Hall
1877: The Earlier Poems of Elizabeth Barrett Browning, 1826–1833, ed. Richard Herne Shepherd. London: Bartholomew Robson
1877: Letters of Elizabeth Barrett Browning Addressed to Richard Hengist Horne, with comments on contemporaries, 2 vols., ed. S.R.T. Mayer. London: Richard Bentley & Son
1897: Letters of Elizabeth Barrett Browning, 2 vols., ed. Frederic G. Kenyon. London: Smith, Elder, & Co.
1899: Letters of Robert Browning and Elizabeth Barrett Barrett 1845–1846, 2 vols., ed. Robert W. Barrett Browning. London: Smith, Elder & Co.
1914: New Poems by Robert Browning and Elizabeth Barrett Browning, ed. Frederic G. Kenyon. London: Smith, Elder & Co.
1929: Elizabeth Barrett Browning: Letters to Her Sister, 1846–1859, ed. Leonard Huxley. London: John Murray
1935: Twenty-Two Unpublished Letters of Elizabeth Barrett Browning and Robert Browning to Henrietta and Arabella Moulton Barrett. New York: United Feature Syndicate
1939: Letters from Elizabeth Barrett to B.R. Haydon, ed. Martha Hale Shackford. New York: Oxford University Press
1954: Elizabeth Barrett to Miss Mitford, ed. Betty Miller. London: John Murray
1955: Unpublished Letters of Elizabeth Barrett Browning to Hugh Stuart Boyd, ed. Barbara P. McCarthy. New Haven, Conn.: Yale University Press
1958: Letters of the Brownings to George Barrett, ed. Paul Landis with Ronald E. Freeman. Urbana: University of Illinois Press
1974: Elizabeth Barrett Browning's Letters to Mrs. David Ogilvy, 1849–1861, ed. P. Heydon and P. Kelley. New York: Quadrangle, New York Times Book Co., and Browning Institute
1984: The Brownings' Correspondence, ed. Phillip Kelley, Ronald Hudson, and Scott Lewis. Winfield, Kansas: Wedgestone Press
References
Further reading
Barrett, Robert Assheton. The Barretts of Jamaica – The family of Elizabeth Barrett Browning (1927).
Armstrong Browning Library of Baylor University, Browning Society, Wedgestone Press in Winfield, Kan, 2000. Elizabeth Barrett Browning. "Aurora Leigh and Other Poems", eds. John Robert Glorney Bolton and Julia Bolton Holloway. Harmondsworth: Penguin, 1995. Donaldson, Sandra, et al., eds. The Works of Elizabeth Barrett Browning. 5 vols. London: Pickering & Chatto, 2010. The Complete Works of Elizabeth Barrett Browning, eds. Charlotte Porter and Helen A. Clarke. New York: Thomas Y. Crowell, 1900. Creston, Dormer.
certain numbers were believed to hold special ritual significance. Within this system, Enlil was associated with the number fifty, which was considered sacred to him. Enlil was part of a triad of deities, which also included An and Enki. These three deities together were the embodiment of all the fixed stars in the night sky. An was identified with all the stars of the equatorial sky, Enlil with those of the northern sky, and Enki with those of the southern sky. The path of Enlil's celestial orbit was a continuous, symmetrical circle around the north celestial pole, but those of An and Enki were believed to intersect at various points. Enlil was associated with the constellation Boötes. Mythology Origins myths The main source of information about the Sumerian creation myth is the prologue to the epic poem Gilgamesh, Enkidu, and the Netherworld (ETCSL 1.8.1.4), which briefly describes the process of creation: originally, there was only Nammu, the primeval sea. Then, Nammu gave birth to An, the sky, and Ki, the earth. An and Ki mated with each other, causing Ki to give birth to Enlil. Enlil separated An from Ki and carried off the earth as his domain, while An carried off the sky. Enlil marries his mother, Ki, and from this union all the plant and animal life on earth is produced. Enlil and Ninlil (ETCSL 1.2.1) is a nearly complete 152-line Sumerian poem describing the affair between Enlil and the goddess Ninlil. First, Ninlil's mother Nunbarshegunu instructs Ninlil to go bathe in the river. Ninlil goes to the river, where Enlil seduces her and impregnates her with their son, the moon-god Nanna. Because of this, Enlil is banished to Kur, the Sumerian underworld. Ninlil follows Enlil to the underworld, where he impersonates the "man of the gate". Ninlil demands to know where Enlil has gone, but Enlil, still impersonating the gatekeeper, refuses to answer. He then seduces Ninlil and impregnates her with Nergal, the god of death. The same scenario repeats, only this time Enlil instead impersonates the "man of the river of the nether world, the man-devouring river"; once again, he seduces Ninlil and impregnates her with the god Ninazu. Finally, Enlil impersonates the "man of the boat"; once again, he seduces Ninlil and impregnates her with Enbilulu, the "inspector of the canals". The story of Enlil's courtship with Ninlil is primarily a genealogical myth invented to explain the origins of the moon-god Nanna, as well as the various gods of the Underworld, but it is also, to some extent, a coming-of-age story describing Enlil and Ninlil's emergence from adolescence into adulthood. The story also explains Ninlil's role as Enlil's consort; in the poem, Ninlil declares, "As Enlil is your master, so am I also your mistress!" The story is also historically significant because, if the current interpretation of it is correct, it is the oldest known myth in which a god changes shape. Flood myth In the Sumerian version of the flood story (ETCSL 1.7.4), the causes of the flood are unclear because the portion of the tablet recording the beginning of the story has been destroyed. Somehow, a mortal known as Ziusudra manages to survive the flood, likely through the help of the god Enki. The tablet begins in the middle of the description of the flood. The flood lasts for seven days and seven nights before it subsides. Then, Utu, the god of the Sun, emerges. Ziusudra opens a window in the side of the boat and falls down prostrate before the god. Next, he sacrifices an ox and a sheep in honor of Utu. 
At this point, the text breaks off again. When it picks back up, Enlil and An are in the midst of declaring Ziusudra immortal as an honor for having managed to survive the flood. The remaining portion of the tablet after this point is destroyed. In the later Akkadian version of the flood story, recorded in the Epic of Gilgamesh, Enlil actually causes the flood, seeking to annihilate every living thing on earth because the humans, who are vastly overpopulated, make too much noise and prevent him from sleeping. In this version of the story, the hero is Utnapishtim, who is warned ahead of time by Ea, the Babylonian equivalent of Enki, that the flood is coming. The flood lasts for seven days; when it ends, Ishtar, who had mourned the destruction of humanity, promises Utnapishtim that Enlil will never cause a flood again. When Enlil sees that Utnapishtim and his family have survived, he is outraged, but his son Ninurta speaks up in favor of humanity, arguing that, instead of causing floods, Enlil should simply ensure that humans never become overpopulated by reducing their numbers using wild animals and famines. Enlil goes into the boat; Utnapishtim and his wife bow before him. Enlil, now appeased, grants Utnapishtim immortality as a reward for his loyalty to the gods. Chief god and arbitrator A nearly complete 108-line poem from the Early Dynastic Period (c. 2900–2350 BC) describes Enlil's invention of the mattock, a key agricultural pick, hoe, ax, or digging tool of the Sumerians. In the poem, Enlil conjures the mattock into existence and decrees its fate. The mattock is described as gloriously beautiful; it is made of pure gold and its head is carved from lapis lazuli. Enlil gives the tool over to the humans, who use it to build cities, subjugate their people, and pull up weeds. Enlil was believed to aid in the growth of plants. The Sumerian poem Enlil Chooses the Farmer-God (ETCSL 5.3.3) describes how Enlil, hoping "to establish abundance and prosperity", creates two gods, Emesh and Enten, a shepherd and a farmer respectively. The two gods argue and Emesh lays claim to Enten's position. They take the dispute before Enlil, who rules in favor of Enten; the two gods rejoice and reconcile. Ninurta myths In the Sumerian poem Lugale (ETCSL 1.6.2), Enlil gives advice to his son, the god Ninurta, advising him on a strategy to slay the demon Asag. This advice is relayed to Ninurta by way of Sharur, his enchanted talking mace, which had been sent by Ninurta to the realm of the gods to seek counsel from Enlil directly. In the Old, Middle, and Late Babylonian myth of Anzû and the Tablet of Destinies, the Anzû, a giant, monstrous bird, betrays Enlil and steals the Tablet of Destinies, a sacred clay tablet belonging to Enlil that grants him his authority, while Enlil is preparing for a bath. The rivers dry up and the gods are stripped of their powers. The gods send Adad, Gerra, and Shara to defeat the Anzû, but all of them fail. Finally, Ea proposes that the gods should send Ninurta, Enlil's son. Ninurta successfully defeats the Anzû and returns the Tablet of Destinies to his father. As a reward, Ninurta is granted a prominent seat on the council of the gods. War of the gods A badly damaged text from the Neo-Assyrian Period (911–612 BC) describes Marduk leading his army of Anunnaki into the sacred city of Nippur and causing a disturbance. The disturbance causes a flood, which forces the resident gods of Nippur under the leadership of Enlil to take shelter in the Eshumesha temple to Ninurta.
Enlil is enraged at Marduk's transgression and orders the gods of Eshumesha to take Marduk and the other Anunnaki as prisoners. The Anunnaki are captured, but Marduk appoints his front-runner Mushteshirhablim to lead a revolt against the gods of Eshumesha and sends his messenger Neretagmil to alert Nabu, the god of literacy. When the Eshumesha gods hear Nabu speak, they come out of their temple to search for him. Marduk defeats the Eshumesha gods and takes 360 of them as prisoners of war, including Enlil himself. Enlil protests that the Eshumesha gods are innocent, so Marduk puts them on trial before the Anunnaki. The text ends with a warning from Damkianna (another name for Ninhursag) to the gods and to humanity, pleading with them not to repeat the war between the Anunnaki and the gods of Eshumesha. See also Ancient Mesopotamian religion El Hymn to Enlil Shu (Egyptian god) Yahweh References Notes Citations Bibliography External links Ancient Mesopotamian Gods and Goddesses: Enlil/Ellil (god) Gateway to Babylon: "Enlil and Ninlil", trans. Thorkild Jacobsen. Electronic Text Corpus of Sumerian Literature: "Enlil and
established by Enlil himself. It was believed to be the "mooring-rope" of heaven and earth, meaning that it was seen as "a channel of communication between earth and heaven". A hymn written during the reign of Ur-Nammu, the founder of the Third Dynasty of Ur, describes the E-kur in great detail, stating that its gates were carved with scenes of Imdugud, a lesser deity sometimes shown as a giant bird, slaying a lion and an eagle snatching up a sinner. The Sumerians believed that the sole purpose of humanity's existence was to serve the gods. They thought that a god's statue was a physical embodiment of the god himself. As such, cult statues were given constant care and attention and a set of priests were assigned to tend to them. People worshipped Enlil by offering food and other human necessities to him. The food, which was ritually laid out before the god's cult statue in the form of a feast, was believed to be Enlil's daily meal, but, after the ritual, it would be distributed among his priests. These priests were also responsible for changing the cult statue's clothing. The Sumerians envisioned Enlil as a benevolent, fatherly deity, who watches over humanity and cares for their well-being. One Sumerian hymn describes Enlil as so glorious that even the other gods could not look upon him. The same hymn also states that, without Enlil, civilization could not exist. Enlil's epithets include titles such as "the Great Mountain" and "King of the Foreign Lands". Enlil is also sometimes described as a "raging storm", a "wild bull", and a "merchant". The Mesopotamians envisioned him as a creator, a father, a king, and the supreme lord of the universe. He was also known as "Nunamnir" and is referred to in at least one text as the "East Wind and North Wind". Kings regarded Enlil as a model ruler and sought to emulate his example. Enlil was said to be supremely just and intolerant towards evil. Rulers from all over Sumer would travel to Enlil's temple in Nippur to be legitimized. They would return Enlil's favor by devoting lands and precious objects to his temple as offerings. Nippur was the only Sumerian city-state that never built a palace; this was intended to symbolize the city's importance as the center of the cult of Enlil by showing that Enlil himself was the city's king. Even during the Babylonian Period, when Marduk had superseded Enlil as the supreme god, Babylonian kings still traveled to the holy city of Nippur to seek recognition of their right to rule. Enlil first rose to prominence during the twenty-fourth century BC, when the importance of the god An began to wane. During this time period, Enlil and An are frequently invoked together in inscriptions. Enlil remained the supreme god in Mesopotamia throughout the Amorite Period, with Amorite monarchs proclaiming Enlil as the source of their legitimacy. Enlil's importance began to wane after the Babylonian king Hammurabi conquered Sumer. The Babylonians worshipped Enlil under the name "Elil" and the Hurrians syncretized him with their own god Kumarbi. In one Hurrian ritual, Enlil and Apantu are invoked as "the father and mother of Išḫara". Enlil is also invoked alongside Ninlil as a member of "the mighty and firmly established gods". During the Kassite Period ( 1592 BC – 1155 BC), Nippur briefly managed to regain influence in the region and Enlil rose to prominence once again. From around 1300 BC onwards, Enlil was syncretized with the Assyrian national god Aššur, who was the most important deity in the Assyrian pantheon. 
Then, in 1230 BC, the Elamites attacked Nippur and the city fell into decline, taking the cult of Enlil along with it. Approximately one hundred years later, Enlil's role as the head of the pantheon was given to Marduk, the national god of the Babylonians. Enlil's importance in the pantheon significantly declined and he was sometimes assimilated as merely an aspect of Marduk. Nonetheless, his temples continued functioning throughout the Neo-Assyrian period (911–609 BC) and even the Babylonians saw Anu and Enlil as the ones who bestowed Marduk with his powers. During the first millennium BC, the Babylonians worshipped a deity under the title "Bel", meaning "lord", who was a syncretization of Enlil, Marduk, and the dying god Dumuzid. Bel held all the cultic titles of Enlil and his status in the Babylonian religion was largely the same. Eventually, Bel came to be seen as the god of order and destiny. Meanwhile, Aššur continued to be known as "the Assyrian Enlil" or "the Enlil of the gods". After the collapse of the Neo-Assyrian Empire, Enlil's statues were smashed and his temples were destroyed because he had become inextricably associated with the Assyrians, whom many conquered peoples hated. Enlil continued to be venerated under the name of Marduk until around 141 BC, when the cult of Marduk fell into terminal decline, and was eventually largely forgotten. Iconography Enlil was not represented anthropomorphically in Mesopotamian iconography. Instead, he was represented by a horned cap, which consisted of up to seven superimposed pairs of ox-horns. Such crowns were an important symbol of divinity; gods had been shown wearing them ever since the third millennium BC. The horned cap remained consistent in form and meaning from the earliest days of Sumerian prehistory up until the time of the Persian conquest and beyond.
human and oceanic microbiomes. To a microbe, the human body is a habitat and a landscape. Microbiomes were discovered largely through advances in molecular genetics, which have revealed a hidden richness of microbial diversity on the planet. The oceanic microbiome plays a significant role in the ecological biogeochemistry of the planet's oceans. Biosphere The largest scale of ecological organization is the biosphere: the total sum of ecosystems on the planet. Ecological relationships regulate the flux of energy, nutrients, and climate all the way up to the planetary scale. For example, the dynamic history of the planetary atmosphere's CO₂ and O₂ composition has been affected by the biogenic flux of gases coming from respiration and photosynthesis, with levels fluctuating over time in relation to the ecology and evolution of plants and animals. Ecological theory has also been used to explain self-emergent regulatory phenomena at the planetary scale: for example, the Gaia hypothesis is an example of holism applied in ecological theory. The Gaia hypothesis states that there is an emergent feedback loop generated by the metabolism of living organisms that maintains the core temperature of the Earth and atmospheric conditions within a narrow self-regulating range of tolerance. Population ecology Population ecology studies the dynamics of species populations and how these populations interact with the wider environment. A population consists of individuals of the same species that live, interact, and migrate through the same niche and habitat. A primary law of population ecology is the Malthusian growth model which states, "a population will grow (or decline) exponentially as long as the environment experienced by all individuals in the population remains constant." Simplified population models usually start with four variables: death, birth, immigration, and emigration. An example of an introductory population model describes a closed population, such as on an island, where immigration and emigration do not take place. Hypotheses are evaluated with reference to a null hypothesis which states that random processes create the observed data. In these island models, the rate of population change is described by dN/dt = bN − dN = (b − d)N = rN, where N is the total number of individuals in the population, b and d are the per capita rates of birth and death respectively, and r is the per capita rate of population change. Using these modeling techniques, Malthus' population principle of growth was later transformed into a model known as the logistic equation by Pierre Verhulst: dN(t)/dt = rN(t)(1 − αN(t)), where N(t) is the number of individuals measured as biomass density as a function of time, t, r is the maximum per-capita rate of change commonly known as the intrinsic rate of growth, and α is the crowding coefficient, which represents the reduction in population growth rate per individual added. The formula states that the rate of change in population size (dN(t)/dt) will grow to approach equilibrium (dN(t)/dt = 0) when the rates of increase and crowding are balanced, rN(t) = αrN(t)². A common, analogous model fixes the equilibrium as K, which is known as the "carrying capacity." Population ecology builds upon these introductory models to further understand demographic processes in real study populations. Commonly used types of data include life history, fecundity, and survivorship, and these are analysed using mathematical techniques such as matrix algebra. The information is used for managing wildlife stocks and setting harvest quotas.
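The exponential and logistic models above lend themselves to a short numerical illustration. The sketch below integrates both with a simple Euler step; the parameter values (r, the crowding coefficient α, the initial population, and the step size) are arbitrary choices for illustration, not taken from the text.

```python
# Minimal sketch of the exponential and logistic growth models described above.
# Parameter values (r, alpha, n0, dt) are illustrative assumptions, not from the text.

def exponential_growth(n, r):
    """dN/dt = rN: density-independent (Malthusian) growth."""
    return r * n

def logistic_growth(n, r, alpha):
    """dN/dt = rN(1 - alpha*N): growth slows as crowding increases.
    The equilibrium N = 1/alpha plays the role of the carrying capacity K."""
    return r * n * (1.0 - alpha * n)

def simulate(rate_fn, n0, steps, dt, **params):
    """Integrate a growth-rate function with a simple Euler step."""
    n = n0
    trajectory = [n]
    for _ in range(steps):
        n += rate_fn(n, **params) * dt
        trajectory.append(n)
    return trajectory

if __name__ == "__main__":
    r, alpha = 0.5, 0.001          # per-capita growth rate and crowding coefficient
    exp_traj = simulate(exponential_growth, n0=10, steps=40, dt=0.5, r=r)
    log_traj = simulate(logistic_growth, n0=10, steps=40, dt=0.5, r=r, alpha=alpha)
    print("exponential, final N:", round(exp_traj[-1], 1))
    print("logistic,    final N:", round(log_traj[-1], 1), "(approaches K = 1/alpha =", 1 / alpha, ")")
```

Run for long enough, the exponential trajectory grows without bound while the logistic trajectory levels off near 1/α, which is the behaviour the equilibrium condition above describes.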
In cases where basic models are insufficient, ecologists may adopt different kinds of statistical methods, such as the Akaike information criterion, or use models that can become mathematically complex as "several competing hypotheses are simultaneously confronted with the data." Metapopulations and migration The concept of metapopulations was defined in 1969 as "a population of populations which go extinct locally and recolonize". Metapopulation ecology is another statistical approach that is often used in conservation research. Metapopulation models simplify the landscape into patches of varying levels of quality, and metapopulations are linked by the migratory behaviours of organisms. Animal migration is set apart from other kinds of movement because it involves the seasonal departure and return of individuals from a habitat. Migration is also a population-level phenomenon, as with the migration routes followed by plants as they occupied northern post-glacial environments. Plant ecologists use pollen records that accumulate and stratify in wetlands to reconstruct the timing of plant migration and dispersal relative to historic and contemporary climates. These migration routes involved an expansion of the range as plant populations expanded from one area to another. There is a larger taxonomy of movement, such as commuting, foraging, territorial behaviour, stasis, and ranging. Dispersal is usually distinguished from migration because it involves the one-way permanent movement of individuals from their birth population into another population. In metapopulation terminology, migrating individuals are classed as emigrants (when they leave a region) or immigrants (when they enter a region), and sites are classed either as sources or sinks. A site is a generic term that refers to places where ecologists sample populations, such as ponds or defined sampling areas in a forest. Source patches are productive sites that generate a seasonal supply of juveniles that migrate to other patch locations. Sink patches are unproductive sites that only receive migrants; the population at the site will disappear unless rescued by an adjacent source patch or environmental conditions become more favourable. Metapopulation models examine patch dynamics over time to answer potential questions about spatial and demographic ecology. The ecology of metapopulations is a dynamic process of extinction and colonization. Small patches of lower quality (i.e., sinks) are maintained or rescued by a seasonal influx of new immigrants. A dynamic metapopulation structure evolves from year to year, where some patches are sinks in dry years and are sources when conditions are more favourable. Ecologists use a mixture of computer models and field studies to explain metapopulation structure. Community ecology Community ecology is the study of the interactions among a collection of species that inhabit the same geographic area. Community ecologists study the determinants of patterns and processes for two or more interacting species. Research in community ecology might measure species diversity in grasslands in relation to soil fertility. It might also include the analysis of predator-prey dynamics, competition among similar plant species, or mutualistic interactions between crabs and corals. Ecosystem ecology Ecosystems may be habitats within biomes that form an integrated whole and a dynamically responsive system having both physical and biological complexes. 
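Before moving on to ecosystem ecology, the metapopulation dynamics described above can be made concrete. The 1969 "population of populations" concept quoted earlier is commonly formalized with the classic Levins patch-occupancy model, dp/dt = cp(1 − p) − ep, where p is the fraction of patches occupied, c the colonization rate, and e the local extinction rate. The sketch below is a minimal illustration of that model; the rate values are assumptions chosen for the example, not drawn from the text.

```python
# Sketch of the classic Levins patch-occupancy model often used to formalize
# the metapopulation concept: dp/dt = c*p*(1 - p) - e*p.
# Colonization (c) and extinction (e) rates below are illustrative assumptions.

def occupancy_change(p, c, e):
    """Rate of change in the fraction of occupied habitat patches."""
    return c * p * (1.0 - p) - e * p

def run(p0=0.1, c=0.4, e=0.1, dt=0.1, steps=500):
    """Step the occupancy forward with a simple Euler integration."""
    p = p0
    for _ in range(steps):
        p += occupancy_change(p, c, e) * dt
    return p

if __name__ == "__main__":
    # The model predicts a persistent metapopulation (p* = 1 - e/c) only when c > e;
    # otherwise occupancy decays toward regional extinction.
    print("equilibrium occupancy (c=0.4, e=0.1):", round(run(c=0.4, e=0.1), 3))   # ~0.75
    print("equilibrium occupancy (c=0.1, e=0.4):", round(run(c=0.1, e=0.4), 3))   # ~0.0
```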
Ecosystem ecology is the science of determining the fluxes of materials (e.g., carbon, phosphorus) between different pools (e.g., tree biomass, soil organic material). Ecosystem ecologists attempt to determine the underlying causes of these fluxes. Research in ecosystem ecology might measure primary production (g C/m²) in a wetland in relation to decomposition and consumption rates (g C/m²/yr). This requires an understanding of the community connections between plants (i.e., primary producers) and the decomposers (e.g., fungi and bacteria). The underlying concept of an ecosystem can be traced back to 1864 in the published work of George Perkins Marsh ("Man and Nature"). Within an ecosystem, organisms are linked to the physical and biological components of their environment to which they are adapted. Ecosystems are complex adaptive systems where the interaction of life processes forms self-organizing patterns across different scales of time and space. Ecosystems are broadly categorized as terrestrial, freshwater, atmospheric, or marine. Differences stem from the nature of the unique physical environments that shape the biodiversity within each. A more recent addition to ecosystem ecology is technoecosystems, which are affected by or primarily the result of human activity. Food webs A food web is the archetypal ecological network. Plants capture solar energy and use it to synthesize simple sugars during photosynthesis. As plants grow, they accumulate nutrients and are eaten by grazing herbivores, and the energy is transferred through a chain of organisms by consumption. The simplified linear feeding pathway that moves from a basal trophic species to a top consumer is called a food chain. The larger interlocking pattern of food chains in an ecological community creates a complex food web. Food webs are a type of concept map or a heuristic device that is used to illustrate and study pathways of energy and material flows. Food webs are often limited relative to the real world. Complete empirical measurements are generally restricted to a specific habitat, such as a cave or a pond, and principles gleaned from food web microcosm studies are extrapolated to larger systems. Feeding relations require extensive investigations into the gut contents of organisms, which can be difficult to decipher, or stable isotopes can be used to trace the flow of nutrient diets and energy through a food web. Despite these limitations, food webs remain a valuable tool in understanding community ecosystems. Food webs exhibit principles of ecological emergence through the nature of trophic relationships: some species have many weak feeding links (e.g., omnivores) while some are more specialized with fewer stronger feeding links (e.g., primary predators). Theoretical and empirical studies identify non-random emergent patterns of few strong and many weak linkages that explain how ecological communities remain stable over time. Food webs are composed of subgroups where members in a community are linked by strong interactions, and the weak interactions occur between these subgroups. This increases food web stability. Step by step, lines or relations are drawn until a web of life is illustrated. Trophic levels A trophic level (from Greek τροφή, trophē, meaning "food" or "feeding") is "a group of organisms acquiring a considerable majority of its energy from the lower adjacent level (according to ecological pyramids) nearer the abiotic source."
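The trophic-level definition just quoted can be made concrete with a toy example: basal species (autotrophs and other basal resources) are assigned level 1, and each consumer sits one step above the average level of its prey. The small food web below is invented purely for illustration; the species names and feeding links are assumptions, not taken from the text.

```python
# Toy food web (invented for illustration): each species maps to the list of
# things it eats. Basal species (no prey) get trophic level 1; each consumer
# gets 1 + the mean trophic level of its prey. Assumes the web has no cycles.

food_web = {
    "kelp": [],
    "phytoplankton": [],
    "zooplankton": ["phytoplankton"],
    "sea urchin": ["kelp"],
    "small fish": ["zooplankton"],
    "sea otter": ["sea urchin", "small fish"],
}

def trophic_level(species, web, cache=None):
    """Recursively compute a species' trophic level in an acyclic food web."""
    if cache is None:
        cache = {}
    if species in cache:
        return cache[species]
    prey = web[species]
    if not prey:
        level = 1.0  # autotrophs / basal resources
    else:
        level = 1.0 + sum(trophic_level(p, web, cache) for p in prey) / len(prey)
    cache[species] = level
    return level

if __name__ == "__main__":
    for sp in food_web:
        print(f"{sp:14s} trophic level {trophic_level(sp, food_web):.2f}")
```

In this invented web the otter ends up at a fractional level (3.5) precisely because it is an omnivorous consumer of prey at different levels, which is the complication the trophic-level discussion that follows returns to.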
Links in food webs primarily connect feeding relations or trophism among species. Biodiversity within ecosystems can be organized into trophic pyramids, in which the vertical dimension represents feeding relations that become further removed from the base of the food chain up toward top predators, and the horizontal dimension represents the abundance or biomass at each level. When the relative abundance or biomass of each species is sorted into its respective trophic level, the levels naturally sort into a 'pyramid of numbers'. Species are broadly categorized as autotrophs (or primary producers), heterotrophs (or consumers), and detritivores (or decomposers). Autotrophs are organisms that produce their own food (production is greater than respiration) by photosynthesis or chemosynthesis. Heterotrophs are organisms that must feed on others for nourishment and energy (respiration exceeds production). Heterotrophs can be further sub-divided into different functional groups, including primary consumers (strict herbivores), secondary consumers (carnivorous predators that feed exclusively on herbivores), and tertiary consumers (predators that feed on a mix of herbivores and predators). Omnivores do not fit neatly into a functional category because they eat both plant and animal tissues. It has been suggested that omnivores have a greater functional influence as predators because, compared to herbivores, they are relatively inefficient at grazing. Trophic levels are part of the holistic or complex systems view of ecosystems. Each trophic level contains unrelated species that are grouped together because they share common ecological functions, giving a macroscopic view of the system. While the notion of trophic levels provides insight into energy flow and top-down control within food webs, it is troubled by the prevalence of omnivory in real ecosystems. This has led some ecologists to "reiterate that the notion that species clearly aggregate into discrete, homogeneous trophic levels is fiction." Nonetheless, recent studies have shown that real trophic levels do exist, but "above the herbivore trophic level, food webs are better characterized as a tangled web of omnivores." Keystone species A keystone species is a species that is connected to a disproportionately large number of other species in the food web. Keystone species have lower levels of biomass in the trophic pyramid relative to the importance of their role. The many connections that a keystone species holds mean that it maintains the organization and structure of entire communities. The loss of a keystone species results in a range of dramatic cascading effects that alter trophic dynamics and other food web connections, and can cause the extinction of other species. Sea otters (Enhydra lutris) are commonly cited as an example of a keystone species because they limit the density of sea urchins that feed on kelp. If sea otters are removed from the system, the urchins graze until the kelp beds disappear, and this has a dramatic effect on community structure. Hunting of sea otters, for example, is thought to have led indirectly to the extinction of the Steller's sea cow (Hydrodamalis gigas). While the keystone species concept has been used extensively as a conservation tool, it has been criticized for being poorly defined from an operational stance. It is difficult to experimentally determine what species may hold a keystone role in each ecosystem.
Furthermore, food web theory suggests that keystone species may not be common, so it is unclear how generally the keystone species model can be applied. Complexity Complexity is understood as a large computational effort needed to piece together numerous interacting parts exceeding the iterative memory capacity of the human mind. Global patterns of biological diversity are complex. This biocomplexity stems from the interplay among ecological processes that operate and influence patterns at different scales that grade into each other, such as transitional areas or ecotones spanning landscapes. Complexity stems from the interplay among levels of biological organization as energy, and matter is integrated into larger units that superimpose onto the smaller parts. "What were wholes on one level become parts on a higher one." Small scale patterns do not necessarily explain large scale phenomena, otherwise captured in the expression (coined by Aristotle) 'the sum is greater than the parts'. "Complexity in ecology is of at least six distinct types: spatial, temporal, structural, process, behavioral, and geometric." From these principles, ecologists have identified emergent and self-organizing phenomena that operate at different environmental scales of influence, ranging from molecular to planetary, and these require different explanations at each integrative level. Ecological complexity relates to the dynamic resilience of ecosystems that transition to multiple shifting steady-states directed by random fluctuations of history. Long-term ecological studies provide important track records to better understand the complexity and resilience of ecosystems over longer temporal and broader spatial scales. These studies are managed by the International Long Term Ecological Network (LTER). The longest experiment in existence is the Park Grass Experiment, which was initiated in 1856. Another example is the Hubbard Brook study, which has been in operation since 1960. Holism Holism remains a critical part of the theoretical foundation in contemporary ecological studies. Holism addresses the biological organization of life that self-organizes into layers of emergent whole systems that function according to non-reducible properties. This means that higher-order patterns of a whole functional system, such as an ecosystem, cannot be predicted or understood by a simple summation of the parts. "New properties emerge because the components interact, not because the basic nature of the components is changed." Ecological studies are necessarily holistic as opposed to reductionistic. Holism has three scientific meanings or uses that identify with ecology: 1) the mechanistic complexity of ecosystems, 2) the practical description of patterns in quantitative reductionist terms where correlations may be identified but nothing is understood about the causal relations without reference to the whole system, which leads to 3) a metaphysical hierarchy whereby the causal relations of larger systems are understood without reference to the smaller parts. Scientific holism differs from mysticism that has appropriated the same term. An example of metaphysical holism is identified in the trend of increased exterior thickness in shells of different species. The reason for a thickness increase can be understood through reference to principles of natural selection via predation without the need to reference or understand the biomolecular properties of the exterior shells. 
Relation to evolution Ecology and evolutionary biology are considered sister disciplines of the life sciences. Natural selection, life history, development, adaptation, populations, and inheritance are examples of concepts that thread equally into ecological and evolutionary theory. Morphological, behavioural, and genetic traits, for example, can be mapped onto evolutionary trees to study the historical development of a species in relation to their functions and roles in different ecological circumstances. In this framework, the analytical tools of ecologists and evolutionists overlap as they organize, classify, and investigate life through common systematic principles, such as phylogenetics or the Linnaean system of taxonomy. The two disciplines often appear together, such as in the title of the journal Trends in Ecology and Evolution. There is no sharp boundary separating ecology from evolution, and they differ more in their areas of applied focus. Both disciplines discover and explain emergent and unique properties and processes operating across different spatial or temporal scales of organization. While the boundary between ecology and evolution is not always clear, ecologists study the abiotic and biotic factors that influence evolutionary processes, and evolution can be rapid, occurring on ecological timescales as short as one generation. Behavioural ecology All organisms can exhibit behaviours. Even plants express complex behaviour, including memory and communication. Behavioural ecology is the study of an organism's behaviour in its environment and its ecological and evolutionary implications. Ethology is the study of observable movement or behaviour in animals. This could include investigations of motile sperm of plants, mobile phytoplankton, zooplankton swimming toward the female egg, the cultivation of fungi by weevils, the mating dance of a salamander, or social gatherings of amoeba. Adaptation is the central unifying concept in behavioural ecology. Behaviours can be recorded as traits and inherited in much the same way that eye and hair colour can. Behaviours can evolve by means of natural selection as adaptive traits conferring functional utilities that increases reproductive fitness. Predator-prey interactions are an introductory concept into food-web studies as well as behavioural ecology. Prey species can exhibit different kinds of behavioural adaptations to predators, such as avoid, flee, or defend. Many prey species are faced with multiple predators that differ in the degree of danger posed. To be adapted to their environment and face predatory threats, organisms must balance their energy budgets as they invest in different aspects of their life history, such as growth, feeding, mating, socializing, or modifying their habitat. Hypotheses posited in behavioural ecology are generally based on adaptive principles of conservation, optimization, or efficiency. For example, "[t]he threat-sensitive predator avoidance hypothesis predicts that prey should assess the degree of threat posed by different predators and match their behaviour according to current levels of risk" or "[t]he optimal flight initiation distance occurs where expected postencounter fitness is maximized, which depends on the prey's initial fitness, benefits obtainable by not fleeing, energetic escape costs, and expected fitness loss due to predation risk." Elaborate sexual displays and posturing are encountered in the behavioural ecology of animals. 
The birds-of-paradise, for example, sing and display elaborate ornaments during courtship. These displays serve a dual purpose of signalling healthy or well-adapted individuals and desirable genes. The displays are driven by sexual selection as an advertisement of quality of traits among suitors. Cognitive ecology Cognitive ecology integrates theory and observations from evolutionary ecology and neurobiology, primarily cognitive science, in order to understand the effect that animals' interaction with their habitat has on their cognitive systems and how those systems restrict behavior within an ecological and evolutionary framework. "Until recently, however, cognitive scientists have not paid sufficient attention to the fundamental fact that cognitive traits evolved under particular natural settings. With consideration of the selection pressure on cognition, cognitive ecology can contribute intellectual coherence to the multidisciplinary study of cognition." As a study involving the 'coupling' or interactions between organism and environment, cognitive ecology is closely related to enactivism, a field based upon the view that "...we must see the organism and environment as bound together in reciprocal specification and selection...". Social ecology Social-ecological behaviours are notable in the social insects, slime moulds, social spiders, human society, and naked mole-rats, where eusociality has evolved. Social behaviours include reciprocally beneficial behaviours among kin and nest mates and evolve from kin and group selection. Kin selection explains altruism through genetic relationships, whereby an altruistic behaviour leading to death is rewarded by the survival of genetic copies distributed among surviving relatives. The social insects, including ants, bees, and wasps, are most famously studied for this type of relationship because the male drones are clones that share the same genetic make-up as every other male in the colony. In contrast, group selectionists find examples of altruism among non-genetic relatives and explain this through selection acting on the group, whereby it becomes selectively advantageous for groups if their members express altruistic behaviours to one another. Groups with predominantly altruistic members survive better than groups with predominantly selfish members. Coevolution Ecological interactions can be classified broadly into a host and an associate relationship. A host is any entity that harbours another that is called the associate. Relationships between species that are mutually or reciprocally beneficial are called mutualisms. Examples of mutualism include fungus-growing ants employing agricultural symbiosis, bacteria living in the guts of insects and other organisms, the fig wasp and yucca moth pollination complex, lichens with fungi and photosynthetic algae, and corals with photosynthetic algae. If there is a physical connection between host and associate, the relationship is called symbiosis. Approximately 60% of all plants, for example, have a symbiotic relationship with arbuscular mycorrhizal fungi living in their roots, forming an exchange network of carbohydrates for mineral nutrients. Indirect mutualisms occur where the organisms live apart. For example, trees living in the equatorial regions of the planet supply oxygen into the atmosphere that sustains species living in distant polar regions of the planet.
This relationship is called commensalism because many others receive the benefits of clean air at no cost or harm to the trees supplying the oxygen. If the associate benefits while the host suffers, the relationship is called parasitism. Although parasites impose a cost on their host (e.g., via damage to their reproductive organs or propagules, denying the services of a beneficial partner), their net effect on host fitness is not necessarily negative and, thus, becomes difficult to forecast. Co-evolution is also driven by competition among species or among members of the same species under the banner of reciprocal antagonism, such as grasses competing for growth space. The Red Queen Hypothesis, for example, posits that parasites track down and specialize on the locally common genetic defense systems of their hosts, which drives the evolution of sexual reproduction to diversify the genetic constituency of populations responding to the antagonistic pressure. Biogeography Biogeography (an amalgamation of biology and geography) is the comparative study of the geographic distribution of organisms and the corresponding evolution of their traits in space and time. The Journal of Biogeography was established in 1974. Biogeography and ecology share many of their disciplinary roots. For example, the theory of island biogeography, published by Robert MacArthur and Edward O. Wilson in 1967, is considered one of the fundamentals of ecological theory. Biogeography has a long history in the natural sciences concerning the spatial distribution of plants and animals. Ecology and evolution provide the explanatory context for biogeographical studies. Biogeographical patterns result from ecological processes that influence range distributions, such as migration and dispersal, and from historical processes that split populations or species into different areas. The biogeographic processes that result in the natural splitting of species explain much of the modern distribution of the Earth's biota. The splitting of lineages in a species is called vicariance biogeography and it is a sub-discipline of biogeography. There are also practical applications in the field of biogeography concerning ecological systems and processes. For example, the range and distribution of biodiversity and invasive species responding to climate change is a serious concern and active area of research in the context of global warming. r/K selection theory A population ecology concept is r/K selection theory, one of the first predictive models in ecology used to explain life-history evolution. The premise behind the r/K selection model is that natural selection pressures change according to population density. For example, when an island is first colonized, the density of individuals is low. The initial increase in population size is not limited by competition, leaving an abundance of available resources for rapid population growth. These early phases of population growth experience density-independent forces of natural selection, which is called r-selection. As the population becomes more crowded, it approaches the island's carrying capacity, thus forcing individuals to compete more heavily for fewer available resources. Under crowded conditions, the population experiences density-dependent forces of natural selection, called K-selection. In the r/K-selection model, the first variable r is the intrinsic rate of natural increase in population size and the second variable K is the carrying capacity of a population.
Different species evolve different life-history strategies spanning a continuum between these two selective forces. An r-selected species is one that has high birth rates, low levels of parental investment, and high rates of mortality before individuals reach maturity. Evolution favours high rates of fecundity in r-selected species. Many kinds of insects and invasive species exhibit r-selected characteristics. In contrast, a K-selected species has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Humans and elephants are examples of species exhibiting K-selected characteristics, including longevity and efficiency in the conversion of more resources into fewer offspring. Molecular ecology The important relationship between ecology and genetic inheritance predates modern techniques for molecular analysis. Molecular ecological research became more feasible with the development of rapid and accessible genetic technologies, such as the polymerase chain reaction (PCR). The rise of molecular technologies and the influx of research questions into this new ecological field resulted in the publication Molecular Ecology in 1992. Molecular ecology uses various analytical techniques to study genes in an evolutionary and ecological context. In 1994, John Avise also played a leading role in this area of science with the publication of his book, Molecular Markers, Natural History and Evolution. Newer technologies opened a wave of genetic analysis into organisms once difficult to study from an ecological or evolutionary standpoint, such as bacteria, fungi, and nematodes. Molecular ecology engendered a new research paradigm for investigating ecological questions considered otherwise intractable. Molecular investigations revealed previously obscured details in the tiny intricacies of nature and improved resolution into probing questions about behavioural and biogeographical ecology. For example, molecular ecology revealed promiscuous sexual behaviour and multiple male partners in tree swallows previously thought to be socially monogamous. In a biogeographical context, the marriage between genetics, ecology, and evolution resulted in a new sub-discipline called phylogeography. Human ecology Ecology is as much a biological science as it is a human science. Human ecology is an interdisciplinary investigation into the ecology of our species. "Human ecology may be defined: (1) from a bioecological standpoint as the study of man as the ecological dominant in plant and animal communities and systems; (2) from a bioecological standpoint as simply another animal affecting and being affected by his physical environment; and (3) as a human being, somehow different from animal life in general, interacting with physical and modified environments in a distinctive and creative way. A truly interdisciplinary human ecology will most likely address itself to all three." The term was formally introduced in 1921, but many sociologists, geographers, psychologists, and other disciplines were interested in human relations to natural systems centuries prior, especially in the late 19th century. The ecological complexities human beings are facing through the technological transformation of the planetary biome has brought on the Anthropocene. The unique set of circumstances has generated the need for a new unifying science called coupled human and natural systems that builds upon, but moves beyond the field of human ecology. 
Ecosystems tie into human societies through the critical and all-encompassing life-supporting functions they sustain. In recognition of these functions and the incapability of traditional economic valuation methods to see the value in ecosystems, there has been a surge of interest in social-natural capital, which provides the means to put a value on the stock and use of information and materials stemming from ecosystem goods and services. Ecosystems produce, regulate, maintain, and supply services of critical necessity and beneficial to human health (cognitive and physiological), economies, and they even provide an information or reference function as a living library giving opportunities for science and cognitive development in children engaged in the complexity of the natural world. Ecosystems relate importantly to human ecology as they are the ultimate base foundation of global economics as every commodity, and the capacity for exchange ultimately stems from the ecosystems on Earth. Restoration and management Ecology is an employed science of restoration, repairing disturbed sites through human intervention, in natural resource management, and in environmental impact assessments. Edward O. Wilson predicted in 1992 that the 21st century "will be the era of restoration in ecology". Ecological science has boomed in the industrial investment of restoring ecosystems and their processes in abandoned sites after disturbance. Natural resource managers, in forestry, for example, employ ecologists to develop, adapt, and implement ecosystem based methods into the planning, operation, and restoration phases of land-use. Ecological science is used in the methods of sustainable harvesting, disease, and fire outbreak management, in fisheries stock management, for integrating land-use with protected areas and communities, and conservation in complex geo-political landscapes. Relation to the environment The environment of ecosystems includes both physical parameters and biotic attributes. It is dynamically interlinked and contains resources for organisms at any time throughout their life cycle. Like ecology, the term environment has different conceptual meanings and overlaps with the concept of nature. Environment "includes the physical world, the social world of human relations and the built world of human creation." The physical environment is external to the level of biological organization under investigation, including abiotic factors such as temperature, radiation, light, chemistry, climate and geology. The biotic environment includes genes, cells, organisms, members of the same species (conspecifics) and other species that share a habitat. The distinction between external and internal environments, however, is an abstraction parsing life and environment into units or facts that are inseparable in reality. There is an interpenetration of cause and effect between the
G. Evelyn Hutchinson introduced a widely adopted definition of the niche in 1957: "the set of biotic and abiotic conditions in which a species is able to persist and maintain stable population sizes." The ecological niche is a central concept in the ecology of organisms and is sub-divided into the fundamental and the realized niche. The fundamental niche is the set of environmental conditions under which a species is able to persist. The realized niche is the set of environmental plus ecological conditions under which a species persists. The Hutchinsonian niche is defined more technically as a "Euclidean hyperspace whose dimensions are defined as environmental variables and whose size is a function of the number of values that the environmental values may assume for which an organism has positive fitness." Biogeographical patterns and range distributions are explained or predicted through knowledge of a species' traits and niche requirements. Species have functional traits that are uniquely adapted to the ecological niche. A trait is a measurable property, phenotype, or characteristic of an organism that may influence its survival. Genes play an important role in the interplay of development and environmental expression of traits. Resident species evolve traits that are fitted to the selection pressures of their local environment. This tends to afford them a competitive advantage and discourages similarly adapted species from having an overlapping geographic range. The competitive exclusion principle states that two species cannot coexist indefinitely by living off the same limiting resource; one will always out-compete the other. When similarly adapted species overlap geographically, closer inspection reveals subtle ecological differences in their habitat or dietary requirements. Some models and empirical studies, however, suggest that disturbances can stabilize the co-evolution and shared niche occupancy of similar species inhabiting species-rich communities. The habitat plus the niche is called the ecotope, which is defined as the full range of environmental and biological variables affecting an entire species. Niche construction Organisms are subject to environmental pressures, but they also modify their habitats. The regulatory feedback between organisms and their environment can affect conditions from local (e.g., a beaver pond) to global scales, over time and even after death, such as decaying logs or silica skeleton deposits from marine organisms. The process and concept of ecosystem engineering are related to niche construction, but the former relates only to the physical modifications of the habitat whereas the latter also considers the evolutionary implications of physical changes to the environment and the feedback this causes on the process of natural selection. Ecosystem engineers are defined as: "organisms that directly or indirectly modulate the availability of resources to other species, by causing physical state changes in biotic or abiotic materials. In so doing they modify, maintain and create habitats." The ecosystem engineering concept has stimulated a new appreciation for the influence that organisms have on the ecosystem and evolutionary process. The term "niche construction" is more often used in reference to the under-appreciated feedback mechanisms of natural selection imparting forces on the abiotic niche. An example of natural selection through ecosystem engineering occurs in the nests of social insects, including ants, bees, wasps, and termites. 
There is an emergent homeostasis or homeorhesis in the structure of the nest that regulates, maintains and defends the physiology of the entire colony. Termite mounds, for example, maintain a constant internal temperature through the design of air-conditioning chimneys. The structure of the nests themselves is subject to the forces of natural selection. Moreover, a nest can survive over successive generations, so that progeny inherit both genetic material and a legacy niche that was constructed before their time. Biome Biomes are larger units of organization that categorize regions of the Earth's ecosystems, mainly according to the structure and composition of vegetation. There are different methods to define the continental boundaries of biomes dominated by different functional types of vegetative communities that are limited in distribution by climate, precipitation, weather and other environmental variables. Biomes include tropical rainforest, temperate broadleaf and mixed forest, temperate deciduous forest, taiga, tundra, hot desert, and polar desert. Other researchers have recently categorized other biomes, such as the human and oceanic microbiomes. To a microbe, the human body is a habitat and a landscape. Microbiomes were discovered largely through advances in molecular genetics, which have revealed a hidden richness of microbial diversity on the planet. The oceanic microbiome plays a significant role in the ecological biogeochemistry of the planet's oceans. Biosphere The largest scale of ecological organization is the biosphere: the total sum of ecosystems on the planet. Ecological relationships regulate the flux of energy, nutrients, and climate all the way up to the planetary scale. For example, the dynamic history of the planetary atmosphere's CO2 and O2 composition has been affected by the biogenic flux of gases coming from respiration and photosynthesis, with levels fluctuating over time in relation to the ecology and evolution of plants and animals. Ecological theory has also been used to explain self-emergent regulatory phenomena at the planetary scale: the Gaia hypothesis, for example, is an application of holism in ecological theory. The Gaia hypothesis states that there is an emergent feedback loop generated by the metabolism of living organisms that maintains the core temperature of the Earth and atmospheric conditions within a narrow self-regulating range of tolerance. Population ecology Population ecology studies the dynamics of species populations and how these populations interact with the wider environment. A population consists of individuals of the same species that live, interact, and migrate through the same niche and habitat. A primary law of population ecology is the Malthusian growth model which states, "a population will grow (or decline) exponentially as long as the environment experienced by all individuals in the population remains constant." Simplified population models usually start with four variables: death, birth, immigration, and emigration. An example of an introductory population model describes a closed population, such as on an island, where immigration and emigration do not take place. Hypotheses are evaluated with reference to a null hypothesis which states that random processes create the observed data. 
In these island models, the rate of population change is described by dN/dT = bN - dN = (b - d)N = rN, where N is the total number of individuals in the population, b and d are the per capita rates of birth and death respectively, and r is the per capita rate of population change. Using these modeling techniques, Malthus' population principle of growth was later transformed into a model known as the logistic equation by Pierre Verhulst, dN(t)/dt = rN(t)(1 - αN(t)), where N(t) is the number of individuals measured as biomass density as a function of time, t, r is the maximum per-capita rate of change commonly known as the intrinsic rate of growth, and α is the crowding coefficient, which represents the reduction in population growth rate per individual added. The formula states that the population will grow to approach an equilibrium (where dN(t)/dt = 0) when the rates of increase and crowding are balanced, at N(t) = 1/α. A common, analogous model fixes this equilibrium value, 1/α, as K, known as the "carrying capacity," giving dN(t)/dt = rN(t)(1 - N(t)/K). (A short numerical sketch of this model appears below, following the discussion of metapopulations and migration.) Population ecology builds upon these introductory models to further understand demographic processes in real study populations. Commonly used types of data include life history, fecundity, and survivorship, and these are analysed using mathematical techniques such as matrix algebra. The information is used for managing wildlife stocks and setting harvest quotas. In cases where basic models are insufficient, ecologists may adopt different kinds of statistical methods, such as the Akaike information criterion, or use models that can become mathematically complex as "several competing hypotheses are simultaneously confronted with the data." Metapopulations and migration The concept of metapopulations was defined in 1969 as "a population of populations which go extinct locally and recolonize". Metapopulation ecology is another statistical approach that is often used in conservation research. Metapopulation models simplify the landscape into patches of varying levels of quality, and metapopulations are linked by the migratory behaviours of organisms. Animal migration is set apart from other kinds of movement because it involves the seasonal departure and return of individuals from a habitat. Migration is also a population-level phenomenon, as with the migration routes followed by plants as they occupied northern post-glacial environments. Plant ecologists use pollen records that accumulate and stratify in wetlands to reconstruct the timing of plant migration and dispersal relative to historic and contemporary climates. These migration routes involved an expansion of the range as plant populations expanded from one area to another. There is a larger taxonomy of movement, such as commuting, foraging, territorial behaviour, stasis, and ranging. Dispersal is usually distinguished from migration because it involves the one-way permanent movement of individuals from their birth population into another population. In metapopulation terminology, migrating individuals are classed as emigrants (when they leave a region) or immigrants (when they enter a region), and sites are classed either as sources or sinks. A site is a generic term that refers to places where ecologists sample populations, such as ponds or defined sampling areas in a forest. Source patches are productive sites that generate a seasonal supply of juveniles that migrate to other patch locations. 
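To make the growth models above concrete, the following minimal Python sketch numerically integrates the Verhulst (logistic) equation with simple Euler steps. The function name and all parameter values (initial population size, intrinsic growth rate r, carrying capacity K, step size) are illustrative assumptions rather than values from the text.

```python
# Minimal illustrative sketch: Euler integration of the Verhulst (logistic)
# growth model dN/dt = r * N * (1 - N / K) described above.
# All parameter values below are arbitrary assumptions for demonstration.

def logistic_growth(n0, r, k, dt=0.1, steps=1000):
    """Return a list of population sizes N(t) sampled every dt time units."""
    n, trajectory = n0, [n0]
    for _ in range(steps):
        dn_dt = r * n * (1.0 - n / k)   # growth slows as N approaches K
        n += dn_dt * dt
        trajectory.append(n)
    return trajectory

if __name__ == "__main__":
    # Hypothetical values: 10 individuals, intrinsic growth rate 0.5 per unit
    # time, carrying capacity K = 500 (equivalently a crowding coefficient
    # alpha = 1/K = 0.002).
    traj = logistic_growth(n0=10, r=0.5, k=500)
    print(f"start: {traj[0]:.1f}, end: {traj[-1]:.1f} (approaches K = 500)")
```

Run as written, the trajectory rises roughly exponentially at first and then levels off near the carrying capacity, which is the qualitative behaviour the logistic equation is meant to capture.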
Sink patches are unproductive sites that only receive migrants; the population at the site will disappear unless rescued by an adjacent source patch or environmental conditions become more favourable. Metapopulation models examine patch dynamics over time to answer potential questions about spatial and demographic ecology. The ecology of metapopulations is a dynamic process of extinction and colonization. Small patches of lower quality (i.e., sinks) are maintained or rescued by a seasonal influx of new immigrants. A dynamic metapopulation structure evolves from year to year, where some patches are sinks in dry years and are sources when conditions are more favourable. Ecologists use a mixture of computer models and field studies to explain metapopulation structure. Community ecology Community ecology is the study of the interactions among a collection of species that inhabit the same geographic area. Community ecologists study the determinants of patterns and processes for two or more interacting species. Research in community ecology might measure species diversity in grasslands in relation to soil fertility. It might also include the analysis of predator-prey dynamics, competition among similar plant species, or mutualistic interactions between crabs and corals. Ecosystem ecology Ecosystems may be habitats within biomes that form an integrated whole and a dynamically responsive system having both physical and biological complexes. Ecosystem ecology is the science of determining the fluxes of materials (e.g., carbon, phosphorus) between different pools (e.g., tree biomass, soil organic material). Ecosystem ecologists attempt to determine the underlying causes of these fluxes. Research in ecosystem ecology might measure primary production (g C/m^2) in a wetland in relation to decomposition and consumption rates (g C/m^2/y). This requires an understanding of the community connections between plants (i.e., primary producers) and the decomposers (e.g., fungi and bacteria). The underlying concept of an ecosystem can be traced back to 1864 in the published work of George Perkins Marsh ("Man and Nature"). Within an ecosystem, organisms are linked to the physical and biological components of their environment to which they are adapted. Ecosystems are complex adaptive systems where the interaction of life processes forms self-organizing patterns across different scales of time and space. Ecosystems are broadly categorized as terrestrial, freshwater, atmospheric, or marine. Differences stem from the nature of the unique physical environments that shape the biodiversity within each. A more recent addition to ecosystem ecology is technoecosystems, which are affected by or primarily the result of human activity. Food webs A food web is the archetypal ecological network. Plants capture solar energy and use it to synthesize simple sugars during photosynthesis. As plants grow, they accumulate nutrients and are eaten by grazing herbivores, and the energy is transferred through a chain of organisms by consumption. A simplified linear feeding pathway that moves from a basal trophic species to a top consumer is called a food chain. The larger interlocking pattern of food chains in an ecological community creates a complex food web. Food webs are a type of concept map or a heuristic device that is used to illustrate and study pathways of energy and material flows. Food webs are often limited relative to the real world. 
Complete empirical measurements are generally restricted to a specific habitat, such as a cave or a pond, and principles gleaned from food web microcosm studies are extrapolated to larger systems. Feeding relations require extensive investigations into the gut contents of organisms, which can be difficult to decipher, or stable isotopes can be used to trace the flow of nutrient diets and energy through a food web. Despite these limitations, food webs remain a valuable tool in understanding community ecosystems. Food webs exhibit principles of ecological emergence through the nature of trophic relationships: some species have many weak feeding links (e.g., omnivores) while some are more specialized with fewer stronger feeding links (e.g., primary predators). Theoretical and empirical studies identify non-random emergent patterns of few strong and many weak linkages that explain how ecological communities remain stable over time. Food webs are composed of subgroups where members in a community are linked by strong interactions, and the weak interactions occur between these subgroups. This increases food web stability. Step by step, lines or relations are drawn until a web of life is illustrated. Trophic levels A trophic level (from the Greek τροφή, trophē, meaning "food" or "feeding") is "a group of organisms acquiring a considerable majority of its energy from the lower adjacent level (according to ecological pyramids) nearer the abiotic source." Links in food webs primarily connect feeding relations or trophism among species. Biodiversity within ecosystems can be organized into trophic pyramids, in which the vertical dimension represents feeding relations that become further removed from the base of the food chain up toward top predators, and the horizontal dimension represents the abundance or biomass at each level. When the relative abundance or biomass of each species is sorted into its respective trophic level, they naturally sort into a 'pyramid of numbers'. Species are broadly categorized as autotrophs (or primary producers), heterotrophs (or consumers), and detritivores (or decomposers). Autotrophs are organisms that produce their own food (production is greater than respiration) by photosynthesis or chemosynthesis. Heterotrophs are organisms that must feed on others for nourishment and energy (respiration exceeds production). Heterotrophs can be further sub-divided into different functional groups, including primary consumers (strict herbivores), secondary consumers (carnivorous predators that feed exclusively on herbivores), and tertiary consumers (predators that feed on a mix of herbivores and predators). Omnivores do not fit neatly into a functional category because they eat both plant and animal tissues. It has been suggested that omnivores have a greater functional influence as predators because, compared to herbivores, they are relatively inefficient at grazing. Trophic levels are part of the holistic or complex systems view of ecosystems. Each trophic level contains unrelated species that are grouped together because they share common ecological functions, giving a macroscopic view of the system. While the notion of trophic levels provides insight into energy flow and top-down control within food webs, it is troubled by the prevalence of omnivory in real ecosystems. This has led some ecologists to "reiterate that the notion that species clearly aggregate into discrete, homogeneous trophic levels is fiction." 
Nonetheless, recent studies have shown that real trophic levels do exist, but "above the herbivore trophic level, food webs are better characterized as a tangled web of omnivores." Keystone species A keystone species is a species that is connected to a disproportionately large number of other species in the food-web. Keystone species have lower levels of biomass in the trophic pyramid relative to the importance of their role. The many connections that a keystone species holds mean that it maintains the organization and structure of entire communities. The loss of a keystone species results in a range of dramatic cascading effects that alter trophic dynamics and other food web connections and can cause the extinction of other species. Sea otters (Enhydra lutris) are commonly cited as an example of a keystone species because they limit the density of sea urchins that feed on kelp. If sea otters are removed from the system, the urchins graze until the kelp beds disappear, and this has a dramatic effect on community structure. Hunting of sea otters, for example, is thought to have led indirectly to the extinction of the Steller's sea cow (Hydrodamalis gigas). While the keystone species concept has been used extensively as a conservation tool, it has been criticized for being poorly defined from an operational stance. It is difficult to experimentally determine what species may hold a keystone role in each ecosystem. Furthermore, food web theory suggests that keystone species may not be common, so it is unclear how generally the keystone species model can be applied. Complexity Complexity is understood as a large computational effort needed to piece together numerous interacting parts exceeding the iterative memory capacity of the human mind. Global patterns of biological diversity are complex. This biocomplexity stems from the interplay among ecological processes that operate and influence patterns at different scales that grade into each other, such as transitional areas or ecotones spanning landscapes. Complexity stems from the interplay among levels of biological organization as energy and matter are integrated into larger units that superimpose onto the smaller parts. "What were wholes on one level become parts on a higher one." Small scale patterns do not necessarily explain large scale phenomena, otherwise captured in the expression (attributed to Aristotle) 'the whole is greater than the sum of its parts'. "Complexity in ecology is of at least six distinct types: spatial, temporal, structural, process, behavioral, and geometric." From these principles, ecologists have identified emergent and self-organizing phenomena that operate at different environmental scales of influence, ranging from molecular to planetary, and these require different explanations at each integrative level. Ecological complexity relates to the dynamic resilience of ecosystems that transition to multiple shifting steady-states directed by random fluctuations of history. Long-term ecological studies provide important track records to better understand the complexity and resilience of ecosystems over longer temporal and broader spatial scales. These studies are managed by the International Long Term Ecological Research Network (LTER). The longest experiment in existence is the Park Grass Experiment, which was initiated in 1856. Another example is the Hubbard Brook study, which has been in operation since 1960. Holism Holism remains a critical part of the theoretical foundation in contemporary ecological studies. 
Holism addresses the biological organization of life that self-organizes into layers of emergent whole systems that function according to non-reducible properties. This means that higher-order patterns of a whole functional system, such as an ecosystem, cannot be predicted or understood by a simple summation of the parts. "New properties emerge because the components interact, not because the basic nature of the components is changed." Ecological studies are necessarily holistic as opposed to reductionistic. Holism has three scientific meanings or uses that identify with ecology: 1) the mechanistic complexity of ecosystems, 2) the practical description of patterns in quantitative reductionist terms where correlations may be identified but nothing is understood about the causal relations without reference to the whole system, which leads to 3) a metaphysical hierarchy whereby the causal relations of larger systems are understood without reference to the smaller parts. Scientific holism differs from mysticism that has appropriated the same term. An example of metaphysical holism is identified in the trend of increased exterior thickness in shells of different species. The reason for a thickness increase can be understood through reference to principles of natural selection via predation without the need to reference or understand the biomolecular properties of the exterior shells. Relation to evolution Ecology and evolutionary biology are considered sister disciplines of the life sciences. Natural selection, life history, development, adaptation, populations, and inheritance are examples of concepts that thread equally into ecological and evolutionary theory. Morphological, behavioural, and genetic traits, for example, can be mapped onto evolutionary trees to study the historical development of a species in relation to their functions and roles in different ecological circumstances. In this framework, the analytical tools of ecologists and evolutionists overlap as they organize, classify, and investigate life through common systematic principles, such as phylogenetics or the Linnaean system of taxonomy. The two disciplines often appear together, such as in the title of the journal Trends in Ecology and Evolution. There is no sharp boundary separating ecology from evolution, and they differ more in their areas of applied focus. Both disciplines discover and explain emergent and unique properties and processes operating across different spatial or temporal scales of organization. While the boundary between ecology and evolution is not always clear, ecologists study the abiotic and biotic factors that influence evolutionary processes, and evolution can be rapid, occurring on ecological timescales as short as one generation. Behavioural ecology All organisms can exhibit behaviours. Even plants express complex behaviour, including memory and communication. Behavioural ecology is the study of an organism's behaviour in its environment and its ecological and evolutionary implications. Ethology is the study of observable movement or behaviour in animals. This could include investigations of motile sperm of plants, mobile phytoplankton, zooplankton swimming toward the female egg, the cultivation of fungi by weevils, the mating dance of a salamander, or social gatherings of amoeba. Adaptation is the central unifying concept in behavioural ecology. Behaviours can be recorded as traits and inherited in much the same way that eye and hair colour can. 
Behaviours can evolve by means of natural selection as adaptive traits conferring functional utilities that increase reproductive fitness. Predator-prey interactions are an introductory concept in food-web studies as well as in behavioural ecology. Prey species can exhibit different kinds of behavioural adaptations to predators, such as avoiding, fleeing, or defending themselves. Many prey species are faced with multiple predators that differ in the degree of danger posed. To be adapted to their environment and face predatory threats, organisms must balance their energy budgets as they invest in different aspects of their life history, such as growth, feeding, mating, socializing, or modifying their habitat. Hypotheses posited in behavioural ecology are generally based on adaptive principles of conservation, optimization, or efficiency. For example, "[t]he threat-sensitive predator avoidance hypothesis predicts that prey should assess the degree of threat posed by different predators and match their behaviour according to current levels of risk" or "[t]he optimal flight initiation distance occurs where expected postencounter fitness is maximized, which depends on the prey's initial fitness, benefits obtainable by not fleeing, energetic escape costs, and expected fitness loss due to predation risk." Elaborate sexual displays and posturing are encountered in the behavioural ecology of animals. The birds-of-paradise, for example, sing and display elaborate ornaments during courtship. These displays serve a dual purpose of signalling healthy or well-adapted individuals and desirable genes. The displays are driven by sexual selection as an advertisement of quality of traits among suitors. Cognitive ecology Cognitive ecology integrates theory and observations from evolutionary ecology and neurobiology, primarily cognitive science, in order to understand the effect that animal interaction with their habitat has on their cognitive systems and how those systems restrict behaviour within an ecological and evolutionary framework. "Until recently, however, cognitive scientists have not paid sufficient attention to the fundamental fact that cognitive traits evolved under particular natural settings. With consideration of the selection pressure on cognition, cognitive ecology can contribute intellectual coherence to the multidisciplinary study of cognition." As a study involving the 'coupling' or interactions between organism and environment, cognitive ecology is closely related to enactivism, a field based upon the view that "...we must see the organism and environment as bound together in reciprocal specification and selection...". Social ecology Social-ecological behaviours are notable in the social insects, slime moulds, social spiders, human society, and naked mole-rats, where eusociality has evolved. Social behaviours include reciprocally beneficial behaviours among kin and nest mates and evolve from kin and group selection. Kin selection explains altruism through genetic relationships, whereby an altruistic behaviour leading to death is rewarded by the survival of genetic copies distributed among surviving relatives. The social insects, including ants, bees, and wasps, are most famously studied for this type of relationship because the male drones are clones that share the same genetic make-up as every other male in the colony. In contrast, group selectionists find
This represents two minor sets (couples A-B and couples C-D) and one couple (couple E) who are "standing out" due to having no one to dance with. After one iteration of the dance, every active couple will have moved below the inactive couple in their minor set, which in the example would be thus: Inactive (couple B)/Active (couple A)/Inactive (couple D)/Active (couple C)/Out (couple E). For the next iteration, any inactive couple at the top (and any active couple at the bottom) will stand out, while any couple standing out will begin dancing as actives (if at the top) or inactives (if at the bottom). So the next iteration would begin as follows: Out (couple B)/Active (couple A)/Inactive (couple D)/Active (couple C)/Inactive (couple E). The minor sets now contain couples A-D and couples C-E, while couple B is "standing out." Dances in other forms progress differently, though the "triple minor" progression is quite similar. Progression, double or triple - a longways dance has a double progression if the arrangement of couples into minor sets advances twice during one iteration of the dance instead of just once. A triple-progression dance advances thrice during one iteration. Proper - with the man on the left and the woman on the right, from the perspective of someone facing the music. Improper is the opposite. The terms carry no value judgment, but only indicate whether one is on one's "home" side. A dance in duple-minor longways form is termed "improper" if the active couples are improper by default; this is the exception in English country dance, but the rule in contra dance. Right and left - see changes of right and left. Set - a dancer steps right, closes with left foot and shifts weight to it, then steps back to the right foot (right-together-step); then repeats the process mirror-image (left-together-step). In some areas, such as the Society for Creative Anachronism, it is done starting to the left. It may be done in place or advancing. Often followed by a turn single. In Scottish country dance there are several variations; in contra dance its place is generally taken by a balance right and left. Not to be confused with terms indicating groups of dancers, like longways set or minor set. Set and link - a figure done by a pair of dancers and simultaneously by another pair of dancers who are facing them. Most commonly this means that the men do it facing the women, while the women do it facing the men. First, all dancers set; then the dancer on the left of each pair dances a turn single right, while also moving to the right, to end in his or her neighbor's place. Meanwhile the dancer on the right of each pair casts to the left into his or her neighbor's place; thus the men have traded places with each other, and so have the women. This figure is most commonly found in Scottish country dance. Sicilian circle - a type of dance formation, roughly equivalent to a longways set rolled into a ring. Every couple stands along the line of a large circle, facing another couple; thus half of the couples face clockwise, while the other half face counterclockwise. Since, unlike the longways set, the Sicilian circle has no place for dancers to "stand out," Sicilian circle dances must be done by an even number of couples. The progression is similar to that of a "duple minor," but since there is nowhere for couples to reverse direction, every clockwise couple will only dance with the counterclockwise couples (and vice versa). 
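Because the duple-minor progression described above is effectively a small algorithm, a toy simulation can make it easier to follow. The Python sketch below is only an illustration under stated assumptions: the couple labels, the starting arrangement, and the re-entry rules follow the A-E example in the text, and it is not a general engine for other longways forms such as triple minors.

```python
# Toy sketch of the "duple minor" progression described above. Couples are
# listed from the head (top) of the longways set to the foot; each carries a
# status of 'active', 'inactive' or 'out'. The starting arrangement mirrors
# the A-E example in the text; everything else is an illustrative assumption.
# The sketch assumes a well-formed arrangement (every active has an inactive
# couple directly below it), as in the example.

def progress(set_order):
    """Dance one iteration and re-form the minor sets for the next one."""
    # 1. Within each minor set the active couple moves below its inactive couple.
    danced, i = [], 0
    while i < len(set_order):
        couple, status = set_order[i]
        if status == "active":
            danced.append(set_order[i + 1])        # inactive couple moves up
            danced.append((couple, "active"))      # active couple moves down
            i += 2
        else:                                      # inactive without a partner,
            danced.append((couple, status))        # or a couple standing out
            i += 1
    # 2. Re-form the sets: couples that have progressed off either end stand
    #    out; couples that were standing out come back in, as actives at the
    #    head and as inactives at the foot.
    reformed = []
    for idx, (couple, status) in enumerate(danced):
        at_head, at_foot = idx == 0, idx == len(danced) - 1
        if status == "inactive" and at_head:
            reformed.append((couple, "out"))
        elif status == "active" and at_foot:
            reformed.append((couple, "out"))
        elif status == "out":
            reformed.append((couple, "active" if at_head else "inactive"))
        else:
            reformed.append((couple, status))
    return reformed

# Starting arrangement from the example in the text.
order = [("A", "active"), ("B", "inactive"),
         ("C", "active"), ("D", "inactive"), ("E", "out")]
for _ in range(3):
    order = progress(order)
    print(order)
```

The first printed state reproduces the snapshot given above: couple B stands out at the head, couple E re-enters at the foot, and the minor sets are A-D and C-E.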
Siding - two dancers go forward in four counts to meet side by side, then back in four counts to where they started the figure. As depicted by Feuillet, this is done right side by right side the first time, left by left the second time. In Cecil Sharp's reconstruction, the dancers pass by left shoulder (in some versions holding hands), turn to face each other, then return along the same path, passing by right shoulder; this is then repeated. So-called Cecil Sharp siding is no longer considered historical, but is still used on its own merits. Standard siding is sometimes called Pat Shaw siding (after its reconstructor) to distinguish it from Cecil Sharp siding. Single - two steps in any direction, closing feet on the second step. The second step tends to be interpreted as a closing action in which weight usually stays on the same foot as before, consistent with descriptions from Renaissance sources. Slipping circle (left or right) - dancers take hands
hey, like the modern circular hey but adapted to the straight sides of a longways formation. Clockwise - in a ring, move to one's left. In a turn single, turn to the right. Contrary - your contrary is not your partner. In Playford's original notation, this term meant the same thing that Corner (or sometimes Opposite) means today. Corner - in a two-couple minor set, the dancer diagonally opposite one. The first man and the second woman are first corners, while the first woman and second man are second corners. In other dance formations, it has similar meanings. Counter-clockwise - the opposite of clockwise - in a ring, move right. In a turn single, turn to the left. Cross hands - face and give left to left and right to right. Cross over or pass - change places with another dancer moving forward and passing by the right shoulder, unless otherwise directed. Cross and go below - cross as above and go outside below one couple, ending improper. Double - four steps forward or back, closing the feet on the 4th step (see "Single"). Fall (back) - dance backwards. Figure of 8 - a weaving figure in which a moving couple crosses between a standing couple and casts around them in a figure 8 pattern. To do this once, ending in one's partner's place, is a half figure of 8; to do it twice, returning to one's own place, is a full figure of 8. The right of way in the cross has traditionally been given to the lady; some communities prefer to give it to whichever dancer is coming from the left-hand side. In a double figure of 8, the other couple does not stand still, but performs their own figure of 8 simultaneously; they begin with the cast and end with the cross to avoid collision. Forward - lead or move in the direction you are facing. Grand chain - a handing hey (changes of right and left) done in a circle of more than two couples. Gypsy - two dancers move around each other in a circular path while facing each other. Hands across - right or left hands are given to corners, and dancers move in the direction they face. In contra dance, instead of taking one's corner's hand, one grasps the wrist of the next dancer. Also known as a star right/left. Hands three, four etc. - the designated number of dancers form a ring and move around in the direction indicated, usually first to the left and back to the right. Head and foot - the head of a longways set is the end with the music; the foot is the other end. Toward the head is "up," and toward the foot is "down." Hey - a weaving figure in which dancers move in single file along a set track, passing one another on alternating sides (see circular hey and straight hey). In Scottish country dance, the hey is known as the reel. "Hole in the Wall" cross - a type of cross. In a regular cross, the dancers walk past each other and turn upon reaching the other line; in a "Hole in the Wall" cross, they meet in the middle, make a brief half-turn without hands, and back into one another's place, maintaining eye contact the while. Named for "Hole in the Wall," a dance in which it appears. Honour - couples step forward and right, close, shift weight, and curtsey or bow, then (usually) repeat to their left. In the time of Playford's original manual, a woman's curtsey was similar to the modern one, but a man's honour (or reverence) kept the upper body upright and involved sliding the left leg forward while bending the right knee. Improper - see proper. 
Ladies' chain - a figure in which ladies dance first with each other in the center of the set and then with the gentlemen on the sides. In its simplest form, two ladies begin in second corner positions (nearer the head on the women's line and nearer the foot on the men's line). The ladies pass each other by right hand and turn with the gentlemen by left hand, approximately once around, to end with the ladies in each other's place and the gentlemen where they began. The figure can be extended to more couples in a ring, as long as the dancers in the ring are alternating between gentlemen and ladies. If the gentlemen turn the ladies only by left hand, that is an open ladies' chain; if they also place their right hands on the ladies' backs during the turn, that is a closed ladies' chain. In English country dance, both closed and open ladies' chains are to be found, and the gentlemen make a short cast up or down the set to meet the ladies; in contra dance, only the closed ladies' chain is done, and the gentlemen sidestep to meet the ladies. The men's chain is a simple gender reversal, but is a much rarer figure. Lead - join inside hands and walk in a certain direction. To lead up or down is to walk toward or away from the head of the set; to lead out is to walk away from the other line of dancers. Link - see set and link. Longways set - a line of couples dancing together. This is usually "longways for as many as will," indicating that any number of couples may join the longways set—although some dances require a three- or four-couple longways set. If the longways set is not restricted to three or four couples, it will be subdivided into minor sets of two or three couples each. "Mad Robin" figure - a figure in which one couple dances around their respective neighbours. Men take one step forward and then slide to the right passing in front of their neighbour, then step backward and slide left behind their neighbour. Conversely women take one step backward and then slide to the left passing behind their neighbour, then step forward and slide right in front of their neighbour. In one version, the dancer who is going outside the set at the moment casts out to begin that motion; in the other, the active couple maintains eye contact. The term Mad Robin comes from the name of the dance which originated the figure. A version involving all four dancers was developed for contra dancing and later readmitted into some modern English dances. Minor set - a longways set is subdivided into several minor sets. In a "duple minor" dance, every two couples form a minor set. In a "triple minor" dance, every three couples form a minor set. The active couple is always the couple in each minor set who are closest to the head. After every iteration of the dance, the progression will create new minor sets for the next iteration. Neighbour - the person you are standing beside, but not your partner. Opposite - the person you are facing, if you are not facing your partner. Poussette - two dancers face, give both hands and change places as a couple with two adjacent dancers. One pair moves a double toward one wall,
Ecosystem ecologists study the flow of energy and material through ecological systems. Ecosystem processes External and internal factors Ecosystems are controlled by both external and internal factors. External factors, also called state factors, control the overall structure of an ecosystem and the way things work within it, but are not themselves influenced by the ecosystem. On broad geographic scales, climate is the factor that "most strongly determines ecosystem processes and structure". Climate determines the biome in which the ecosystem is embedded. Rainfall patterns and seasonal temperatures influence photosynthesis and thereby determine the amount of energy available to the ecosystem. Parent material determines the nature of the soil in an ecosystem, and influences the supply of mineral nutrients. Topography also controls ecosystem processes by affecting things like microclimate, soil development and the movement of water through a system. For example, an ecosystem situated in a small depression on the landscape can be quite different from one present on an adjacent steep hillside. Other external factors that play an important role in ecosystem functioning include time and potential biota, the organisms that are present in a region and could potentially occupy a particular site. Ecosystems in similar environments that are located in different parts of the world can end up doing things very differently simply because they have different pools of species present. The introduction of non-native species can cause substantial shifts in ecosystem function. Unlike external factors, internal factors in ecosystems not only control ecosystem processes but are also controlled by them. While the resource inputs are generally controlled by external processes like climate and parent material, the availability of these resources within the ecosystem is controlled by internal factors like decomposition, root competition or shading. Other factors like disturbance, succession or the types of species present are also internal factors. Primary production Primary production is the production of organic matter from inorganic carbon sources. This mainly occurs through photosynthesis. The energy incorporated through this process supports life on earth, while the carbon makes up much of the organic matter in living and dead biomass, soil carbon and fossil fuels. It also drives the carbon cycle, which influences global climate via the greenhouse effect. Through the process of photosynthesis, plants capture energy from light and use it to combine carbon dioxide and water to produce carbohydrates and oxygen. The photosynthesis carried out by all the plants in an ecosystem is called the gross primary production (GPP). About half of the GPP is respired by plants in order to provide the energy that supports their growth and maintenance. The remainder, that portion of GPP that is not used up by respiration, is known as the net primary production (NPP). Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis. 
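As a concrete illustration of the carbon bookkeeping described above, the short Python sketch below computes net primary production (NPP) as gross primary production (GPP) minus autotrophic (plant) respiration. The flux values are hypothetical round numbers chosen only to mirror the statement that about half of GPP is respired by plants; they are not measurements from the text.

```python
# Illustrative carbon bookkeeping for the quantities described above.
# NPP = GPP - autotrophic (plant) respiration. Values are hypothetical,
# in grams of carbon per square metre per year (g C/m^2/y).

def net_primary_production(gpp, plant_respiration):
    """Net primary production: the share of GPP not respired by the plants."""
    return gpp - plant_respiration

gpp = 1000.0                        # hypothetical gross primary production
ra = 0.5 * gpp                      # "about half of GPP is respired by plants"
npp = net_primary_production(gpp, ra)
print(f"GPP = {gpp:.0f}, autotrophic respiration = {ra:.0f}, NPP = {npp:.0f} g C/m^2/y")
```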
Energy flow Energy and carbon enter ecosystems through photosynthesis, are incorporated into living tissue, transferred to other organisms that feed on the living and dead plant matter, and eventually released through respiration. The carbon and energy incorporated into plant tissues (net primary production) is either consumed by animals while the plant is alive, or it remains uneaten when the plant tissue dies and becomes detritus. In terrestrial ecosystems, the vast majority of the net primary production ends up being broken down by decomposers. The remainder is consumed by animals while still alive and enters the plant-based trophic system. After plants and animals die, the organic matter contained in them enters the detritus-based trophic system. Ecosystem respiration is the sum of respiration by all living organisms (plants, animals, and decomposers) in the ecosystem. Net ecosystem production is the difference between gross primary production (GPP) and ecosystem respiration. In the absence of disturbance, net ecosystem production is equivalent to the net carbon accumulation in the ecosystem. Energy can also be released from an ecosystem through disturbances such as wildfire or transferred to other ecosystems (e.g., from a forest to a stream to a lake) by erosion. In aquatic systems, the proportion of plant biomass that gets consumed by herbivores is much higher than in terrestrial systems. In trophic systems, photosynthetic organisms are the primary producers. The organisms that consume their tissues are called primary consumers or secondary producers—herbivores. Organisms which feed on microbes (bacteria and fungi) are termed microbivores. Animals that feed on primary consumers—carnivores—are secondary consumers. Each of these constitutes a trophic level. The sequence of consumption—from plant to herbivore, to carnivore—forms a food chain. Real systems are much more complex than this—organisms will generally feed on more than one form of food, and may feed at more than one trophic level. Carnivores may capture some prey that is part of a plant-based trophic system and others that are part of a detritus-based trophic system (a bird that feeds both on herbivorous grasshoppers and earthworms, which consume detritus). Real systems, with all these complexities, form food webs rather than food chains. Decomposition The carbon and nutrients in dead organic matter are broken down by a group of processes known as decomposition. This releases nutrients that can then be re-used for plant and microbial production and returns carbon dioxide to the atmosphere (or water) where it can be used for photosynthesis. In the absence of decomposition, the dead organic matter would accumulate in an ecosystem, and nutrients and atmospheric carbon dioxide would be depleted. Decomposition processes can be separated into three categories—leaching, fragmentation and chemical alteration of dead material. As water moves through dead organic matter, it dissolves and carries with it the water-soluble components. These are then taken up by organisms in the soil, react with mineral soil, or are transported beyond the confines of the ecosystem (and are considered lost to it). Newly shed leaves and newly dead animals have high concentrations of water-soluble components and include sugars, amino acids and mineral nutrients. Leaching is more important in wet environments and less important in dry ones. Fragmentation processes break organic material into smaller pieces, exposing new surfaces for colonization by microbes. 
Freshly shed leaf litter may be inaccessible due to an outer layer of cuticle or bark, and cell contents are protected by a cell wall. Newly dead animals may be covered by an exoskeleton. Fragmentation processes, which break through these protective layers, accelerate the rate of microbial decomposition. Animals fragment detritus as they hunt for food, as does passage through the gut. Freeze-thaw cycles and cycles of wetting and drying also fragment dead material. The chemical alteration of the dead organic matter is primarily achieved through bacterial and fungal action. Fungal hyphae produce enzymes that can break through the tough outer structures surrounding dead plant material. They also produce enzymes that break down lignin, which allows them access to both cell contents and the nitrogen in the lignin. Fungi can transfer carbon and nitrogen through their hyphal networks and thus, unlike bacteria, are not dependent solely on locally available resources. Decomposition rates Decomposition rates vary among ecosystems. The rate of decomposition is governed by three sets of factors—the physical environment (temperature, moisture, and soil properties), the quantity and quality of the dead material available to decomposers, and the nature of the microbial community itself. Temperature controls the rate of microbial respiration; the higher the temperature, the faster the microbial decomposition occurs. Temperature also affects soil moisture, which affects decomposition. Freeze-thaw cycles also affect decomposition—freezing temperatures kill soil microorganisms, which allows leaching to play a more important role in moving nutrients around. This can be especially important as the soil thaws in the spring, creating a pulse of nutrients that become available. Decomposition rates are low under very wet or very dry conditions. Decomposition rates are highest in warm, moist conditions with adequate levels of oxygen. Wet soils tend to become deficient in oxygen (this is especially true in wetlands), which slows microbial growth. In dry soils, decomposition slows as well, but bacteria continue to grow (albeit at a slower rate) even after soils become too dry to support plant growth. Dynamics and resilience Ecosystems are dynamic entities. They are subject to periodic disturbances and are always in the process of recovering from past disturbances. When a perturbation occurs, an ecosystem responds by moving away from its initial state. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Resilience thinking also includes humanity as an integral part of the biosphere where we are dependent on ecosystem services for our survival and must build and maintain their natural capacities to withstand shocks and disturbances. Time plays a central role over a wide range, for example, in the slow development of soil from bare rock and the faster recovery of a community from disturbance. Disturbance also plays an important role in ecological processes. F. Stuart Chapin and coauthors define disturbance as "a relatively discrete event in time that removes plant biomass". This can range from herbivore outbreaks, treefalls, fires, hurricanes, floods, glacial advances, to volcanic eruptions. 
Such disturbances can cause large changes in plant, animal and microbe populations, as well as soil organic matter content. Disturbance is followed by succession, a "directional change in ecosystem structure and functioning resulting from biotically driven changes in resource supply." The frequency and severity of disturbance determine the way it affects ecosystem function. A major disturbance like a volcanic eruption or glacial advance and retreat leaves behind soils that lack plants, animals or organic matter. Ecosystems that experience such disturbances undergo primary succession. Less severe disturbances like forest fires, hurricanes or cultivation result in secondary succession and a faster recovery. More severe and more frequent disturbances result in longer recovery times. From one year to another, ecosystems experience variation in their biotic and abiotic environments. A drought, a colder than usual winter, and a pest outbreak are all examples of short-term variability in environmental conditions. Animal populations vary from year to year, building
Disturbance also plays an important role in ecological processes. F. Stuart Chapin and coauthors define disturbance as "a relatively discrete event in time that removes plant biomass". This can range from herbivore outbreaks, treefalls, fires, hurricanes, floods, glacial advances, to volcanic eruptions. Such disturbances can cause large changes in plant, animal and microbe populations, as well as soil organic matter content. Disturbance is followed by succession, a "directional change in ecosystem structure and functioning resulting from biotically driven changes in resource supply."

The frequency and severity of disturbance determine the way it affects ecosystem function. A major disturbance like a volcanic eruption or glacial advance and retreat leaves behind soils that lack plants, animals or organic matter. Ecosystems that experience such disturbances undergo primary succession. Less severe disturbances like forest fires, hurricanes or cultivation result in secondary succession and a faster recovery. More severe and more frequent disturbances result in longer recovery times.

From one year to another, ecosystems experience variation in their biotic and abiotic environments. A drought, a colder than usual winter, and a pest outbreak are all examples of short-term variability in environmental conditions. Animal populations vary from year to year, building up during resource-rich periods and crashing as they overshoot their food supply. Longer-term changes also shape ecosystem processes. For example, the forests of eastern North America still show legacies of cultivation which ceased in 1850, when large areas reverted to forest. Another example is the methane production in eastern Siberian lakes that is controlled by organic matter which accumulated during the Pleistocene.

Nutrient cycling

Ecosystems continually exchange energy and carbon with the wider environment. Mineral nutrients, on the other hand, are mostly cycled back and forth between plants, animals, microbes and the soil. Most nitrogen enters ecosystems through biological nitrogen fixation, is deposited through precipitation, dust, or gases, or is applied as fertilizer. Most terrestrial ecosystems are nitrogen-limited in the short term, making nitrogen cycling an important control on ecosystem production. Over the long term, phosphorus availability can also be critical.

Macronutrients which are required by all plants in large quantities include the primary nutrients (which are most limiting as they are used in largest amounts): nitrogen, phosphorus, and potassium. Secondary major nutrients (less often limiting) include calcium, magnesium, and sulfur. Micronutrients required by all plants in small quantities include boron, chloride, copper, iron, manganese, molybdenum, and zinc. Finally, there are also beneficial nutrients which may be required by certain plants or by plants under specific environmental conditions: aluminum, cobalt, iodine, nickel, selenium, silicon, sodium, and vanadium.

Until modern times, nitrogen fixation was the major source of nitrogen for ecosystems. Nitrogen-fixing bacteria either live symbiotically with plants or live freely in the soil. The energetic cost is high for plants that support nitrogen-fixing symbionts—as much as 25% of gross primary production when measured in controlled conditions. Many members of the legume plant family support nitrogen-fixing symbionts. Some cyanobacteria are also capable of nitrogen fixation. These are phototrophs, which carry out photosynthesis.
Like other nitrogen-fixing bacteria, they can either be free-living or have symbiotic relationships with plants. Other sources of nitrogen include acid deposition
logarithm of $a$ to base $e$. Thus, when the value of $a$ is set to $e$, this limit is equal to $1$, and so one arrives at the following simple identity:

$$\frac{d}{dx}e^x = e^x.$$

Consequently, the exponential function with base $e$ is particularly suited to doing calculus. Choosing $e$ (as opposed to some other number) as the base of the exponential function makes calculations involving the derivatives much simpler.

Another motivation comes from considering the derivative of the base-$a$ logarithm (i.e., $\log_a x$), for $x > 0$:

$$\frac{d}{dx}\log_a x = \frac{1}{x}\log_a\!\left(\lim_{u\to 0}(1+u)^{1/u}\right),$$

where the substitution $u = h/x$ was made. The base-$a$ logarithm of $e$ is 1, if $a$ equals $e$. So symbolically,

$$\frac{d}{dx}\log_e x = \frac{1}{x}.$$

The logarithm with this special base is called the natural logarithm, and is denoted as $\ln$; it behaves well under differentiation since there is no undetermined limit to carry through the calculations.

Thus, there are two ways of selecting such special numbers $a$. One way is to set the derivative of the exponential function $a^x$ equal to $a^x$, and solve for $a$. The other way is to set the derivative of the base-$a$ logarithm to $1/x$ and solve for $a$. In each case, one arrives at a convenient choice of base for doing calculus. It turns out that these two solutions for $a$ are actually the same: the number $e$.

Alternative characterizations

Other characterizations of $e$ are also possible: one is as the limit of a sequence, another is as the sum of an infinite series, and still others rely on integral calculus. So far, the following two (equivalent) properties have been introduced:

The number $e$ is the unique positive real number such that $\frac{d}{dt}e^t = e^t$.
The number $e$ is the unique positive real number such that $\frac{d}{dt}\log_e t = \frac{1}{t}$.

The following four characterizations can be proven to be equivalent:

1. $e$ is the limit $\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$.
2. $e$ is the sum of the infinite series $\sum_{n=0}^{\infty}\frac{1}{n!}$.
3. $e$ is the unique positive real number such that $\int_1^e \frac{1}{t}\,dt = 1$.
4. $e$ is the unique positive real number such that the function $e^t$ is its own derivative, i.e., $\lim_{h\to 0}\frac{e^h-1}{h}=1$.

Properties

Calculus

As in the motivation, the exponential function $e^x$ is important in part because it is the unique nontrivial function that is its own derivative (up to multiplication by a constant):

$$\frac{d}{dx}e^x = e^x,$$

and therefore its own antiderivative as well:

$$\int e^x\,dx = e^x + C.$$

Inequalities

The number $e$ is the unique real number such that

$$\left(1+\frac{1}{x}\right)^x < e < \left(1+\frac{1}{x}\right)^{x+1}$$

for all positive $x$. Also, we have the inequality $e^x \ge x + 1$ for all real $x$, with equality if and only if $x = 0$. Furthermore, $e$ is the unique base of the exponential for which the inequality $a^x \ge x + 1$ holds for all $x$. This is a limiting case of Bernoulli's inequality.

Exponential-like functions

Steiner's problem asks to find the global maximum for the function

$$f(x) = x^{1/x}.$$

This maximum occurs precisely at $x = e$. The value of this maximum is 1.4446 6786 1009 7661 3365... (accurate to 20 decimal places). For proof, the inequality $e^y \ge y + 1$, from above, evaluated at $y = x/e - 1$ and simplifying gives $e^{x/e} \ge x$. So $e^{1/e} \ge x^{1/x}$ for all positive $x$.

Similarly, $x = 1/e$ is where the global minimum occurs for the function $f(x) = x^x$, defined for positive $x$. More generally, for the function $f(x) = x^{x^{-n}}$ the global maximum for positive $x$ occurs at $x = e^{1/n}$ for any $n > 0$; and the global minimum occurs at $x = e^{1/n}$ for any $n < 0$.

The infinite tetration $x^{x^{x^{\cdots}}}$ or ${}^{\infty}x$ converges if and only if $e^{-e} \le x \le e^{1/e}$ (or approximately between 0.0660 and 1.4447), due to a theorem of Leonhard Euler.

Number theory

The real number $e$ is irrational. Euler proved this by showing that its simple continued fraction expansion is infinite. (See also Fourier's proof that $e$ is irrational.) Furthermore, by the Lindemann–Weierstrass theorem, $e$ is transcendental, meaning that it is not a solution of any non-constant polynomial equation with rational coefficients. It was the first number to be proved transcendental without having been specifically constructed for this purpose (compare with Liouville number); the proof was given by Charles Hermite in 1873. It is conjectured that $e$ is normal, meaning that when $e$ is expressed in any base the possible digits in that base are uniformly distributed (occur with equal probability in any sequence of given length).
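As a quick numerical check (not a proof) of two of the facts stated above, the following Python sketch scans a grid of positive values for the maximum of $x^{1/x}$ from Steiner's problem and spot-checks the inequality $e^x \ge x + 1$ at a few points; the grid range, step size and sample points are arbitrary choices made for the example.

```python
import math

# Numerical illustration of two facts stated above:
# 1) Steiner's problem: f(x) = x**(1/x) attains its global maximum at x = e.
# 2) The inequality e**x >= x + 1 holds for all real x, with equality only at x = 0.

def f(x):
    return x ** (1.0 / x)

# Scan a grid of positive x values and locate the grid point where f is largest.
xs = [0.5 + 0.0001 * k for k in range(100_000)]   # 0.5, 0.5001, ..., 10.4999
best_x = max(xs, key=f)
print(best_x, math.e)        # best_x is close to e = 2.71828...
print(f(math.e))             # 1.4446678610..., the maximum value quoted above

# Spot-check e**x >= x + 1 at a few points (equality only at x = 0).
for x in (-2.0, -0.5, 0.0, 0.5, 3.0):
    assert math.exp(x) >= x + 1.0
print("e**x >= x + 1 held at all sampled points")
```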
Complex numbers

The exponential function $e^x$ may be written as a Taylor series

$$e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = \sum_{n=0}^{\infty}\frac{x^n}{n!}.$$

Because this series is convergent for every complex value of $x$, it is commonly used to extend the definition of $e^x$ to the complex numbers. This, with the Taylor series for $\sin x$ and $\cos x$, allows one to derive Euler's formula:

$$e^{ix} = \cos x + i\sin x,$$

which holds for every complex $x$. The special case with $x = \pi$ is Euler's identity:

$$e^{i\pi} + 1 = 0,$$

from which it follows that, in the principal branch of the logarithm,

$$\ln(-1) = i\pi.$$

Furthermore, using the laws for exponentiation,

$$(\cos x + i\sin x)^n = \left(e^{ix}\right)^n = e^{inx} = \cos nx + i\sin nx,$$

which is de Moivre's formula. The expression $\cos x + i\sin x$ is sometimes referred to as $\operatorname{cis}(x)$. The expressions of $\sin x$ and $\cos x$ in terms of the exponential function can be deduced:

$$\sin x = \frac{e^{ix} - e^{-ix}}{2i}, \qquad \cos x = \frac{e^{ix} + e^{-ix}}{2}.$$

Differential equations

The family of functions $y(x) = Ce^x$, where $C$ is any real number, is the solution to the differential equation $y' = y$.

Representations

The number $e$ can be represented in a variety of ways: as an infinite series, an infinite product, a continued fraction, or a limit of a sequence. Two of these representations, often used in introductory calculus courses, are the limit

$$e = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n,$$

given above, and the series

$$e = \sum_{n=0}^{\infty}\frac{1}{n!},$$

obtained by evaluating at $x = 1$ the above power series representation of $e^x$. Less common is the simple continued fraction

$$e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, \ldots],$$

which written out looks like

$$e = 2 + \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{4 + \cdots}}}}}.$$

This continued fraction for $e$ converges three times as quickly:

$$e = 1 + \cfrac{2}{1 + \cfrac{1}{6 + \cfrac{1}{10 + \cfrac{1}{14 + \cdots}}}}.$$

Many other series, sequence, continued fraction, and infinite product representations of $e$ have been proved.

Stochastic representations

In addition to exact analytical expressions for representation of $e$, there are stochastic techniques for estimating $e$. One such approach begins with an infinite sequence of independent random variables $X_1$, $X_2$, ..., drawn from the uniform distribution on [0, 1]. Let $V$ be the least number $n$ such that the sum of the first $n$ observations exceeds 1:

$$V = \min\{n : X_1 + X_2 + \cdots + X_n > 1\}.$$

Then the expected value of $V$ is $e$: $E(V) = e$.

Known digits

The number of known digits of $e$ has increased substantially during the last decades. This is due both to the increased performance of computers and to algorithmic improvements. Since around 2010, the proliferation of modern high-speed desktop computers has made it feasible for most amateurs to compute trillions of digits of $e$ within acceptable amounts of time. The most recent record was set on Dec 5, 2020, where $e$ has been calculated to
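Returning to the stochastic representation above, it translates directly into a small Monte Carlo experiment. The following Python sketch (the function names and trial count are illustrative choices) draws uniform variables until their running sum exceeds 1, averages the number of draws over many trials, and compares the result with the rapidly converging series sum of $1/n!$.

```python
import random

# Monte Carlo sketch of the stochastic representation described above: draw
# independent uniform(0, 1) variables until their running sum exceeds 1, record
# how many draws were needed, and average that count over many trials. The
# expected count is e, so the average slowly approaches 2.71828...

def draws_until_sum_exceeds_one(rng):
    total, count = 0.0, 0
    while total <= 1.0:
        total += rng.random()
        count += 1
    return count

def estimate_e(trials=200_000, seed=0):
    rng = random.Random(seed)
    return sum(draws_until_sum_exceeds_one(rng) for _ in range(trials)) / trials

print(estimate_e())          # roughly 2.718, within Monte Carlo error

# For comparison, the series e = sum(1/n!) converges after just a few terms.
term, e_series = 1.0, 0.0
for n in range(20):
    e_series += term         # add 1/n!
    term /= (n + 1)          # next term is 1/(n+1)!
print(e_series)              # 2.718281828459045...
```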
There are various other characterizations.

$e$ is sometimes called Euler's number (not to be confused with Euler's constant $\gamma$), after the Swiss mathematician Leonhard Euler, or Napier's constant. The constant was discovered by the Swiss mathematician Jacob Bernoulli while studying compound interest.

The number $e$ is of great importance in mathematics, alongside 0, 1, $\pi$, and $i$. All five appear in one formulation of Euler's identity, and play important and recurring roles across mathematics. Like the constant $\pi$, $e$ is irrational (that is, it cannot be represented as a ratio of integers) and transcendental (that is, it is not a root of any non-zero polynomial with rational coefficients). To 50 decimal places the value of $e$ is:

2.71828182845904523536028747135266249775724709369995...

History

The first references to the constant were published in 1618 in the table of an appendix of a work on logarithms by John Napier. However, this did not contain the constant itself, but simply a list of logarithms calculated from the constant. It is assumed that the table was written by William Oughtred. The discovery of the constant itself is credited to Jacob Bernoulli in 1683, who attempted to find the value of the following expression (which is equal to $e$):

$$\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n.$$

The first known use of the constant, represented by the letter $b$, was in correspondence from Gottfried Leibniz to Christiaan Huygens in 1690 and 1691. Leonhard Euler introduced the letter $e$ as the base for natural logarithms, writing in a letter to Christian Goldbach on 25 November 1731. Euler started to use the letter $e$ for the constant in 1727 or 1728, in an unpublished paper on explosive forces in cannons, while the first appearance of $e$ in a publication was in Euler's Mechanica (1736). Although some researchers used the letter $c$ in the subsequent years, the letter $e$ was more common and eventually became standard.

In mathematics, the most common typographical convention is to typeset the constant as "$e$", in italics, although sometimes "e" in roman is used. On the other hand, the ISO 80000-2:2019 standard recommends typesetting constants in an upright style.

Applications

Compound interest

Jacob Bernoulli discovered this constant in 1683, while studying a question about compound interest: an account starts with $1.00 and pays 100 percent interest per year; how does the year-end value change if the interest is computed and credited more frequently during the year?

If the interest is credited twice in the year, the interest rate for each 6 months will be 50%, so the initial $1 is multiplied by 1.5 twice, yielding $1.00 × 1.5^2 = $2.25 at the end of the year. Compounding quarterly yields $1.00 × 1.25^4 = $2.4414..., and compounding monthly yields $1.00 × (1 + 1/12)^12 = $2.613035.... If there are $n$ compounding intervals, the interest for each interval will be 100%/$n$ and the value at the end of the year will be $1.00 × (1 + 1/n)^n.

Bernoulli noticed that this sequence approaches a limit (the force of interest) with larger $n$ and, thus, smaller compounding intervals. Compounding weekly ($n = 52$) yields $2.692597..., while compounding daily ($n = 365$) yields $2.714567... (approximately two cents more). The limit as $n$ grows large is the number that came to be known as $e$. That is, with continuous compounding, the account value will reach $2.718281828...

More generally, an account that starts at $1 and offers an annual interest rate of $R$ will, after $t$ years, yield $e^{Rt}$ dollars with continuous compounding. (Note here that $R$ is the decimal equivalent of the rate of interest expressed as a percentage, so for 5% interest, $R = 5/100 = 0.05$.)

Bernoulli trials

The number $e$ itself also has applications in probability theory, in a way that is not obviously related to exponential growth. Suppose that a gambler plays a slot machine that pays out with a probability of one in $n$ and plays it $n$ times. Then, for large $n$, the probability that the gambler will lose every bet is approximately $1/e$.
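Both of these calculations are easy to reproduce numerically. The following Python sketch tabulates Bernoulli's year-end value $(1 + 1/n)^n$ for the compounding frequencies mentioned above, evaluates continuous compounding $e^{Rt}$ (here $R = 0.05$ as in the 5% note above, while $t = 10$ years is an arbitrary choice for the example), and finishes with the gambler's losing probability $(1 - 1/n)^n$ discussed in the next paragraph.

```python
import math

# Bernoulli's compound-interest calculation: $1.00 at 100% annual interest,
# credited n times per year, grows to (1 + 1/n)**n dollars by the end of the year.
# As n increases, the year-end value approaches e = 2.718281828...
for n in (1, 2, 4, 12, 52, 365, 10_000, 1_000_000):
    print(f"n = {n:>9}: ${(1 + 1 / n) ** n:.6f}")

# Continuous compounding of $1 at annual rate R for t years yields e**(R*t) dollars.
# R = 0.05 matches the 5% example above; t = 10 years is an illustrative choice.
R, t = 0.05, 10
print(f"${math.exp(R * t):.6f}")

# The gambler's probability of losing all n bets, (1 - 1/n)**n, tends to 1/e.
for n in (20, 100, 10_000):
    print(n, (1 - 1 / n) ** n)   # 0.3585..., 0.3660..., 0.3678...
print(1 / math.e)                # 0.36787944117144233
```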
For $n = 20$, this is already approximately 1/2.79. This is an example of a Bernoulli trial process. Each time the gambler plays the slots, there is a one in $n$ chance of winning. Playing $n$ times is modeled by the binomial distribution, which is closely related to the binomial theorem and Pascal's triangle. The probability of winning $k$ times out of $n$ trials is:

$$\binom{n}{k}\left(\frac{1}{n}\right)^k\left(1-\frac{1}{n}\right)^{n-k}.$$

In particular, the probability of winning zero times ($k = 0$) is

$$\left(1-\frac{1}{n}\right)^n.$$

The limit of the above expression, as $n$ tends to infinity, is precisely $1/e$.

Standard normal distribution

The normal distribution with zero mean and unit standard deviation is known as the standard normal distribution, given by the probability density function

$$\phi(x) = \frac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}x^2}.$$

The constraint of unit variance (and thus also unit standard deviation) results in the $\tfrac{1}{2}$ in the exponent, and the constraint of unit total area under the curve results in the factor $\tfrac{1}{\sqrt{2\pi}}$. This function is symmetric around $x = 0$, where it attains its maximum value $\tfrac{1}{\sqrt{2\pi}}$, and has inflection points at $x = \pm 1$.

Derangements

Another application of $e$, also discovered in part by Jacob Bernoulli along with Pierre Remond de Montmort, is in the problem of derangements, also known as the hat check problem: $n$ guests are invited to a party, and at the door, the guests all check their hats with the butler, who in turn places the hats into $n$ boxes, each labelled with the name of one guest. But the butler has not asked the identities of the guests, and so he puts the hats into boxes selected at random. The problem of de