There is little mention of Edo in early historical records, apart from a few settlements in the area. The name Edo first appears in the Azuma Kagami chronicles and was probably in use for the area since the second half of the Heian period. Edo's development started in the late 11th century with the Chichibu clan, a branch of the Taira clan, coming from the banks of the then-Iruma River, the present-day upstream portion of the Arakawa River. A descendant of the head of the Chichibu clan settled in the area, took the name Edo, likely based on the name used for the place, and founded the Edo clan. Edo Shigetsugu built a fortified residence, probably around the tip of the Musashino terrace, which would become Edo Castle. Shigetsugu's son, Edo Shigenaga, took the Taira's side against Minamoto no Yoritomo in 1180 but eventually surrendered to Minamoto and became a gokenin for the Kamakura shogunate. At the fall of the shogunate in the 14th century, the Edo clan took the side of the Southern Court, and its influence declined during the Muromachi period. In 1456, Ōta Sukenaga, a vassal of the Ōgigayatsu branch of the Uesugi clan, started to build a castle on the former fortified residence of the Edo clan; he later took the Buddhist name Ōta Dōkan. Dōkan lived in this castle until his assassination in 1486. Under Dōkan, with good water connections to Kamakura, Odawara and other parts of the Kantō region and the country, Edo expanded into a jōkamachi (castle town), with the castle bordering a cove opening into Edo Bay (the current Hibiya Park) and the town developing along the Hirakawa River, which flowed into the cove, as well as on the stretch of land on the eastern side of the cove (roughly where current Tokyo Station is). Some priests and scholars fleeing Kyoto after the Ōnin War came to Edo during that period. After the death of Dōkan, the castle became one of the strongholds of the Uesugi clan, which fell to the Later Hōjō clan at the battle of Takanawahara in 1524, during the expansion of their rule over the Kantō area. When the Hōjō clan was finally defeated by Toyotomi Hideyoshi in 1590, the Kantō area was given to Toyotomi's senior officer Tokugawa Ieyasu to rule, and he took up residence in Edo.

Tokugawa era

Tokugawa Ieyasu emerged as the paramount warlord of the Sengoku period following his victory at the Battle of Sekigahara in October 1600. He formally founded the Tokugawa shogunate in 1603 and established his headquarters at Edo Castle. Edo became the center of political power and de facto capital of Japan, although the historic capital of Kyoto remained the de jure capital as the seat of the emperor. Edo transformed from a fishing village in Musashi Province in 1457 into the largest metropolis in the world, with an estimated population of 1,000,000 by 1721.

Edo was repeatedly and regularly devastated by fires, the Great Fire of Meireki in 1657 being the most disastrous, with an estimated 100,000 victims and a vast portion of the city completely burnt. At the time, the population of Edo was around 300,000, and the impact of the fire was tremendous. The fire destroyed the central keep of Edo Castle, which was never rebuilt, and it influenced subsequent urban planning to make the city more resilient, with many empty areas to break spreading fires and wider streets. Reconstruction efforts expanded the city east of the Sumida River, and some daimyō residences were relocated to give more space to the city, especially in the direct vicinity of the shogun's residence, giving birth to a large green space beside the castle, the present-day Fukiage gardens of the Imperial Palace.
During the Edo period, there were about 100 major fires, mostly begun by accident and often quickly escalating and spreading through neighborhoods of wooden nagaya, which were heated with charcoal fires. In 1868, the Tokugawa shogunate was overthrown in the Meiji Restoration by supporters of Emperor Meiji and his Imperial Court in Kyoto, ending Edo's status as the de facto capital of Japan. However, the new Meiji government soon renamed Edo to Tōkyō (東京, "Eastern Capital"), and the city became the formal capital of Japan when the emperor moved his residence there.

Urbanism

Very quickly after its inception, the shogunate undertook major works in Edo that drastically changed the topography of the area, notably under the nationwide program of major civil works involving the now pacified daimyō workforce. The Hibiya cove facing the castle was soon filled after the arrival of Ieyasu, the Hirakawa River was diverted, and several protective moats and logistical canals were dug (including the Kanda River) to limit the risks of flooding. Landfill works on the bay began, with several areas reclaimed over the duration of the shogunate (notably the Tsukiji area). East of the city and of the Sumida River, a massive network of canals was dug. Fresh water was a major issue, as direct wells would provide brackish water because of the location of the city over an estuary. The few fresh water sources proved insufficient, and a system of aqueducts was eventually built to supply the growing city.

Kyoto was the home of the imperial court, its court nobles, its Buddhist temples and its history; Osaka was the country's commercial center, dominated by the chōnin, or merchant class. By contrast, samurai and daimyō residences occupied up to 70% of the area of Edo. On the east and northeast sides of the castle lived the shomin (commoners), including the chōnin, in a much more densely populated area than the samurai-class areas, organized in a series of gated communities called machi (町, "town" or "village"). This area, Shitamachi (下町, "lower town" or "lower towns"), was the center of urban and merchant culture. Shomin also lived along the main roads leading in and out of the city. The Sumida River, then called the Great River (大川, Ōkawa), ran on the eastern side of the city, where the shogunate's official rice-storage warehouses and other official buildings were located.

The Nihonbashi bridge marked the center of the city's commerce and the starting point of the gokaidō (thus making it the de facto "center of the country"). Fishermen, craftsmen and other producers and retailers operated here. Shippers managed ships known as tarubune to and from Osaka and other cities, bringing goods into the city or transferring them from sea routes to river barges or land routes. The northeastern corner of the city was considered dangerous in the traditional onmyōdō cosmology and was protected from evil by a number of temples, including Sensō-ji and Kan'ei-ji, one of the two tutelary bodaiji temples of the Tokugawa. A path and a canal a short distance north of Sensō-ji extended west from the Sumida riverbank, leading along the northern edge of the city to the Yoshiwara pleasure district. Previously located near Ningyōchō, the district was rebuilt in this more remote location after the great fire of Meireki. Danzaemon, the hereditary head of the eta, or outcasts, who performed "unclean" work in the city, resided nearby. Temples and shrines occupied roughly 15% of the surface of the city, equivalent to the living areas of the townspeople, but with only about a tenth of their population. They were spread out over the city.
Besides the large concentration in the northeast to protect the city, the second bodaiji of the Tokugawa, Zōjō-ji, occupied a large area south of the castle.

Housing

Military caste

The samurai and daimyō residences varied dramatically in size depending on their status. Some daimyō could have several residences in Edo. The upper residence (kami-yashiki) was the main residence while the lord was in Edo and was used for official duties. It was not necessarily the largest of his residences, but the most convenient for commuting to the castle. The middle residence (naka-yashiki), a bit further from the castle, could house the heir of the lord, or his servants from his fief when he was in Edo for the sankin-kōtai, or be a hiding residence if needed. The lower residence (shimo-yashiki), if there was any, was on the outskirts of town, more of a pleasure retreat with gardens. The lower residence could also be used as a retreat for the lord if a fire had devastated the city. Some of the powerful daimyō residences occupied vast grounds of several dozen hectares.

Shomin

In the strict sense of the word, chōnin were only the townspeople who owned their residence; they were actually a minority. The shomin population mainly lived in semi-collective housing called nagaya (長屋, "longhouses"), multi-room wooden dwellings organized in enclosed machi with communal facilities such as wells connected to the city's fresh water distribution system, garbage collection areas and communal bathrooms. A typical machi was of rectangular shape and could have a population of several hundred. The machi had a curfew at night, with guarded gates (kido) that closed after dark and opened onto the machi's main street. Two-story buildings and larger shops, reserved for the higher-ranking members of society, faced the main street. A machi would typically follow a grid pattern, with smaller side streets opening on the main street, also with (sometimes) two-story buildings, the shop on the first floor and living quarters on the second floor, for the more well-off residents. Very narrow alleys, accessible through small gates, ran deeper inside the machi, where the single-story back nagaya, the uranagaya, were located. Rentals and smaller rooms for lower-ranked shomin were located in those back housings. Edo was nicknamed the "city of 808 machi" (八百八町), depicting the large number and diversity of those communities, but the actual number was closer to 1,700 by the 18th century.

Government and administration

Edo's municipal government was under the responsibility of the rōjū, the senior officials who oversaw the entire bakufu, the government of the Tokugawa shogunate. The administrative definition of Edo was called the Gofunai (御府内). The kanjō-bugyō (finance commissioners) were responsible for the financial matters of the shogunate, whereas the jisha-bugyō handled matters related to shrines and temples. The machi-bugyō were samurai officials (at the very beginning of the shogunate daimyō, later hatamoto) appointed to keep order in the city, the word designating the heading magistrate, the magistrature and its organization alike. They were in charge of Edo's day-to-day administration, combining the roles of police, judge and fire brigade. There were two offices, the South Machi-Bugyō and the North Machi-Bugyō, which, despite their names, had the same geographical jurisdiction and rotated duty on a monthly basis. Despite their extensive responsibilities, the teams of the machi-bugyō were rather small, with two offices of 125 people each. The machi-bugyō did not have jurisdiction over the samurai residential areas, which remained under the shogunate's direct rule.
The geographical jurisdiction of the machi-bugyō did not exactly coincide with the Gofunai, creating some complexity in the handling of the city's affairs. The machi-bugyō oversaw the numerous machi where the shomin lived.
Oxygen balance (OB% or Ω)

Oxygen balance is an expression that is used to indicate the degree to which an explosive can be oxidized. If an explosive molecule contains just enough oxygen to convert all of its carbon to carbon dioxide, all of its hydrogen to water, and all of its metal to metal oxide with no excess, the molecule is said to have a zero oxygen balance. The molecule is said to have a positive oxygen balance if it contains more oxygen than is needed and a negative oxygen balance if it contains less oxygen than is needed. The sensitivity, strength, and brisance of an explosive are all somewhat dependent upon oxygen balance and tend to approach their maxima as oxygen balance approaches zero (a short computational sketch follows the list of oxidizer-fuel mixtures below). Oxygen balance applies to traditional explosives mechanics with the assumption that carbon is oxidized to carbon monoxide and carbon dioxide during detonation. In what seems like a paradox to an explosives expert, Cold Detonation Physics (CDP) uses carbon in its most highly oxidized state, carbon dioxide, as the source of oxygen. Oxygen balance therefore either does not apply to a CDP formulation or must be calculated without including the carbon in the carbon dioxide.

Chemical composition

A chemical explosive may consist of either a chemically pure compound, such as nitroglycerin, or a mixture of a fuel and an oxidizer, such as black powder or grain dust and air.

Pure compounds

Some chemical compounds are unstable in that, when shocked, they react, possibly to the point of detonation. Each molecule of the compound dissociates into two or more new molecules (generally gases) with the release of energy.

Nitroglycerin: A highly unstable and sensitive liquid
Acetone peroxide: A very unstable white organic peroxide
TNT: Yellow insensitive crystals that can be melted and cast without detonation
Cellulose nitrate: A nitrated polymer which can be a high or low explosive depending on nitration level and conditions
RDX, PETN, HMX: Very powerful explosives which can be used pure or in plastic explosives
C-4 (or Composition C-4): An RDX plastic explosive plasticized to be adhesive and malleable

The above compositions may describe most of the explosive material, but a practical explosive will often include small percentages of other substances. For example, dynamite is a mixture of highly sensitive nitroglycerin with sawdust, powdered silica, or most commonly diatomaceous earth, which act as stabilizers. Plastics and polymers may be added to bind powders of explosive compounds; waxes may be incorporated to make them safer to handle; aluminium powder may be introduced to increase total energy and blast effects. Explosive compounds are also often "alloyed": HMX or RDX powders may be mixed (typically by melt-casting) with TNT to form Octol or Cyclotol.

Oxidized fuel

An oxidizer is a pure substance (molecule) that in a chemical reaction can contribute some atoms of one or more oxidizing elements, in which the fuel component of the explosive burns. On the simplest level, the oxidizer may itself be an oxidizing element, such as gaseous or liquid oxygen.

Black powder: Potassium nitrate, charcoal and sulfur
Flash powder: Fine metal powder (usually aluminium or magnesium) and a strong oxidizer (e.g. potassium chlorate or perchlorate)
Ammonal: Ammonium nitrate and aluminium powder
Armstrong's mixture: Potassium chlorate and red phosphorus. This is a very sensitive mixture. It is a primary high explosive in which sulfur is substituted for some or all of the phosphorus to slightly decrease sensitivity.
Cold Detonation Physics: Combinations of carbon dioxide in the form of dry ice (an untraditional oxygen source) and powdered reducing agents (fuel) like magnesium and aluminium
Sprengel explosives: A very general class incorporating any strong oxidizer and highly reactive fuel, although in practice the name was most commonly applied to mixtures of chlorates and nitroaromatics
ANFO: Ammonium nitrate and fuel oil
Cheddites: Chlorates or perchlorates and oil
Oxyliquits: Mixtures of organic materials and liquid oxygen
Panclastites: Mixtures of organic materials and dinitrogen tetroxide
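To make the oxygen-balance bookkeeping concrete, here is a minimal sketch (our illustration, not from the article; the helper name is ours) of the common textbook formula OB% = -1600 (2X + Y/2 + M - Z) / MW for a compound with X carbon, Y hydrogen, Z oxygen and M metal atoms, assuming carbon goes to CO2, hydrogen to H2O and each metal atom to a monoxide:

    # Hypothetical helper: oxygen balance (OB%) of a C/H/N/O (+ metal) explosive.
    # Assumes full oxidation: C -> CO2, H -> H2O, one O per metal atom (MO).
    def oxygen_balance(mol_weight, carbons, hydrogens, oxygens, metals=0):
        """OB% = -1600 * (2X + Y/2 + M - Z) / MW."""
        oxygen_deficit = 2 * carbons + hydrogens / 2 + metals - oxygens
        return -1600 * oxygen_deficit / mol_weight

    # TNT, C7H5N3O6 (MW ~227.13 g/mol): strongly negative balance.
    print(round(oxygen_balance(227.13, 7, 5, 6), 1))   # -> -74.0
    # Nitroglycerin, C3H5N3O9 (MW ~227.09 g/mol): slightly positive balance.
    print(round(oxygen_balance(227.09, 3, 5, 9), 1))   # -> 3.5

The two printed values match the figures usually quoted for TNT (about -74%) and nitroglycerin (about +3.5%), illustrating why nitroglycerin-based formulations sit much closer to the zero-balance optimum than TNT.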
Availability and cost

The availability and cost of explosives are determined by the availability of the raw materials and the cost, complexity, and safety of the manufacturing operations.

Classification

By sensitivity

Primary

A primary explosive is an explosive that is extremely sensitive to stimuli such as impact, friction, heat, static electricity, or electromagnetic radiation. Some primary explosives are also known as contact explosives. A relatively small amount of energy is required for initiation. As a very general rule, primary explosives are considered to be those compounds that are more sensitive than PETN. As a practical measure, primary explosives are sufficiently sensitive that they can be reliably initiated with a blow from a hammer; however, PETN can also usually be initiated in this manner, so this is only a very broad guideline. Additionally, several compounds, such as nitrogen triiodide, are so sensitive that they cannot even be handled without detonating. Nitrogen triiodide is so sensitive that it can be reliably detonated by exposure to alpha radiation; it is the only explosive for which this is true. Primary explosives are often used in detonators or to trigger larger charges of less sensitive secondary explosives. Primary explosives are commonly used in blasting caps and percussion caps to translate a physical shock signal. In other situations, different signals such as electrical or physical shock, or, in the case of laser detonation systems, light, are used to initiate an action, i.e., an explosion. A small quantity, usually milligrams, is sufficient to initiate a larger charge of explosive that is usually safer to handle.
Examples of primary high explosives are:

Acetone peroxide
Alkali metal ozonides
Ammonium permanganate
Ammonium chlorate
Azidotetrazolates
Azoclathrates
Benzoyl peroxide
Benzvalene
3,5-Bis(trinitromethyl)tetrazole
Chlorine oxides
Copper(I) acetylide
Copper(II) azide
Cumene hydroperoxide
CXP, cycloprop(-2-)enyl nitrate (or CPN)
Cyanogen azide
Cyanuric triazide
Diacetyl peroxide
1-Diazidocarbamoyl-5-azidotetrazole
Diazodinitrophenol
Diazomethane
Diethyl ether peroxide
4-Dimethylaminophenylpentazole
Disulfur dinitride
Ethyl azide
Explosive antimony
Fluorine perchlorate
Fulminic acid
Halogen azides: fluorine azide, chlorine azide, bromine azide, iodine azide
Hexamethylene triperoxide diamine
Hydrazoic acid
Hypofluorous acid
Lead azide
Lead styphnate
Lead picrate
Manganese heptoxide
Mercury(II) fulminate
Mercury nitride
Methyl ethyl ketone peroxide
Nickel hydrazine nitrate
Nickel hydrazine perchlorate
Nitrogen trihalides: nitrogen trichloride, nitrogen tribromide, nitrogen triiodide
Nitroglycerin
Nitronium perchlorate
Nitrosyl perchlorate
Nitrotetrazolate-N-oxides
Octaazacubane
Pentazenium hexafluoroarsenate
Peroxy acids
Peroxymonosulfuric acid
Selenium tetraazide
Silicon tetraazide
Silver azide
Silver acetylide
Silver fulminate
Silver nitride
Tellurium tetraazide
tert-Butyl hydroperoxide
Tetraamine copper complexes
Tetraazidomethane
Tetrazene explosive
Tetranitratoxycarbon
Tetrazoles
Titanium tetraazide
Triazidomethane
Oxides of xenon: xenon dioxide, xenon oxytetrafluoride, xenon tetroxide, xenon trioxide

Secondary

A secondary explosive is less sensitive than a primary explosive and requires substantially more energy to be initiated. Because they are less sensitive, they are usable in a wider variety of applications and are safer to handle and store. Secondary explosives are used in larger quantities in an explosive train and are usually initiated by a smaller quantity of a primary explosive. Examples of secondary explosives include TNT and RDX.

Tertiary

Tertiary explosives, also called blasting agents, are so insensitive to shock that they cannot be reliably detonated by practical quantities of primary explosive, and instead require an intermediate explosive booster of secondary explosive. These are often used for safety and the typically lower costs of material and handling. The largest consumers are large-scale mining and construction operations. Most tertiaries include a fuel and an oxidizer. ANFO can be a tertiary explosive if its reaction rate is slow.

By velocity

Low

Low explosives are compounds wherein the rate of decomposition proceeds through the material at less than the speed of sound. The decomposition is propagated by a flame front (deflagration) which travels much more slowly through the explosive material than a shock wave of a high explosive. Under normal conditions, low explosives undergo deflagration at rates that vary from a few centimetres per second to approximately . It is possible for them to deflagrate very quickly, producing an effect similar to a detonation. This can happen under higher pressure (such as when gunpowder deflagrates inside the confined space of a bullet casing, accelerating the bullet to well beyond the speed of sound) or temperature. A low explosive is usually a mixture of a combustible substance and an oxidant that decomposes rapidly (deflagration); however, they burn more slowly than a high explosive, which has an extremely fast burn rate. Low explosives are normally employed as propellants.
Included in this group are petroleum products such as propane and gasoline, gunpowder (including smokeless powder), and light pyrotechnics, such as flares and fireworks; low explosives can also replace high explosives in certain applications (see gas pressure blasting).

High

High explosives (HE) are explosive materials that detonate, meaning that the explosive shock front passes through the material at a supersonic speed. High explosives detonate with explosive velocity of about . For instance, TNT has a detonation (burn) rate of approximately 5.8 km/s (19,000 feet per second), detonating cord 6.7 km/s (22,000 feet per second), and C-4 about 8.5 km/s (29,000 feet per second). They are normally employed in mining, demolition, and military applications. They can be divided into two classes differentiated by sensitivity: primary explosives and secondary explosives. The term high explosive is in contrast with the term low explosive, which explodes (deflagrates) at a lower rate. Countless high-explosive compounds are chemically possible, but commercially and militarily important ones have included NG, TNT, TNX, RDX, HMX, PETN, TATB, and HNS.

By physical form

Explosives are often characterized by the physical form in which they are produced or used. These use forms are commonly categorized as:

Pressings
Castings
Plastic or polymer bonded
Plastic explosives, a.k.a. putties
Rubberized
Extrudable
Binary
Blasting agents
Slurries and gels
Dynamites

Shipping label classifications

Shipping labels and tags may include both United Nations and national markings. United Nations markings include numbered Hazard Class and Division (HC/D) codes and alphabetic Compatibility Group codes. Though the two are related, they are separate and distinct: any Compatibility Group designator can be assigned to any Hazard Class and Division. An example of this hybrid marking would be a consumer firework, which is labeled as 1.4G or 1.4S (a small parsing sketch follows the compatibility-group list below). Examples of national markings would include United States Department of Transportation (U.S. DOT) codes.

United Nations (UN) GHS Hazard Class and Division

The UN GHS Hazard Class and Division (HC/D) is a numeric designator within a hazard class indicating the character, predominance of associated hazards, and potential for causing personnel casualties and property damage. It is an internationally accepted system that communicates, using the minimum amount of markings, the primary hazard associated with a substance. Listed below are the Divisions for Class 1 (Explosives):

1.1 Mass detonation hazard. With HC/D 1.1, it is expected that if one item in a container or pallet inadvertently detonates, the explosion will sympathetically detonate the surrounding items. The explosion could propagate to all or the majority of the items stored together, causing a mass detonation. There will also be fragments from the item's casing and/or structures in the blast area.
1.2 Non-mass explosion, fragment-producing. HC/D 1.2 is further divided into three subdivisions, HC/D 1.2.1, 1.2.2 and 1.2.3, to account for the magnitude of the effects of an explosion.
1.3 Mass fire, minor blast or fragment hazard. Propellants and many pyrotechnic items fall into this category. If one item in a package or stack initiates, it will usually propagate to the other items, creating a mass fire.
1.4 Moderate fire, no blast or fragment. HC/D 1.4 items are listed in the table as explosives with no significant hazard. Most small arms ammunition (including loaded weapons) and some pyrotechnic items fall into this category.
If the energetic material in these items inadvertently initiates, most of the energy and fragments will be contained within the storage structure or the item containers themselves.
1.5 Mass detonation hazard, very insensitive.
1.6 Detonation hazard without mass detonation hazard, extremely insensitive.

To see an entire UNO table, browse Paragraphs 3-8 and 3-9 of NAVSEA OP 5, Vol. 1, Chapter 3.

Class 1 Compatibility Group

Compatibility Group codes are used to indicate storage compatibility for HC/D Class 1 (explosive) materials. Letters are used to designate 13 compatibility groups as follows.

A: Primary explosive substance (1.1A).
B: Article containing a primary explosive substance and not containing two or more effective protective features. Some articles, such as detonator assemblies for blasting and primers, cap-type, are included (1.1B, 1.2B, 1.4B).
C: Propellant explosive substance or other deflagrating explosive substance or article containing such explosive substance (1.1C, 1.2C, 1.3C, 1.4C). These are bulk propellants, propelling charges, and devices containing propellants with or without means of ignition. Examples include single-based, double-based, triple-based and composite propellants, solid propellant rocket motors, and ammunition with inert projectiles.
D: Secondary detonating explosive substance or black powder or article containing a secondary detonating explosive substance, in each case without means of initiation and without a propelling charge, or article containing a primary explosive substance and containing two or more effective protective features (1.1D, 1.2D, 1.4D, 1.5D).
E: Article containing a secondary detonating explosive substance without means of initiation, with a propelling charge (other than one containing flammable liquid, gel or hypergolic liquid) (1.1E, 1.2E, 1.4E).
F: Article containing a secondary detonating explosive substance with its means of initiation, with a propelling charge (other than one containing flammable liquid, gel or hypergolic liquid) or without a propelling charge (1.1F, 1.2F, 1.3F, 1.4F).
G: Pyrotechnic substance or article containing a pyrotechnic substance, or article containing both an explosive substance and an illuminating, incendiary, tear-producing or smoke-producing substance (other than a water-activated article or one containing white phosphorus, phosphide or flammable liquid or gel or hypergolic liquid) (1.1G, 1.2G, 1.3G, 1.4G). Examples include flares, signals, incendiary or illuminating ammunition and other smoke- and tear-producing devices.
H: Article containing both an explosive substance and white phosphorus (1.2H, 1.3H). These articles will spontaneously combust when exposed to the atmosphere.
J: Article containing both an explosive substance and flammable liquid or gel (1.1J, 1.2J, 1.3J). This excludes liquids or gels which are spontaneously flammable when exposed to water or the atmosphere, which belong in group H. Examples include liquid- or gel-filled incendiary ammunition, fuel-air explosive (FAE) devices, and flammable-liquid-fueled missiles.
K: Article containing both an explosive substance and a toxic chemical agent (1.2K, 1.3K).
L: Explosive substance or article containing an explosive substance and presenting a special risk (e.g., due to water-activation or presence of hypergolic liquids, phosphides, or pyrophoric substances) needing isolation of each type (1.1L, 1.2L, 1.3L). Damaged or suspect ammunition of any group belongs in this group.
N: Articles containing only extremely insensitive detonating substances (1.6N).
S: Substance or article so packed or designed that any hazardous effects arising from accidental functioning are limited to the extent that they do not significantly hinder or prohibit fire fighting or other emergency response efforts in the immediate vicinity of the package (1.4S).
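Because the marking format is compact and regular, it can be split mechanically. The sketch below (ours, not an official tool; it ignores the 1.2.x subdivisions) parses a Class 1 label such as "1.4G" into its division digit and optional compatibility group letter, accepting exactly the thirteen group letters listed above:

    import re

    # Hypothetical parser for UN Class 1 markings like "1.4G" or "1.1D".
    # Letter set matches the 13 groups above: A-H, J, K, L, N, S.
    _MARKING = re.compile(r"^1\.(?P<division>[1-6])(?P<group>[A-HJKLNS])?$")

    def parse_class1_marking(label):
        """Return (division, compatibility_group) for a Class 1 label."""
        m = _MARKING.match(label.strip().upper())
        if m is None:
            raise ValueError("not a UN Class 1 explosive marking: %r" % label)
        return int(m.group("division")), m.group("group")

    print(parse_class1_marking("1.4G"))  # (4, 'G') - the consumer-firework example
    print(parse_class1_marking("1.1D"))  # (1, 'D') - e.g. bulk secondary explosive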
Regulation

The legality of possessing or using explosives varies by jurisdiction. Various countries around the world have enacted explosives laws and require licenses to manufacture, distribute, store, use, or possess explosives or their ingredients.

Netherlands

In the Netherlands, the civil and commercial use of explosives is covered under the Wet explosieven voor civiel gebruik (Explosives for Civil Use Act), in accordance with EU directive nr. 93/15/EEG (Dutch). The illegal use of explosives is covered under the Wet Wapens en Munitie (Weapons and Munition Act) (Dutch).

UK

The Explosives Regulations 2014 (ER 2014) came into force on 1 October 2014 and provides the statutory definition of "explosive".

United States

During World War I, numerous laws were created to regulate war-related industries and increase security within the United States. In 1917, the 65th United States Congress created many laws, including the Espionage Act of 1917 and the Explosives Act of 1917. The Explosives Act of 1917 (session 1, chapter 83) was signed on 6 October 1917 and went into effect on 16 November 1917. The legal summary is "An Act to prohibit the manufacture, distribution, storage, use, and possession in time of war of explosives, providing regulations for the safe manufacture, distribution, storage, use, and possession of the same, and for other purposes". This was the first federal regulation of licensing explosives purchases. The act was deactivated after World War I ended. After the United States entered World War II, the Explosives Act of 1917 was reactivated. In 1947, the act was deactivated by President Truman. The Organized Crime Control Act of 1970 transferred many explosives regulations to the Bureau of Alcohol, Tobacco and Firearms (ATF) of the Department of the Treasury. The bill became effective in 1971. Currently, regulations are governed by Title 18 of the United States Code and Title 27 of the Code of Federal Regulations: "Importation, Manufacture, Distribution and Storage of Explosive Materials" (18 U.S.C. Chapter 40) and "Commerce in Explosives" (27 C.F.R. Chapter II, Part 555). Many states restrict the possession, sale, and use of explosives:

Alabama Code Title 8 Chapter 17 Article 9
Alaska State Code Chapter 11.61.240 & 11.61.250
Arizona State Code Title 13 Chapter 31 Articles 01 through 19
Arkansas State Code Title 5 Chapter 73 Article 108
California Penal Code Title 2 Division 5
Colorado (Colorado statutes are copyrighted and require purchase before reading.)
Connecticut Statutes Volume 9 Title 29 Chapters 343-355
Delaware Code Title 16 Part VI Chapters 70 & 71
Florida Statutes Title XXXIII Chapter 552
Georgia Code Title 16 Chapter 7 Articles 64-97 (Repealed by Ga. L. 1996)
Hawaii Administrative Rules Title 12 Subtitle 8 Part 1 Chapter 58 and Hawaii Revised Statutes
Illinois Explosives Act 225 ILCS 210
Michigan Penal Code Chapter XXXIII Section 750.200 – 750.212a
Minnesota
Mississippi Code Title 45 Chapter 13 Article 3 Section 101–109
New York: Health and safety regulations restrict the quantity of black powder a person may store and transport.
Wisconsin Chapter 941 Subchapter 4-31

List of compounds

Acetylides: CUA, DCA, AGA
Fulminates: HCNO, AUF, HGF, PTF, KF, AGF
Nitro compounds
Mononitro: NGA, NE, NM, NP, NS, NU
Dinitro: DDNP, DNB, DNEU, DNN, DNP, DNPA, DNPH, DNR, DNPD, DNC, DPS, DPA, EDNP, KDNBF, BEAF
Trinitro: RDX, DATB, TATB, PBS, PBP, TNAL, TNAS, TNB, TNBA, TNC, MC, TNEF, TNOC, TNOF, TNP, TNT, TNN, TNPG, TNR, BTNEN, BTNEC, SA, API, TNS
Tetranitro: Tetryl
Octanitro: ONC
Nitrates
Mononitrates: AN, BAN, CAN, MAN, NAN, UN
Dinitrates: DEGDN, EDDN, EDNA, EGDN, HDN, TEGDN, TAOM
Trinitrates: BTTN, TMOTN, NG
Tetranitrates: ETN, PETN, TNOC
Pentanitrates: XPN
Hexanitrates: CHN, MHN
Amines
Tertiary amines: NTBR, NTCL, NTI, NTS, SEN, AGN
Diamines: DSDN
Azides: CNA, CYA, CLA, CUA, EA, FA, HA, PBA, AGA, NAA, RBA, SEA, SIA, TEA, TAM, TIA
Tetramines: TZE, TZO, AA
Pentamines: PZ
Octamines: OAC, ATA
Peroxides: AP (TATP), CHP, DAP, DBP, DEP, HMTD, MEKP, TBHP
Oxides: XOTF, XDIO, XTRO, XTEO
Unsorted: Alkali metal ozonides, Ammonium chlorate, Ammonium perchlorate, Ammonium permanganate, Azidotetrazolates, Azoclathrates, Benzvalene, Chlorine oxides, DMAPP, Fluorine perchlorate, Fulminating gold, Fulminating silver (several substances), Hexafluoroarsenate, Hypofluorous acid, Manganese heptoxide, Mercury nitride, Nitronium perchlorate, Nitrotetrazolate-N-oxides, Peroxy acids, Peroxymonosulfuric acid, Tetramine copper complexes

The speed with which an explosive decomposes influences the yield of the energy transmitted, for both atmospheric over-pressure and ground acceleration. By definition, a "low explosive", such as black powder or smokeless gunpowder, has a burn rate of 171–631 m/s. In contrast, a "high explosive", whether a primary, such as detonating cord, or a secondary, such as TNT or C-4, has a significantly higher burn rate.

Stability

Stability is the ability of an explosive to be stored without deterioration. The following factors affect the stability of an explosive:

Chemical constitution. In the strictest technical sense, the word "stability" is a thermodynamic term referring to the energy of a substance relative to a reference state or to some other substance. However, in the context of explosives, stability commonly refers to ease of detonation, which is concerned with kinetics (i.e., rate of decomposition). It is perhaps best, then, to differentiate between the terms thermodynamically stable and kinetically stable by referring to the former as "inert". Contrarily, a kinetically unstable substance is said to be "labile". It is generally recognized that certain groups like nitro (–NO2), nitrate (–ONO2), and azide (–N3) are intrinsically labile. Kinetically, there exists a low activation barrier to the decomposition reaction. Consequently, these compounds exhibit high sensitivity to flame or mechanical shock. The chemical bonding in these compounds is characterized as predominantly covalent, and thus they are not thermodynamically stabilized by a high ionic-lattice energy. Furthermore, they generally have positive enthalpies of formation, and there is little mechanistic hindrance to internal molecular rearrangement to yield the more thermodynamically stable (more strongly bonded) decomposition products. For example, in lead azide, Pb(N3)2, the nitrogen atoms are already bonded to one another, so decomposition into Pb and N2[1] is relatively easy.

Temperature of storage. The rate of decomposition of explosives increases at higher temperatures.
All standard military explosives may be considered to have a high degree of stability at temperatures from –10 to +35 °C, but each has a high temperature at which its rate of decomposition rapidly accelerates and stability is reduced. As a rule of thumb, most explosives become dangerously unstable at temperatures above 70 °C.

Exposure to sunlight. When exposed to the ultraviolet rays of sunlight, many explosive compounds containing nitrogen groups rapidly decompose, affecting their stability.

Electrical discharge. Electrostatic or spark sensitivity to initiation is common in a number of explosives. Static or other electrical discharge may be sufficient to cause a reaction, even detonation, under some circumstances. As a result, safe handling of explosives and pyrotechnics usually requires proper electrical grounding of the operator.

Power, performance, and strength

The term power or performance as applied to an explosive refers to its ability to do work. In practice it is defined as the explosive's ability to accomplish what is intended in the way of energy delivery (i.e., fragment projection, air blast, high-velocity jet, underwater shock and bubble energy, etc.). Explosive power or performance is evaluated by a tailored series of tests to assess the material for its intended use. Of the tests listed below, cylinder expansion and air-blast tests are common to most testing programs, and the others support specific applications.

Cylinder expansion test. A standard amount of explosive is loaded into a long hollow cylinder, usually of copper, and detonated at one end. Data is collected concerning the rate of radial expansion of the cylinder and the maximum cylinder wall velocity. This also establishes the Gurney energy or 2E.
Cylinder fragmentation. A standard steel cylinder is loaded with explosive and detonated in a sawdust pit. The fragments are collected and the size distribution analyzed.
Detonation pressure (Chapman–Jouguet condition). Detonation pressure data is derived from measurements of shock waves transmitted into water by the detonation of cylindrical explosive charges of a standard size.
Determination of critical diameter. This test establishes the minimum physical size a charge of a specific explosive must be to sustain its own detonation wave. The procedure involves the detonation of a series of charges of different diameters until difficulty in detonation wave propagation is observed.
Massive-diameter detonation velocity. Detonation velocity is dependent on loading density (c), charge diameter, and grain size. The hydrodynamic theory of detonation used in predicting explosive phenomena does not include the diameter of the charge, and therefore yields a detonation velocity for a charge of massive diameter. This procedure requires the firing of a series of charges of the same density and physical structure, but different diameters, and the extrapolation of the resulting detonation velocities to predict the detonation velocity of a charge of massive diameter.
Pressure versus scaled distance. A charge of a specific size is detonated and its pressure effects measured at a standard distance. The values obtained are compared with those for TNT.
Impulse versus scaled distance. A charge of a specific size is detonated and its impulse (the area under the pressure-time curve) measured as a function of distance. The results are tabulated and expressed as TNT equivalents.
Relative bubble energy (RBE).
A 5 to 50 kg charge is detonated in water and piezoelectric gauges measure peak pressure, time constant, impulse, and energy. The RBE may be defined as RBE = (Kx / Ks)^3, where K is the bubble expansion period for an experimental (x) or a standard (s) charge.

Brisance

In addition to strength, explosives display a second characteristic, which is their shattering effect or brisance (from the French briser, "to break"), which is distinguished and separate from their total work capacity. This characteristic is of practical importance in determining the effectiveness of an explosion in fragmenting shells, bomb casings, grenades, and the like. The rapidity with which an explosive reaches its peak pressure (power) is a measure of its brisance. Brisance values are primarily employed in France and Russia. The sand crush test is commonly employed to determine the relative brisance in comparison to TNT. No test is capable of directly comparing the explosive properties of two or more compounds; it is important to examine the data from several such tests (sand crush, Trauzl, and so forth) in order to gauge relative brisance. True values for comparison require field experiments.

Density

Density of loading refers to the mass of an explosive per unit volume. Several methods of loading are available, including pellet loading, cast loading, and press loading, the choice being determined by the characteristics of the explosive. Dependent upon the method employed, an average density of the loaded charge can be obtained that is within 80–99% of the theoretical maximum density of the explosive. High load density can reduce sensitivity by making the mass more resistant to internal friction. However, if density is increased to the extent that individual crystals are crushed, the explosive may become more sensitive. Increased load density also permits the use of more explosive, thereby increasing the power of the warhead. It is possible to compress an explosive beyond a point of sensitivity, known also as dead-pressing, in which the material is no longer capable of being reliably initiated, if at all.

Volatility

Volatility is the readiness with which a substance vaporizes. Excessive volatility often results in the development of pressure within rounds of ammunition and separation of mixtures into their constituents. Volatility affects the chemical composition of the explosive such that a marked reduction in stability may occur, which results in an increase in the danger of handling.

Hygroscopicity and water resistance

The introduction of water into an explosive is highly undesirable since it reduces the sensitivity, strength, and velocity of detonation of the explosive. Hygroscopicity is a measure of a material's moisture-absorbing tendencies. Moisture affects explosives adversely by acting as an inert material that absorbs heat when vaporized, and by acting as a solvent medium that can cause undesired chemical reactions. Sensitivity, strength, and velocity of detonation are reduced by inert materials that reduce the continuity of the explosive mass. When the moisture content evaporates during detonation, cooling occurs, which reduces the temperature of reaction. Stability is also affected by the presence of moisture since moisture promotes decomposition of the explosive and, in addition, causes corrosion of the explosive's metal container. Explosives considerably differ from one another as to their behavior in the presence of water. Gelatin dynamites containing nitroglycerine have a degree of water resistance.
Explosives based on ammonium nitrate have little or no water resistance, as ammonium nitrate is highly soluble in water and is hygroscopic.

Toxicity

Many explosives are toxic to some extent. Manufacturing inputs can also be organic compounds or hazardous materials that require special handling due to risks (such as carcinogens). The decomposition products, residual solids, or gases of some explosives can be toxic, whereas others are harmless, such as carbon dioxide and water. Examples of harmful by-products are:

Heavy metals, such as lead, mercury, and barium from primers (observed in high-volume firing ranges)
Nitric oxides from TNT
Perchlorates when used in large quantities

"Green explosives" seek to reduce environmental and health impacts. An example is the lead-free primary explosive copper(I) 5-nitrotetrazolate, an alternative to lead azide. One variety of green explosive is the CDP family, whose synthesis does not involve any toxic ingredients, consumes carbon dioxide while detonating and does not release any nitric oxides into the atmosphere when used.

Explosive train

Explosive material may be incorporated in the explosive train of a device or system. An example is a pyrotechnic lead igniting a booster, which causes the main charge to detonate.

Volume of products of explosion

The most widely used explosives are condensed liquids or solids converted to gaseous products by explosive chemical reactions and the energy released by those reactions. The gaseous products of complete reaction are typically carbon dioxide, steam, and nitrogen. Gaseous volumes computed by the ideal gas law tend to be too large at the high pressures characteristic of explosions. Ultimate volume expansion may be estimated at three orders of magnitude, or one liter per gram of explosive (a rough order-of-magnitude check closes this section). Explosives with an oxygen deficit will generate soot or gases like carbon monoxide and hydrogen, which may react with surrounding materials such as atmospheric oxygen. Attempts to obtain more precise volume estimates must consider the possibility of such side reactions, condensation of steam, and aqueous solubility of gases like carbon dioxide.

By comparison, CDP detonation is based on the rapid reduction of carbon dioxide to carbon with an abundant release of energy. Rather than producing typical waste gases like carbon dioxide, carbon monoxide, nitrogen and nitric oxides, the highly energetic reduction of carbon dioxide to carbon vaporizes and pressurizes excess dry ice at the wave front, which is the only gas released from the detonation. The velocity of detonation for CDP formulations can therefore be customized by adjusting the weight percentage of reducing agent and dry ice. CDP detonations produce a large amount of solid materials that can have great commercial value as an abrasive.

Example – CDP detonation reaction with magnesium: X CO2 + 2 Mg → 2 MgO + C + (X−1) CO2

The products of detonation in this example are magnesium oxide, carbon in various phases (including diamond), and vaporized excess carbon dioxide that was not consumed by the amount of magnesium in the explosive formulation.
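To make the stoichiometry of that reaction concrete, the following sketch (ours, not from a CDP reference; the function name is hypothetical) computes the product masses for a given dry-ice/magnesium charge, treating whichever reactant runs out first as limiting and reporting the excess CO2 that would simply vaporize:

    # Illustrative stoichiometry for X CO2 + 2 Mg -> 2 MgO + C + (X-1) CO2.
    M_CO2, M_MG, M_MGO, M_C = 44.01, 24.31, 40.30, 12.01  # molar masses, g/mol

    def cdp_products(grams_co2, grams_mg):
        """Return grams of (MgO, C, excess CO2) for a dry-ice/magnesium charge."""
        mol_co2, mol_mg = grams_co2 / M_CO2, grams_mg / M_MG
        reacted = min(mol_co2, mol_mg / 2)        # 1 CO2 consumes 2 Mg
        return (2 * reacted * M_MGO,              # magnesium oxide
                reacted * M_C,                    # solid carbon (the abrasive)
                (mol_co2 - reacted) * M_CO2)      # unreacted CO2, vaporized

    # 100 g of dry ice with 48.6 g of Mg corresponds to roughly X = 2:
    # half the CO2 is reduced, the other half is released as gas.
    print([round(x, 1) for x in cdp_products(100.0, 48.6)])  # ~[80.6, 12.0, 56.0]

Note how mass is conserved (80.6 + 12.0 + 56.0 ≈ 148.6 g input) and how the reducing-agent fraction directly controls how much dry ice survives to the gas phase, which is the tuning knob the text describes.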
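As a rough order-of-magnitude check on the "one liter per gram" figure from the volume discussion above, this sketch (ours; the TNT product split used here, 2 C7H5N3O6 → 3 N2 + 5 H2O + 7 CO + 7 C, is one commonly quoted approximation, and the ideal gas law overestimates at real detonation pressures as the text notes):

    # Ideal-gas estimate of gas volume released per gram of explosive.
    MOLAR_VOLUME_STP = 22.4  # L/mol at 0 degC, 1 atm

    def gas_volume_per_gram(mol_weight, gas_moles_per_mol):
        """Liters of ideal gas per gram of explosive, at standard conditions."""
        return gas_moles_per_mol * MOLAR_VOLUME_STP / mol_weight

    # TNT (MW ~227.13): the approximate decomposition above gives
    # 1.5 N2 + 2.5 H2O + 3.5 CO = 7.5 mol of gas per mole of TNT.
    print(round(gas_volume_per_gram(227.13, 7.5), 2))  # ~0.74 L/g, order of 1 L/g

The result lands within a factor of two of the quoted one-liter-per-gram rule of thumb, which is as much agreement as an ideal-gas estimate can promise.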
Box office

In Hong Kong, the film grossed HK$3,307,536, which was huge business for the time, but less than Lee's previous 1972 films Fist of Fury and The Way of the Dragon. In North America, upon its limited release in August 1973 in four theaters in New York, the film entered the weekly box office charts at number 17, with a gross of $140,010 in 3 days. Upon its expansion the following week, it topped the charts for two weeks. Over the next four weeks, it remained in the top 10 while competing with other kung fu films, including Lady Kung Fu, The Shanghai Killers and Deadly China Doll, which held the top spot for one week each. In October, Enter the Dragon regained the top spot in its eighth week. It went on to gross from its initial North American release, making it the year's fourth highest-grossing film in the market. It was repeatedly re-released throughout the 1970s, with each re-release entering the top five in the box office charts. By 1982, the film had grossed a total of in the United States. In Europe, the film initially monopolized several London West End cinemas for five weeks before becoming a sellout success across Britain and the rest of Europe. In Spain, it was the seventh top-grossing film of 1973, selling 2,462,489 tickets. In France, it was one of the top five highest-grossing films of 1974 (above two other Lee films, Way of the Dragon at and Fist of Fury at ), with 4,444,582 ticket sales. In Germany, it was one of the top 10 highest-grossing films of 1974, with ticket sales. In Greece, the film earned in its first year of release. In Japan, it was the second highest-grossing film of 1974, with distribution rental earnings of . In South Korea, the film sold 229,681 tickets in the capital city of Seoul. In India, the movie was released in 1975 and opened to full houses; in one Bombay theater, New Excelsior, it had a packed 32-week run. The film was also a success in Iran, where one theater played it daily up until the 1979 Iranian Revolution. Against a tight budget of $850,000, the film grossed upon its initial 1973 worldwide release, making it one of the world's highest-grossing films of all time up until then. The film went on to have multiple re-releases around the world over the next several decades, significantly increasing its worldwide gross. The film went on to gross over by 1987, and more than by 1994. It was reportedly still among the all-time highest-grossing films in 1990. By 1998, it had grossed more than worldwide. By the 2010s, it had grossed an estimated worldwide total of (equivalent to approximately adjusted for inflation), having earned about times its original budget. The film's cost-to-profit ratio makes it one of the most commercially successful and profitable films of all time.

Critical reception

Upon release, the film initially received mixed reviews from several critics, including a favorable review from Variety magazine. The film eventually went on to be well-received by most critics, and it is widely regarded as one of the best films of 1973. Critics have referred to Enter the Dragon as "a low-rent James Bond thriller", a "remake of Dr. No" with elements of Fu Manchu. J.C. Maçek III of PopMatters wrote, "Of course the real showcase here is the obvious star here, Bruce Lee, whose performance as an actor and a fighter are the most enhanced by the perfect sound and video transfer.
While Kelly was a famous martial artist and a surprisingly good actor and Saxon was a famous actor and a surprisingly good martial artist, Lee proves to be a master of both fields." Many acclaimed newspapers and magazines reviewed the film. Variety described it as "rich in the atmosphere", the music score as "a strong asset" and the photography as "interesting". The New York Times gave the film a rave review: "The picture is expertly made and well-meshed; it moves like lightning and brims with color. It is also the most savagely murderous and numbing hand-hacker (not a gun in it) you will ever see anywhere." The film holds a 95% approval rating on the review aggregation website Rotten Tomatoes based on 55 reviews, with an average rating of 7.80/10. The site's critical consensus reads, "Badass to the max, Enter the Dragon is the ultimate kung-fu movie and fitting (if untimely) Bruce Lee swan song." On Metacritic it has a weighted average score of 83% based on reviews from 16 critics, indicating "universal acclaim". In 2004, the film was deemed "culturally significant" by the Library of Congress and selected for preservation in the National Film Registry. Enter the Dragon was selected as the best martial arts film of all time in a 2013 poll of The Guardian and The Observer critics. The film also ranks No. 474 on Empire magazine's 2008 list of The 500 Greatest Movies of All Time.

Home video

Enter the Dragon has remained one of the most popular martial arts films since its premiere and has been released numerous times worldwide on multiple home video formats. For almost three decades, many theatrical and home video versions were censored for violence, especially in the West. In the U.K. alone, at least four different versions have been released. Since 2001, the film has been released uncut in the U.K. and most other territories. Most DVDs and Blu-rays come with a wide range of extra features in the form of documentaries, interviews, etc. In 2013, a second, remastered HD transfer appeared on Blu-ray, billed as the "40th Anniversary Edition". In 2020, new 2K digital restorations of the theatrical cut and special edition were included as part of the Bruce Lee: His Greatest Hits box set by The Criterion Collection, which featured all of Lee's films, as well as Game of Death II.

Legacy

According to Scott Mendelson of Forbes, Enter the Dragon contains spy film elements similar to the James Bond franchise. Enter the Dragon was the most successful action-spy film not part of the James Bond franchise; it had an initial global box office comparable to the James Bond films of that era, and a lifetime gross surpassing every James Bond film up until GoldenEye (1995). Mendelson argues that, had Lee lived after Enter the Dragon was released, the film had the potential to launch an action-spy film franchise starring Lee that could have rivalled the success of the James Bond franchise. The film has been parodied and referenced in places such as the 1976 film The Pink Panther Strikes Again, the satirical publication The Onion, the Japanese game show Takeshi's Castle, the 1977 John Landis comedy anthology film Kentucky Fried Movie (in its lengthy "A Fistful of Yen" sequence, essentially a comedic, note-for-note remake of Dragon), and the film Balls of Fury. It was also parodied on television in That '70s Show during the episode "Jackie Moves On", with regular character Fez taking on the Bruce Lee role.
Several clips from the film are comically used during the theatre scene in The Last Dragon. Lee's martial arts films were broadly lampooned in the recurring Almost Live! sketch Mind Your Manners with Billy Quan. In August 2007, the now-defunct Warner Independent Pictures announced that television producer Kurt Sutter would remake the film as a noir-style thriller entitled Awaken the Dragon, with Korean singer-actor Rain starring. It was announced in September 2014 that Spike Lee would work on the remake. In March 2015, Brett Ratner revealed that he wanted to make the remake. In July 2018, David Leitch entered early talks to direct the remake.

Cultural impact

Enter the Dragon has been cited as one of the most influential action films of all time. Sascha Matuszak of Vice called it the most influential kung fu film and said it "is referenced in all manner of media, the plot line and characters continue to influence storytellers today, and the impact was particularly felt in the revolutionizing way the film portrayed African-Americans, Asians and traditional martial arts." Joel Stice of Uproxx called it "arguably the most influential Kung Fu movie of all time." Kuan-Hsing Chen and Beng Huat Chua cited its fight scenes as influential, as well as its "hybrid form and its mode of address", which pitches "an elemental story of good against evil in such a spectacle-saturated way". The film had an impact on mixed martial arts (MMA). In the opening fight sequence, where Lee fights Sammo Hung, Lee demonstrated elements of what would later become known as MMA. Both fighters wore what would later become common mixed martial arts clothing items, including kempo gloves and small shorts, and the fight ends with Lee utilizing an armbar (then used in judo and jiu jitsu) to submit Hung.
Home video Enter the Dragon has remained one of the most popular martial arts films since its premiere and has been released numerous times worldwide on multiple home video formats. For almost three decades, many theatrical and home video versions were censored for violence, especially in the West. In the U.K. alone, at least four different versions have been released. Since 2001, the film has been released uncut in the U.K. and most other territories. Most DVDs and Blu-rays come with a wide range of extra features in the form of documentaries, interviews, etc. In 2013, a second, remastered HD transfer appeared on Blu-ray, billed as the "40th Anniversary Edition". In 2020, new 2K digital restorations of the theatrical cut and special edition were included as part of the Bruce Lee: His Greatest Hits box set by The Criterion Collection, which featured all of Lee's films, as well as Game of Death II. Legacy According to Scott Mendelson of Forbes, Enter the Dragon contains spy film elements similar to the James Bond franchise. It was the most successful action-spy film not to be part of the James Bond franchise, with an initial global box office comparable to the James Bond films of that era and a lifetime gross surpassing every James Bond film up until GoldenEye (1995). Mendelson argues that, had Lee lived after Enter the Dragon was released, the film had the potential to launch an action-spy film franchise starring Lee that could have rivalled the success of the James Bond series. The film has been parodied and referenced in places such as the 1976 film The Pink Panther Strikes Again, the satirical publication The Onion, the Japanese game show Takeshi's Castle, the 1977 John Landis comedy anthology film Kentucky Fried Movie (in its lengthy "A Fistful of Yen" sequence, essentially a comedic, note-for-note remake of Dragon), and the film Balls of Fury. It was also parodied on television in That '70s Show during the episode "Jackie Moves On", with regular character Fez taking on the Bruce Lee role. Several clips from the film are comically used during the theatre scene in The Last Dragon. Lee's martial arts films were broadly lampooned in the recurring Almost Live! sketch Mind Your Manners with Billy Quan. In August 2007, the now-defunct Warner Independent Pictures announced that television producer Kurt Sutter would be remaking the film as a noir-style thriller entitled Awaken the Dragon, with Korean singer-actor Rain starring. It was announced in September 2014 that Spike Lee would work on the remake. In March 2015, Brett Ratner revealed that he wanted to make the remake. In July 2018, it was reported that David Leitch was in early talks to direct the remake. Cultural impact Enter the Dragon has been cited as one of the most influential action films of all time. Sascha Matuszak of Vice called it the most influential kung fu film and said it "is referenced in all manner of media, the plot line and characters continue to influence storytellers today, and the impact was particularly felt in the revolutionizing way the film portrayed African-Americans, Asians and traditional martial arts." Joel Stice of Uproxx called it "arguably the most influential Kung Fu movie of all time." Kuan-Hsing Chen and Beng Huat Chua cited its fight scenes as influential as well as its "hybrid form and its mode of address" which pitches "an elemental story of good against evil in such a spectacle-saturated way". The film had an impact on mixed martial arts (MMA). 
In the opening fight sequence, where Lee fights Sammo Hung, Lee demonstrated elements of what would later become known as MMA. Both fighters wore what would later become common mixed martial arts clothing items, including kempo gloves and small shorts, and the fight ends with Lee utilizing an armbar (then used in judo and jiu jitsu) to submit Hung. According to UFC Hall of Fame fighter Urijah Faber, "that was the moment" that MMA was born. The Dragon Ball manga and anime franchise, which debuted in 1984, was inspired by Enter the Dragon, of which Dragon Ball creator Akira Toriyama was a fan. The title Dragon Ball was also inspired by Enter the Dragon, and the piercing eyes of Goku's Super Saiyan transformation were based on Bruce Lee's paralysing glare. Enter the Dragon inspired early beat 'em up brawler games. It was cited by game designer Yoshihisa Kishimoto as a key inspiration behind Technōs Japan's brawler Nekketsu Kōha Kunio-kun (1986), released as Renegade in the West. Its spiritual successor Double Dragon (1987) also drew inspiration from Enter the Dragon, with the game's title being a homage to the film. Double Dragon also features two enemies named Roper and Williams, a reference to the characters Roper and Williams from Enter the Dragon. The sequel Double Dragon II: The Revenge (1988) includes opponents named Bolo and Oharra. Enter the Dragon also laid the foundation for fighting games, its tournament plot inspiring numerous titles in the genre. The Street Fighter video game franchise, which debuted in 1987, was inspired by Enter the Dragon, with the gameplay centered around an international fighting tournament and each character having a unique combination of ethnicity, nationality and fighting style. Street Fighter went on to set the template for all fighting games that followed. The little-known 1985 Nintendo arcade game Arm Wrestling contains voice leftovers from the film, as well as their original counterparts. The popular fighting game Mortal Kombat borrows multiple plot elements from Enter the Dragon, as does its movie adaptation. |
to chemical reactions where chemical bond energy is converted to thermal energy (heat). Two types of chemical reactions Exothermic and endothermic describe two types of chemical reactions or systems found in nature, as follows: Exothermic After an exothermic reaction, more energy has been released to the surroundings than was absorbed to initiate and maintain the reaction. An example would be the burning of a candle, wherein the sum of calories produced by combustion (found by measuring the radiant heating of the surroundings and the visible light produced, including the increase in temperature of the fuel (wax) itself, which oxygen converts to hot CO2 and water vapor) exceeds the number of calories absorbed initially in lighting the flame and in the flame maintaining itself (some energy is reabsorbed and used in melting, then vaporizing, the wax, etc., but is far outstripped by the energy released in converting the relatively weak double bond of oxygen to the stronger bonds in CO2 and H2O). Endothermic In an endothermic reaction or system, energy is taken from the surroundings in the course of the reaction, usually driven by a favorable entropy increase in the system. An example of an endothermic reaction is a first aid cold pack, in which the reaction of two chemicals, or dissolving of one in | term exothermic process (exo- : "outside") describes a process or reaction that releases energy from the system to its surroundings, usually in the form of heat, but also in the form of light (e.g. a spark, flame, or flash), electricity (e.g. a battery), or sound (e.g. the explosion heard when hydrogen is burned). Its etymology stems from the Greek prefix έξω (exō, which means "outwards") and the Greek word θερμικός (thermikόs, which means "thermal"). The term exothermic was first coined by Marcellin Berthelot. The opposite of an exothermic process is an endothermic process, one that absorbs energy, usually in the form of heat. The concept is frequently applied in the physical sciences to chemical reactions where chemical bond energy is converted to thermal energy (heat). Two types of chemical reactions Exothermic and endothermic describe two types of chemical reactions or systems found in nature, as follows: Exothermic After an exothermic reaction, more energy has been released to the surroundings than was absorbed to initiate and maintain the reaction. An example would be the burning of a candle, wherein the sum of calories produced by combustion (found by measuring the radiant heating of the surroundings and the visible light produced, including the increase in temperature of the fuel (wax) itself, which oxygen converts to hot CO2 and water vapor) exceeds the number of calories absorbed initially in lighting the flame and in the flame maintaining itself (some energy is reabsorbed and used in melting, then vaporizing, the wax, etc., but is far outstripped by the energy released in converting the relatively weak double bond of oxygen to the stronger bonds in CO2 and H2O). Endothermic In an endothermic reaction or system, energy is taken from the surroundings in the course of the reaction, usually driven by a favorable entropy increase in the system. An example of an endothermic reaction is a first aid cold pack, in which the reaction of two chemicals, or dissolving
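The sign convention above can be checked with a back-of-the-envelope bond-energy estimate. The following sketch (Python) uses rounded textbook average bond enthalpies in kJ/mol, which are assumptions rather than measured values for any specific reaction, to estimate the enthalpy change of hydrogen combustion, 2 H2 + O2 -> 2 H2O; a negative result marks the reaction as exothermic.

# Rough enthalpy estimate from average bond enthalpies (kJ/mol).
# The values are rounded textbook averages (an assumption); real
# reaction enthalpies are measured calorimetrically and differ somewhat.
BOND_ENTHALPY = {"H-H": 436, "O=O": 498, "O-H": 463}

def reaction_enthalpy(bonds_broken, bonds_formed):
    """dH is roughly (energy in to break bonds) - (energy out forming bonds)."""
    energy_in = sum(BOND_ENTHALPY[b] * n for b, n in bonds_broken.items())
    energy_out = sum(BOND_ENTHALPY[b] * n for b, n in bonds_formed.items())
    return energy_in - energy_out

# 2 H2 + O2 -> 2 H2O: break 2 H-H bonds and 1 O=O bond, form 4 O-H bonds.
dH = reaction_enthalpy({"H-H": 2, "O=O": 1}, {"O-H": 4})
print(dH, "kJ per 2 mol H2:", "exothermic" if dH < 0 else "endothermic")
# Prints -482 kJ: the strong O-H bonds formed outweigh the bonds broken,
# mirroring the candle example, where the relatively weak O=O double bond
# is traded for stronger bonds in CO2 and H2O.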
live with him, causing quite a scandal within Madras’ colonial society. Elihu Yale and Hieronima de Paiva had a son, who died in South Africa. Accusations of corruption and removal As president of Fort St. George, Yale purchased territory for private purposes with East India Company funds, including a fort at Devanampattinam (now Cuddalore). Yale imposed high taxes for the maintenance of the colonial garrison and town, resulting in an unpopular regime and several revolts by Indians, brutally quelled by garrison soldiers. Yale was also notorious for arresting and trying Indians on his own private authority, including the hanging of a stable boy who had absconded with a Company horse. Charges of corruption were brought against Elihu Yale in the last years of his presidency. He was eventually removed in 1692 and replaced with Nathaniel Higginson as the President of Madras. Return to Britain Yale returned to Britain in 1699. He spent the rest of his life at Plas Grono, a mansion in Wales bought by his father, or at his house in London, spending liberally the considerable wealth he had accumulated. Marriage In 1680, Elihu Yale married Catherine Hynmers, widow of Joseph Hynmers, who had been second-in-command of Fort St. George, India, as deputy governor for the East India Company. The wedding took place at St. Mary's Church, at Fort St. George, where Yale was a vestryman and treasurer. The marriage was the first registered at the church. They had four children together. David Yale (died 1687) died young. Katherine Yale (died 1715) married the politician Dudley North (born 1684) of Glemham Hall, son of Sir Dudley North of Camden Place, and Anne Cann, daughter of Sir Robert Cann, 1st Baronet of Compton Greenfield, Gloucestershire. He was a cousin of Francis North, 1st Earl of Guilford of Wroxton Abbey and a grandson of Anne Montagu of Boughton House, member of the House of Montagu. Their daughter Anne North would marry Nicholas Herbert, member of the Herbert family, son of the 8th Earl of Pembroke, Thomas Herbert of Wilton House, and his first wife, Margaret Sawyer of Highclere Castle, while one of their sons, William Dudley North, would marry Lady Barbara Herbert, daughter of Thomas and his second wife, Barbara Herbert, Countess of Pembroke. Anne Yale (died 1734) married Lord James Cavendish, MP for Derby, of Staveley Hall, member of the Cavendish family, son of William Cavendish, 1st Duke of Devonshire of Chatsworth House and Lady Mary Butler, member of the Butler dynasty and daughter of James Butler, 1st Duke of Ormonde of Kilkenny Castle. He was also a grandson of Countess Elizabeth Cecil of Hatfield House and a nephew of John Cecil, 5th Earl of Exeter of Burghley House. Ursula Yale (died 1721) died childless at Latimer House, which Elihu Yale rented from his son-in-law Lord James Cavendish, husband of Anne Yale; she is buried in the small church on the estate, St Mary Magdalene. Death Yale died on 8 July 1721 in London and was buried in the churchyard of St Giles’ Church, the parish church of Wrexham, Wales. His tomb is inscribed with these lines: In Boston, Massachusetts, a tablet to Yale was erected in 1927 at Scollay Square, near the site of Yale's birth. Yale president Arthur Twining Hadley penned the inscription, which reads: "On Pemberton Hill, 255 Feet North of This Spot, Was Born on April Fifth 1649 Elihu Yale, Governor of Madras, Whose Permanent Memorial in His Native Land is the College That Bears His Name." 
Yale University In 1718, Cotton Mather contacted Yale and asked for his help. Mather represented a small institution of learning that had been founded in 1701 in Old Saybrook, Connecticut, as the Collegiate School of Connecticut, which needed money for a new building. Yale sent Mather 417 books, a portrait of King George I, and nine bales of goods. These last were sold by the school for £800 sterling. In gratitude, officials named the new building Yale; eventually the entire institution became Yale College. Yale was also a vestryman and treasurer of St. Mary's Church at Fort St. George. On 6 October 1968, the 250th anniversary of the naming of Yale College for Elihu Yale, the classmates of Chester Bowles, then the American ambassador to India and a graduate of Yale (1924), donated money for lasting improvements to the church and erected a plaque to commemorate the occasion. In 1970 a portrait of him, Elihu Yale seated at table with the Second Duke of Devonshire and Lord James Cavendish, was donated to the Yale Center for British Art from Chatsworth House. On 5 April 1999, Yale University recognized the 350th anniversary of Yale's birth. An article that year in American Heritage magazine rated Elihu Yale the "most overrated philanthropist" in American history, | resulting in an unpopular regime and several revolts by Indians, brutally quelled by garrison soldiers. Yale was also notorious for arresting and trying Indians on his own private authority, including the hanging of a stable boy who had absconded with a Company horse. Charges of corruption were brought against Elihu Yale in the last years of his presidency. He was eventually removed in 1692 and replaced with Nathaniel Higginson as the President of Madras. Return to Britain Yale returned to Britain in 1699. He spent the rest of his life at Plas Grono, a mansion in Wales bought by his father, or at his house in London, spending liberally the considerable wealth he had accumulated. Marriage In 1680, Elihu Yale married Catherine Hynmers, widow of Joseph Hynmers, who had been second-in-command of Fort St. George, India, as deputy governor for the East India Company. The wedding took place at St. Mary's Church, at Fort St. George, where Yale was a vestryman and treasurer. The marriage was the first registered at the church. They had four children together. David Yale (died 1687) died young. Katherine Yale (died 1715) married the politician Dudley North (born 1684) of Glemham Hall, son of Sir Dudley North of Camden Place, and Anne Cann, daughter of Sir Robert Cann, 1st Baronet of Compton Greenfield, Gloucestershire. He was a cousin of Francis North, 1st Earl of Guilford of Wroxton Abbey and a grandson of Anne Montagu of Boughton House, member of the House of Montagu. Their daughter Anne North would marry Nicholas Herbert, member of the Herbert family, son of the 8th Earl of Pembroke, Thomas Herbert of Wilton House, and his first wife, Margaret Sawyer of Highclere Castle, while one of their sons, William Dudley North, would marry Lady Barbara Herbert, daughter of Thomas and his second wife, Barbara Herbert, Countess of Pembroke. Anne Yale (died 1734) married Lord James Cavendish, MP for Derby, of Staveley Hall, member of the Cavendish family, son of William Cavendish, 1st Duke of Devonshire of Chatsworth House and Lady Mary Butler, member of the Butler dynasty and daughter of James Butler, 1st Duke of Ormonde of Kilkenny Castle. 
He was also a grandson of Countess Elizabeth Cecil of Hatfield House and a nephew of John Cecil, 5th Earl of Exeter of Burghley House. Ursula Yale (died 1721) died childless at Latimer House, which Elihu Yale rented from his son-in-law Lord James Cavendish, husband of Anne Yale; she is buried in the small church on the estate, St Mary Magdalene. Death Yale died on 8 July 1721 in London and was buried in the churchyard of St Giles’ Church, the parish church of Wrexham, Wales. His tomb is inscribed with these lines: In Boston, Massachusetts, a tablet to Yale was erected in 1927 at Scollay Square, near the site of Yale's birth. Yale president Arthur Twining Hadley penned the inscription, which reads: "On Pemberton Hill, 255 Feet North of This Spot, Was Born on April Fifth 1649 Elihu Yale, Governor of Madras, Whose Permanent Memorial in His Native Land is the College That Bears His Name." Yale University In 1718, Cotton Mather contacted Yale and asked for his help. Mather represented a small institution of learning that had been founded in 1701 in Old Saybrook, Connecticut, as the Collegiate School of Connecticut, which needed money for a new building. Yale sent Mather 417 books, a portrait of King George I, and nine bales of goods. These last were sold by the school for £800 sterling. In gratitude, officials named the new building Yale; eventually the entire institution became Yale College. Yale was also a vestryman and treasurer of St. Mary's Church at Fort St. George. On 6 October 1968, the 250th anniversary of the naming of Yale College for Elihu Yale, the classmates of Chester Bowles, then the American ambassador to India and a graduate of Yale (1924), donated money for lasting improvements to the church and erected a plaque to commemorate the occasion. In 1970 a portrait of him, Elihu Yale seated at table with the Second Duke of Devonshire and Lord James Cavendish, was donated to the Yale Center for British Art from Chatsworth House. On 5 April 1999, Yale University recognized the 350th anniversary of Yale's birth. An article that year in American Heritage magazine rated Elihu Yale the "most overrated philanthropist" in American history, arguing that the college that became Yale University was successful largely because of the generosity of a man named Jeremiah Dummer, but that the trustees of the school did not want it known by the name "Dummer College". In her article for The Atlantic about Skull and Bones, a secret society at Yale University, Alexandra Robbins alleges that Yale's headstone was stolen years ago from its proper setting in Wrexham. She further alleges that the tombstone is now displayed in a glass case in a room with purple walls. Slave Trade One of Elihu Yale's responsibilities as president of Fort St. George was overseeing its slave trade, though he himself was never a slave trader, never owned slaves, opposed the slave trade, and imposed several restrictions on it during his tenure. Critics nonetheless argue that he benefited from a trade that operated under his authority as president, even though he owned none of the people traded and did not profit from their sales. Cultural references Elihu later became the name of a "senior society" founded in 1903 at Yale. Tom Wolfe, who earned a Ph.D. in American 
printing telegraph (Patent no. 103,898, "Système de télégraphie rapide"), in which the signals were translated automatically into typographic characters. Baudot's hardware had three main parts: the keyboard, the distributor, and a paper tape. Each operator (there were as many as four) was allocated a single sector. The keyboard had just five piano-type keys, operated with two fingers of the left hand and three fingers of the right hand. The five-unit code was designed to be easy to remember. Once the keys had been pressed, they were locked down until the contacts again passed over the sector connected to that particular keyboard, when the keyboard was unlocked, ready for the next character to be entered, with an audible click (known as the "cadence signal") to warn the operator. Operators had to maintain a steady rhythm, and the usual speed of operation was 30 words per minute. The receiver was also connected to the distributor. The signals from the telegraph line were temporarily stored on a set of five electromagnets, before being decoded to print the corresponding character on paper tape. Accurate operation of this system depended on the distributor at the transmitting end keeping in synchronization with the one at the receiving end and on operators only sending characters when the contacts passed over their allocated sector. This could be achieved at a speed of 30 wpm by strictly observing the "cadence", or rhythm, of the system when the distributor gave the operator the use of the line. First use The Baudot system was accepted by the French Telegraph Administration in 1875, with the first online tests of his system occurring between Paris and Bordeaux on 12 November 1877. At the end of 1877, the Paris-Rome line, which was about , began operating a duplex Baudot. The Baudot apparatus was shown at the Paris Exposition Universelle (1878) and won him the Exposition's gold medal, as well as bringing his system to worldwide notice. Later career After the first success of his system, Baudot was promoted to Controller in 1880, and was named Inspector-Engineer in 1882. In July 1887 he conducted successful tests on the Atlantic telegraph cable between Weston-super-Mare and Waterville, Nova Scotia operated by the Commercial Company, with a double Baudot installed in duplex, the Baudot transmitters and receivers substituted for the recorder. On 8 August 1890 he established communications between Paris, Vannes, and Lorient over a single wire. On 3 January 1894 he installed a triplex apparatus on the telegraph between Paris and Bordeaux that had previously been operating with some difficulty on the Hughes telegraph system. On 27 April 1894 he established communications between the Paris stock exchange and the Milan stock exchange, again over a single wire, using his new invention, the retransmitter. In 1897 the Baudot system was improved by switching to punched tape, which was prepared offline like the Morse tape used with the Wheatstone and Creed systems. A tape reader, controlled by the Baudot distributor, then replaced the manual keyboard. The tape had five rows of holes for the code, with a sixth row of smaller holes for transporting the tape through the reader mechanism. Baudot's code was later standardised as International Telegraph Alphabet Number One. Baudot received little help from the French Telegraph Administration for his system, and often had to fund his own research, even having to sell the gold medal awarded by the 1878 Exposition Universelle in 1880. 
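The five-unit principle described above is easy to demonstrate. The sketch below (Python) packs each character of a 32-symbol alphabet into a five-bit pattern and back again; the alphabet ordering is an arbitrary illustration, not Baudot's historical code table (the code later standardised as ITA1), but it shows why five units suffice for a full letter set plus control codes.

# Five units give 2**5 = 32 combinations - enough for 26 letters plus
# a handful of control codes such as letter/figure shift.
# NOTE: this ordering is illustrative; it is NOT the historical Baudot/ITA1 table.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ -.,/"  # 31 symbols; code 00000 reserved

def encode(text):
    """Return one five-bit group per character, as sent sector by sector."""
    return ["{:05b}".format(ALPHABET.index(c) + 1) for c in text.upper()]

def decode(groups):
    """Reassemble characters from five-bit groups, much as the receiver's
    five electromagnets store a character before printing it to tape."""
    return "".join(ALPHABET[int(g, 2) - 1] for g in groups)

units = encode("BAUDOT")
print(units)          # ['00010', '00001', '10101', '00100', '01111', '10100']
print(decode(units))  # BAUDOT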
The Baudot telegraph system was employed progressively in France, and then was adopted in other countries, Italy being the first to introduce it, in its inland service, in 1887. The Netherlands followed in 1895, Switzerland in 1896, and Austria and Brazil in 1897. The British Post Office adopted it for a simplex circuit between London and Paris during 1897, then used it for more general purposes from 1898. In 1900 it was adopted by Germany, by Russia in 1904, the British West Indies in 1905, Spain in 1906, Belgium in 1909, Argentina in 1912, and Romania in 1913. Final years Baudot married Marie Josephine Adelaide Langrognet on 15 January 1890. She died only three months later, on 9 April 1890. Soon after starting work with the telegraph service, Baudot began to suffer physical discomfort and | his system occurring between Paris and Bordeaux on 12 November 1877. At the end of 1877, the Paris-Rome line, which was about , began operating a duplex Baudot. The Baudot apparatus was shown at the Paris Exposition Universelle (1878) and won him the Exposition's gold medal, as well as bringing his system to worldwide notice. Later career After the first success of his system, Baudot was promoted to Controller in 1880, and was named Inspector-Engineer in 1882. In July 1887 he conducted successful tests on the Atlantic telegraph cable between Weston-super-Mare and Waterville, Nova Scotia operated by the Commercial Company, with a double Baudot installed in duplex, the Baudot transmitters and receivers substituted for the recorder. On 8 August 1890 he established communications between Paris, Vannes, and Lorient over a single wire. On 3 January 1894 he installed a triplex apparatus on the telegraph between Paris and Bordeaux that had previously been operating with some difficulty on the Hughes telegraph system. On 27 April 1894 he established communications between the Paris stock exchange and the Milan stock exchange, again over a single wire, using his new invention, the retransmitter. In 1897 the Baudot system was improved by switching to punched tape, which was prepared offline like the Morse tape used with the Wheatstone and Creed systems. A tape reader, controlled by the Baudot distributor, then replaced the manual keyboard. The tape had five rows of holes for the code, with a sixth row of smaller holes for transporting the tape through the reader mechanism. Baudot's code was later standardised as International Telegraph Alphabet Number One. Baudot received little help from the French Telegraph Administration for his system, and often had to fund his own research, even having to sell the gold medal awarded by the 1878 Exposition Universelle in 1880. The Baudot telegraph system was employed progressively in France, and then was adopted in other countries, Italy being the first to introduce it, in its inland service, in 1887. The Netherlands followed in 1895, Switzerland in 1896, and Austria and Brazil in 1897. The British Post Office adopted it for a simplex circuit between London and Paris during 1897, then used it for more general purposes from 1898. In 1900 it was adopted by Germany, by Russia in 1904, the British West Indies in 1905, Spain in 1906, Belgium in 1909, Argentina in 1912, and Romania in 1913. Final years Baudot married Marie Josephine Adelaide Langrognet on 15 January 1890. She died only three months later, on 9 April 1890. 
Soon after starting work with the telegraph service, Baudot began to suffer physical discomfort and was frequently absent from work for this reason, for as long as a month on one occasion. His condition affected him for the rest of his life, until he died on 28 March 1903, at Sceaux, Hauts-de-Seine, near Paris, at the age of 57. Mimault patent suit In 1874, French telegraph operator Louis Victor Mimault patented a telegraph system using five separate lines for transmission. After his patent was rejected by the Telegraph Administration, Mimault modified his device to incorporate features from the Meyer telegraph and obtained a new patent, which was also rejected. In the meantime, Baudot had patented his prototype telegraph a few weeks earlier. Mimault claimed priority of invention over Baudot and brought a patent suit against him in 1877. The Tribunal Civil de la Seine, which reviewed testimony from three experts unconnected with the Telegraph Administration, found in favor of Mimault, according him priority of invention of the Baudot code and ruling that Baudot's patents were simply improvements of Mimault's. Neither inventor was satisfied with this judgment, which was eventually rescinded, with Mimault being ordered to pay all legal costs. Mimault was unnerved by the decision, and after an incident where he shot at and wounded two students of the École Polytechnique (charges for which were dropped), he demanded a special act to prolong the duration of his patents, 100,000 francs, and election to the Légion d'honneur. A commission directed by Jules Raynaud (head of telegraph research) rejected his demands. Upon hearing the decision, Mimault shot and killed 
and monetary support for non-working citizens. Components of individual economic security In the United States, children's economic security is indicated by the income level and employment security of their families or organizations. Economic security of people over 50 years old is based on Social Security benefits, pensions and savings, earnings and employment, and health insurance coverage. Arizona In 1972, the state legislature of Arizona formed a Department of Economic Security with a mission to promote "the safety, well-being, and self sufficiency of children, adults, and families". This department combines state government activities previously managed by the Employment Security Commission, the State Department of Public Welfare, the Division of Vocational Rehabilitation, the State Office of Economic Opportunity, the Apprenticeship Council, and the State Office of Manpower Planning. The State Department of Mental Retardation (renamed the Division of Developmental Disabilities, House Bill 2213) joined the Department in 1974. The purpose in creating the Department was to integrate direct services to people in such a way as to reduce duplication of administrative effort, services, and expenditures. Family Connections became a part of the Department in January 2007. Minnesota The Minnesota Department of Economic Security was formed | Department of Employment and Economic Development. National economic security In the context of domestic politics and international relations, national economic security is the ability of a country to follow its choice of policies to develop the national economy in the manner desired. Historically, the conquest of nations has made conquerors rich through plunder, access to new resources and enlarged trade through control of the economies of conquered nations. Today's complex system of international trade is characterized by multi-national agreements and mutual inter-dependence. Availability of natural resources and capacity for production and distribution are essential under this system, leading many experts to consider economic security to be as important a part of national security as military policy. Economic security has been proposed as a key determinant of international relations, particularly in the geopolitics of petroleum in American foreign policy after September 11, 2001. In Canada, threats to the country's overall economic security are considered economic espionage, which is "illegal, clandestine or coercive activity by a foreign government in order to gain unauthorized access to economic intelligence, such as proprietary information or technology, for economic advantage." Other It is widely believed that there is a tradeoff between economic security and 
convolutional code is punctured to achieve the desired code rate. In GPRS Coding Scheme CS-4, no convolutional coding is applied. In EGPRS/EDGE, the modulation and coding schemes MCS-1 to MCS-9 take the place of the coding schemes of GPRS, and additionally specify which modulation scheme is used, GMSK or 8PSK. MCS-1 through MCS-4 use GMSK and have performance similar (but not equal) to GPRS, while MCS-5 through MCS-9 use 8PSK. In all EGPRS modulation and coding schemes, a convolutional code of rate 1/3 is used, and puncturing is used to achieve the desired code rate. In contrast to GPRS, the Radio Link Control (RLC) and Media Access Control (MAC) headers and the payload data are coded separately in EGPRS. The headers are coded more robustly than the data. Evolved EDGE Evolved EDGE, also called EDGE Evolution, is a bolt-on extension to the GSM mobile telephony standard, which improves on EDGE in a number of ways. Latencies are reduced by lowering the Transmission Time Interval by half (from 20 ms to 10 ms). Bit rates are increased to up to 1 Mbit/s peak bandwidth, and latencies reduced down to 80 ms, using dual carriers, a higher symbol rate and higher-order modulation (32QAM and 16QAM instead of 8PSK), and turbo codes to improve error correction. This results in real-world downlink speeds of up to 600 kbit/s. Furthermore, signal quality is improved by using dual antennas, improving average bit rates and spectrum efficiency. The main motivation for increasing the existing EDGE throughput is that many operators would like to upgrade their existing infrastructure rather than invest in new network infrastructure. Mobile operators have invested billions in GSM networks, many of which are already capable of supporting EDGE data speeds up to 236.8 kbit/s. With a software upgrade and a new device compliant with Evolved EDGE (like an Evolved EDGE smartphone) for the user, these data rates can be boosted to speeds approaching 1 Mbit/s (i.e. 98.6 kbit/s per timeslot for 32QAM). Many service providers may not invest in a completely new technology like 3G networks. Considerable research and development on this technology took place throughout the world. A successful trial by Nokia Siemens and "one of China's leading operators" was carried out in a live environment. With the introduction of more advanced wireless technologies like UMTS and LTE, which also provide a network coverage layer on low frequencies, and the upcoming phase-out and shutdown of 2G mobile networks, it is very unlikely that Evolved EDGE will ever see any deployment on live networks. As of 2016, no commercial networks support the Evolved EDGE standard (3GPP Rel-7). Technology Reduced Latency With Evolved EDGE come three major features designed to reduce latency over the air interface. In EDGE, a single RLC data block (ranging from 23 to 148 bytes of data) is transmitted over four frames, using a single time slot. On average, this requires 20 ms for one-way transmission. Under the Reduced Transmission Time Interval (RTTI) scheme, one data block is transmitted over two frames in two timeslots, reducing the latency of the air interface to 10 ms. In addition, Reduced Latency also implies support of Piggy-backed ACK/NACK (PAN), in which a bitmap of blocks not received is included in normal data blocks. Using the PAN field, the receiver may report missing data blocks immediately, rather than waiting to send a dedicated PAN message. A final enhancement is RLC non-persistent mode. 
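The puncturing step mentioned above is mechanically simple: the rate-1/3 mother code emits three coded bits per information bit, and a fixed pattern selects which of them are actually transmitted. The sketch below (Python) shows the principle; the puncturing pattern is an illustrative example, not one of the patterns actually specified for the EGPRS coding schemes in the 3GPP standard.

# Puncturing raises the code rate by dropping coded bits according to a
# fixed, repeating pattern known to both transmitter and receiver.
# NOTE: this pattern is illustrative, not a 3GPP-specified EGPRS pattern.
PATTERN = [1, 1, 0]  # 1 = transmit the bit, 0 = puncture (drop) it

def puncture(coded_bits, pattern=PATTERN):
    """Keep only the coded bits flagged with 1 in the repeating pattern."""
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

k = 3                                  # information bits fed to the encoder
mother = [0, 1, 1, 1, 0, 0, 1, 0, 1]   # rate-1/3 output: 3 coded bits per info bit
sent = puncture(mother)
print(f"kept {len(sent)} of {len(mother)} coded bits -> rate {k}/{len(sent)}")
# kept 6 of 9 coded bits -> rate 3/6, i.e. the punctured code is rate 1/2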
With EDGE, the RLC interface could operate in either acknowledged or unacknowledged mode. In unacknowledged mode, there is no retransmission of missing data blocks, so a single corrupt block would cause an entire upper-layer IP packet to be lost. With non-persistent mode, an RLC data block may be retransmitted if it is less than a certain age. Once this time expires, it is considered lost, and subsequent data blocks may then be forwarded to upper layers. Downlink dual carrier With downlink dual carrier, the handheld is able to receive on two different frequency channels at the same time, doubling the downlink throughput. In addition, if a second receiver is present, then the handheld is able to receive on an additional timeslot in single-carrier | as well as EGPRS/EDGE consists of two steps: first, a cyclic code is used to add parity bits, which are also referred to as the Block Check Sequence, followed by coding with a possibly punctured convolutional code. In GPRS, the Coding Schemes CS-1 to CS-4 specify the number of parity bits generated by the cyclic code and the puncturing rate of the convolutional code. In GPRS Coding Schemes CS-1 through CS-3, the convolutional code is of rate 1/2, i.e. each input bit is converted into two coded bits. In Coding Schemes CS-2 and CS-3, the output of the convolutional code is punctured to achieve the desired code rate. In GPRS Coding Scheme CS-4, no convolutional coding is applied. In EGPRS/EDGE, the modulation and coding schemes MCS-1 to MCS-9 take the place of the coding schemes of GPRS, and additionally specify which modulation scheme is used, GMSK or 8PSK. MCS-1 through MCS-4 use GMSK and have performance similar (but not equal) to GPRS, while MCS-5 through MCS-9 use 8PSK. In all EGPRS modulation and coding schemes, a convolutional code of rate 1/3 is used, and puncturing is used to achieve the desired code rate. In contrast to GPRS, the Radio Link Control (RLC) and Media Access Control (MAC) headers and the payload data are coded separately in EGPRS. The headers are coded more robustly than the data. Evolved EDGE Evolved EDGE, also called EDGE Evolution, is a bolt-on extension to the GSM mobile telephony standard, which improves on EDGE in a number of ways. Latencies are reduced by lowering the Transmission Time Interval by half (from 20 ms to 10 ms). Bit rates are increased to up to 1 Mbit/s peak bandwidth, and latencies reduced down to 80 ms, using dual carriers, a higher symbol rate and higher-order modulation (32QAM and 16QAM instead of 8PSK), and turbo codes to improve error correction. This results in real-world downlink speeds of up to 600 kbit/s. Furthermore, signal quality is improved by using dual antennas, improving average bit rates and spectrum efficiency. The main motivation for increasing the existing EDGE throughput is that many operators would like to upgrade their existing infrastructure rather than invest in new network infrastructure. Mobile operators have invested billions in GSM networks, many of which are already capable of supporting EDGE data speeds up to 236.8 kbit/s. With a software upgrade and a new device compliant with Evolved EDGE (like an Evolved EDGE smartphone) for the user, these data rates can be boosted to speeds approaching 1 Mbit/s (i.e. 98.6 kbit/s per timeslot for 32QAM). Many service providers may not invest in a completely new technology like 3G networks. Considerable research and development on this technology took place throughout the world. 
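The latency figures quoted above follow directly from the GSM TDMA frame length of roughly 4.615 ms, a standard value; the rounding to 20 ms and 10 ms is the text's. A minimal check in Python:

# One RLC data block in EDGE spans 4 TDMA frames on one timeslot (basic TTI);
# Evolved EDGE's Reduced TTI sends it over 2 frames using two timeslots.
TDMA_FRAME_MS = 4.615  # standard GSM TDMA frame duration

btti = 4 * TDMA_FRAME_MS  # ~18.5 ms, quoted above as "on average 20 ms"
rtti = 2 * TDMA_FRAME_MS  # ~9.2 ms, quoted above as 10 ms

print(f"basic TTI ~ {btti:.1f} ms, reduced TTI ~ {rtti:.1f} ms")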
A successful trial by Nokia Siemens and "one of China's leading operators" was carried out in a live environment. With the introduction of more advanced wireless technologies like UMTS and LTE, which also provide a network coverage layer on low frequencies, and the upcoming phase-out and shutdown of 2G mobile networks, it is very unlikely that Evolved EDGE will ever see any deployment on live networks. As of 2016, no commercial networks support the Evolved EDGE standard (3GPP Rel-7). Technology Reduced Latency With Evolved EDGE come three major 
đ) began to emerge in the early 8th century, with ð becoming strongly preferred by the 780s. Another source indicates that the letter is "derived from Irish writing". Under King Ælfred the Great, þ grew greatly in popularity and started to overtake ð. Þ completely overtook ð by Middle English, and þ died out by Early Modern English, mostly due to the rise of the printing press, and was replaced by the digraph th. Lower case version The lowercase (minuscule) version has retained the curved shape of a medieval scribe's d, which d itself in general has not. Icelandic In Icelandic, ð, called "eð", represents a voiced dental fricative , which is the same as the th in English that, but it never appears as the first letter of a word. At the end of words, as well as within words when it is followed by a voiceless consonant, ð is devoiced to . The ð in the name of the letter is devoiced | completely overtook ð by Middle English, and þ died out by Early Modern English, mostly due to the rise of the printing press, and was replaced by the digraph th. Lower case version The lowercase (minuscule) version has retained the curved shape of a medieval scribe's d, which d itself in general has not. Icelandic In Icelandic, ð, called "eð", represents a voiced dental fricative , which is the same as the th in English that, but it never appears as the first letter of a word. At the end of words, as well as within words when it is followed by a voiceless consonant, ð is devoiced to . The ð in the name of the letter is devoiced in the nominative and accusative cases . In the Icelandic alphabet, ð follows d. Faroese In Faroese, ð is not assigned to any particular phoneme and appears mostly for etymological reasons, but it indicates most glides. When ð appears before r, it is in 
Nord department References INSEE commune file More information about Eth Eth ( French | called Ethois (feminine plural Ethoises). Heraldry See also Communes of the Nord department References INSEE commune file More |
impact on the environment and society of each riparian country. The dams constructed as part of GAP – in both the Euphrates and the Tigris basins – have affected 382 villages, and almost 200,000 people have been resettled elsewhere. The largest number of people was displaced by the building of the Atatürk Dam, which alone affected 55,300 people. A survey among those who were displaced showed that the majority were unhappy with their new situation and that the compensation they had received was considered insufficient. The flooding of Lake Assad led to the forced displacement of c. 4,000 families, who were resettled in other parts of northern Syria as part of a now-abandoned plan to create an "Arab belt" along the borders with Turkey and Iraq. Apart from the changes in the discharge regime of the river, the numerous dams and irrigation projects have also had other effects on the environment. The creation of reservoirs with large surfaces in countries with high average temperatures has led to increased evaporation, thereby reducing the total amount of water that is available for human use. Annual evaporation from reservoirs has been estimated at in Turkey, in Syria and in Iraq. Water quality in the Iraqi Euphrates is low because irrigation water tapped in Turkey and Syria flows back into the river, together with dissolved fertilizer chemicals used on the fields. The salinity of Euphrates water in Iraq has increased as a result of upstream dam construction, making it less suitable as drinking water. The many dams and irrigation schemes, and the associated large-scale water abstraction, have also had a detrimental effect on the already ecologically fragile Mesopotamian Marshes and on freshwater fish habitats in Iraq. The inundation of large parts of the Euphrates valley, especially in Turkey and Syria, has led to the flooding of many archaeological sites and other places of cultural significance. Although concerted efforts have been made to record or save as much of the endangered cultural heritage as possible, many sites are probably lost forever. The combined GAP projects on the Turkish Euphrates have led to major international efforts to document the archaeological and cultural heritage of the endangered parts of the valley. The flooding of Zeugma, with its unique Roman mosaics, by the reservoir of the Birecik Dam in particular generated much controversy in both the Turkish and international press. The construction of the Tabqa Dam in Syria led to a large international campaign coordinated by UNESCO to document the heritage that would disappear under the waters of Lake Assad. Archaeologists from numerous countries excavated sites ranging in date from the Natufian to the Abbasid period, and two minarets were dismantled and rebuilt outside the flood zone. Important sites that have been flooded or affected by the rising waters of Lake Assad include Mureybet, Emar and Abu Hureyra. A similar international effort was made when the Tishrin Dam was constructed, which led, among other sites, to the flooding of the important Pre-Pottery Neolithic B site of Jerf el-Ahmar. An archaeological survey and rescue excavations were also carried out in the area flooded by Lake Qadisiya in Iraq. Parts of the flooded area have recently become accessible again due to the drying up of the lake, resulting not only in new possibilities for archaeologists to do more research, but also in opportunities for looting, which has been rampant elsewhere in Iraq in the wake of the 2003 invasion. 
History Palaeolithic to Chalcolithic periods The early occupation of the Euphrates basin was limited to its upper reaches; that is, the area that is popularly known as the Fertile Crescent. Acheulean stone artifacts have been found in the Sajur basin and in the El Kowm oasis in the central Syrian steppe; the latter together with remains of Homo erectus that were dated to 450,000 years ago. In the Taurus Mountains and the upper part of the Syrian Euphrates valley, early permanent villages such as Abu Hureyra (at first occupied by hunter-gatherers, but later by some of the earliest farmers), Jerf el-Ahmar, Mureybet and Nevalı Çori became established from the eleventh millennium BCE onward. In the absence of irrigation, these early farming communities were limited to areas where rainfed agriculture was possible, that is, the upper parts of the Syrian Euphrates as well as Turkey. Late Neolithic villages, characterized by the introduction of pottery in the early 7th millennium BCE, are known throughout this area. Occupation of lower Mesopotamia started in the 6th millennium and is generally associated with the introduction of irrigation, as rainfall in this area is insufficient for dry agriculture. Evidence for irrigation has been found at several sites dating to this period, including Tell es-Sawwan. During the 5th millennium BCE, or late Ubaid period, northeastern Syria was dotted by small villages, although some of them grew to a size of over . In Iraq, sites like Eridu and Ur were already occupied during the Ubaid period. Clay boat models found at Tell Mashnaqa along the Khabur indicate that riverine transport was already practiced during this period. The Uruk period, roughly coinciding with the 4th millennium BCE, saw the emergence of truly urban settlements across Mesopotamia. Cities like Tell Brak and Uruk grew to over in size and displayed monumental architecture. The spread of southern Mesopotamian pottery, architecture and sealings far into Turkey and Iran has generally been interpreted as the material reflection of a widespread trade system aimed at providing the Mesopotamian cities with raw materials. Habuba Kabira on the Syrian Euphrates is a prominent example of a settlement that is interpreted as an Uruk colony. Ancient history During the Jemdet Nasr (3600–3100 BCE) and Early Dynastic periods (3100–2350 BCE), southern Mesopotamia experienced a growth in the number and size of settlements, suggesting strong population growth. These settlements, including Sumero-Akkadian sites like Sippar, Uruk, Adab and Kish, were organized in competing city-states. Many of these cities were located along canals of the Euphrates and the Tigris that have since dried up, but that can still be identified from remote sensing imagery. A similar development took place in Upper Mesopotamia, Subartu and Assyria, although only from the mid-3rd millennium and on a smaller scale than in Lower Mesopotamia. Sites like Ebla, Mari and Tell Leilan grew to prominence for the first time during this period. Large parts of the Euphrates basin were for the first time united under a single ruler during the Akkadian Empire (2335–2154 BCE) and Ur III empires, which controlled – either directly or indirectly through vassals – large parts of modern-day Iraq and northeastern Syria. 
Following their collapse, the Old Assyrian Empire (1975–1750 BCE) and Mari asserted their power over northeast Syria and northern Mesopotamia, while southern Mesopotamia was controlled by city-states like Isin, Kish and Larsa before their territories were absorbed by the newly emerged state of Babylonia under Hammurabi in the early to mid-18th century BCE. In the second half of the 2nd millennium BCE, the Euphrates basin was divided between Kassite Babylon in the south and Mitanni, Assyria and the Hittite Empire in the north, with the Middle Assyrian Empire (1365–1020 BCE) eventually eclipsing the Hittites, Mitanni and Kassite Babylonians. Following the end of the Middle Assyrian Empire in the late 11th century BCE, struggles broke out between Babylonia and Assyria over the control of the Iraqi Euphrates basin. The Neo-Assyrian Empire (935–605 BCE) eventually emerged victorious from this conflict and also succeeded in gaining control of the northern Euphrates basin in the first half of the 1st millennium BCE. In the centuries to come, control of the wider Euphrates basin shifted from the Neo-Assyrian Empire (which collapsed between 612 and 599 BCE) to the short-lived Median Empire (612–546 BCE) and equally brief Neo-Babylonian Empire (612–539 BCE) in the last years of the 7th century BCE, and eventually to the Achaemenid Empire (539–333 BCE). The Achaemenid Empire was in turn overrun by Alexander the Great, who defeated its last king, Darius III, and died in Babylon in 323 BCE. Subsequent to this, the region came under the control of the Seleucid Empire (312–150 BCE) and the Parthian Empire (150 BCE–226 CE), during which several Neo-Assyrian states such as Adiabene came to rule certain regions of the Euphrates, and was fought over by the Roman Empire, its successor the Byzantine Empire, and the Sassanid Empire (226–638 CE), until the Islamic conquest of the mid-7th century CE. The Battle of Karbala took place near the banks of this river in 680 CE. In the north, the river served as a border between Greater Armenia | at elevations of and amsl, respectively. At the location of the Keban Dam, the two rivers, now combined into the Euphrates, have dropped to an elevation of amsl. From Keban to the Syrian–Turkish border, the river drops another over a distance of less than . Once the Euphrates enters the Upper Mesopotamian plains, its grade drops significantly; within Syria the river falls while over the last stretch between Hīt and the Shatt al-Arab the river drops only . Discharge The Euphrates receives most of its water in the form of rainfall and melting snow, resulting in peak volumes during the months April through May. Discharge in these two months accounts for 36 percent of the total annual discharge of the Euphrates, or even 60–70 percent according to one source, while low runoff occurs in summer and autumn. The average natural annual flow of the Euphrates has been determined from early- and mid-twentieth century records as at Keban, at Hīt and at Hindiya. However, these averages mask the high inter-annual variability in discharge; at Birecik, just north of the Syro–Turkish border, annual discharges have been measured that ranged from a low volume of in 1961 to a high of in 1963. The discharge regime of the Euphrates has changed dramatically since the construction of the first dams in the 1970s. Data on Euphrates discharge collected after 1990 show the impact of the construction of the numerous dams in the Euphrates and of the increased withdrawal of water for irrigation. 
Average discharge at Hīt after 1990 has dropped to per second ( per year). The seasonal variability has likewise changed. The pre-1990 peak volume recorded at Hīt was per second, while after 1990 it is only per second. The minimum volume at Hīt remained relatively unchanged, rising from per second before 1990 to per second afterward. Tributaries In Syria, three rivers add their water to the Euphrates: the Sajur, the Balikh and the Khabur. These rivers rise in the foothills of the Taurus Mountains along the Syro–Turkish border and add comparatively little water to the Euphrates. The Sajur is the smallest of these tributaries, emerging from two streams near Gaziantep and draining the plain around Manbij before emptying into the reservoir of the Tishrin Dam. The Balikh receives most of its water from a karstic spring near 'Ayn al-'Arus and flows due south until it reaches the Euphrates at the city of Raqqa. In terms of length, drainage basin and discharge, the Khabur is the largest of these three. Its main karstic springs are located around Ra's al-'Ayn, from where the Khabur flows southeast past Al-Hasakah, where the river turns south and drains into the Euphrates near Busayrah. Once the Euphrates enters Iraq, there are no more natural tributaries to the Euphrates, although canals connecting the Euphrates basin with the Tigris basin exist. Drainage basin The drainage basins of the Kara Su and the Murat River cover an area of and , respectively. Estimates of the area of the Euphrates drainage basin vary widely, from a low to a high . Recent estimates put the basin area at , and . The greater part of the Euphrates basin is located in Turkey, Syria, and Iraq. According to both Daoudy and Frenken, Turkey's share is 28 percent, Syria's is 17 percent and that of Iraq is 40 percent. Isaev and Mikhailova estimate the percentages of the drainage basin lying within Turkey, Syria and Iraq at 33, 20 and 47 percent respectively. Some sources estimate that approximately 15 percent of the drainage basin is located within Saudi Arabia, while a small part falls inside the borders of Kuwait. Finally, some sources also include Jordan in the drainage basin of the Euphrates; a small part of the eastern desert () drains toward the east rather than to the west. Natural history The Euphrates flows through a number of distinct vegetation zones. Although millennia-long human occupation in most parts of the Euphrates basin has significantly degraded the landscape, patches of original vegetation remain. The steady drop in annual rainfall from the sources of the Euphrates toward the Persian Gulf is a strong determinant for the vegetation that can be supported. In its upper reaches the Euphrates flows through the mountains of Southeast Turkey and their southern foothills, which support a xeric woodland. Plant species in the moister parts of this zone include various oaks, pistachio trees, and Rosaceae (rose/plum family). The drier parts of the xeric woodland zone support less dense oak forest and Rosaceae. Here can also be found the wild variants of many cereals, including einkorn wheat, emmer wheat, oat and rye. South of this zone lies a zone of mixed woodland-steppe vegetation. Between Raqqa and the Syro–Iraqi border the Euphrates flows through a steppe landscape. This steppe is characterised by white wormwood (Artemisia herba-alba) and Chenopodiaceae. Throughout history, this zone has been heavily overgrazed due to the practice of sheep and goat pastoralism by its inhabitants. 
Southeast of the border between Syria and Iraq, true desert begins. This zone supports either no vegetation at all or small pockets of Chenopodiaceae or Poa sinaica. Although today nothing of it survives due to human interference, research suggests that the Euphrates Valley would have supported a riverine forest. Species characteristic of this type of forest include the Oriental plane, the Euphrates poplar, the tamarisk, the ash and various wetland plants. Among the fish species in the Tigris–Euphrates basin, the family Cyprinidae is the most common, with 34 species out of 52 in total. Among the cyprinids, the mangar has good sport fishing qualities, leading the British to nickname it "Tigris salmon." Rafetus euphraticus is an endangered soft-shelled turtle that is limited to the Tigris–Euphrates river system. The Neo-Assyrian palace reliefs from the 1st millennium BCE depict lion and bull hunts in fertile landscapes. Sixteenth- to nineteenth-century European travellers in the Syrian Euphrates basin reported on an abundance of animals living in the area, many of which have become rare or even extinct. Species like gazelle, onager and the now-extinct Arabian ostrich lived in the steppe bordering the Euphrates valley, while the valley itself was home to the wild boar. Carnivorous species include the gray wolf, the golden jackal, the red fox, the leopard and the lion. The Syrian brown bear can be found in the mountains of Southeast Turkey. The presence of European beaver has been attested in the bone assemblage of the prehistoric site of Abu Hureyra in Syria, but the beaver has never been sighted in historical times. River The Hindiya Barrage on the Iraqi Euphrates, based on plans by British civil engineer William Willcocks and finished in 1913, was the first modern water diversion structure built in the Tigris–Euphrates river system. The Hindiya Barrage was followed in the 1950s by the Ramadi Barrage and the nearby Abu Dibbis Regulator, which serve to regulate the flow regime of the Euphrates and to discharge excess flood water into the depression that is now Lake Habbaniyah. Iraq's largest dam on the Euphrates is the Haditha Dam, a earth-fill dam creating Lake Qadisiyah. Syria and Turkey built their first dams in the Euphrates in the 1970s. The Tabqa Dam in Syria was completed in 1973 while Turkey finished the Keban Dam, a prelude to the immense Southeastern Anatolia Project, in 1974. Since then, Syria has built two more dams in the Euphrates, the Baath Dam and the Tishrin Dam, and plans to build a fourth dam – the Halabiye Dam – between Raqqa and Deir ez-Zor. The Tabqa Dam is Syria's largest dam and its reservoir (Lake Assad) is an important source of irrigation and drinking water. It was planned that should be irrigated from Lake Assad, but in 2000 only had been realized. Syria also built three smaller dams 
of Enlightenment, during the Estophile Enlightenment Period (1750–1840). Although Baltic Germans at large regarded the future of Estonians as being a fusion with themselves, the Estophile educated class admired the ancient culture of the Estonians and their era of freedom before the conquests by Danes and Germans in the 13th century.

After the Estonian War of Independence in 1919, the Estonian language became the state language of the newly independent country. In 1945, 97.3% of Estonia's population considered itself ethnic Estonian and spoke the language. When Estonia was invaded and occupied by the Soviet Union in World War II, the status of the Estonian language changed to the first of two official languages (Russian being the other one). As with Latvia, many immigrants entered Estonia under Soviet encouragement. In the second half of the 1970s, the pressure of bilingualism (for Estonians) intensified, resulting in widespread knowledge of Russian throughout the country. The Russian language was termed 'the language of friendship of nations' and was taught to Estonian children, sometimes as early as in kindergarten. Although teaching Estonian to non-Estonians in schools was compulsory, in practice learning the language was often considered unnecessary. During the Perestroika era, the Law on the Status of the Estonian Language was adopted in January 1989. The 1991 collapse of the Soviet Union led to the restoration of the Republic of Estonia's independence. Estonian went back to being the only state language in Estonia, which in practice meant that the use of Estonian was promoted while the use of Russian was discouraged. The return of Soviet immigrants to their countries of origin has brought the proportion of Estonians in Estonia back above 70%. And again as in Latvia, today many of the remaining non-Estonians in Estonia have adopted the Estonian language; about 40% at the 2000 census.

Dialects

The Estonian dialects are divided into two groups – the northern and southern dialects, historically associated with the cities of Tallinn in the north and Tartu in the south, in addition to a distinct kirderanniku dialect, Northeastern coastal Estonian. The northern group consists of the central dialect, which is also the basis for the standard language, the western dialect, roughly corresponding to Lääne County and Pärnu County, the islands' dialect of Saaremaa, Hiiumaa, Muhu and Kihnu, and the eastern dialect on the northwestern shore of Lake Peipus. South Estonian consists of the Tartu, Mulgi, Võro and Seto varieties. These are sometimes considered either variants of South Estonian or separate languages altogether. Also, Seto and Võro distinguish themselves from each other less by language and more by their culture and their respective Christian confession.

Writing system

Alphabet

Estonian employs the Latin script as the basis for its alphabet, which adds the letters ä, ö, ü, and õ, plus the later additions š and ž. The letters c, q, w, x and y are limited to proper names of foreign origin, and f, z, š, and ž appear in loanwords and foreign names only. Ö and Ü are pronounced similarly to their equivalents in Swedish and German. Unlike in standard German but like Swedish (when followed by 'r') and Finnish, Ä is pronounced [æ], as in English mat. The vowels Ä, Ö and Ü are clearly separate phonemes and inherent in Estonian, although the letter shapes come from German. The letter õ denotes /ɤ/, a close-mid back unrounded vowel.
It is almost identical to the Bulgarian ъ and the Vietnamese ơ, and is also used to transcribe the Russian ы.

Orthography

Although Estonian orthography is generally guided by phonemic principles, with each grapheme corresponding to one phoneme, there are some historical and morphological deviations from this: for example, preservation of the morpheme in the declension of a word (writing b, g, d in places where p, k, t is pronounced) and the use of 'i' and 'j'. Where it is very impractical or impossible to type š and ž, they are replaced by sh and zh in some written texts, although this is considered incorrect. Otherwise, the h in sh represents a voiceless glottal fricative, as in Pasha (pas-ha); this also applies to some foreign names. Modern Estonian orthography is based on the Newer Orthography created by Eduard Ahrens in the second half of the 19th century on the model of Finnish orthography. The Older Orthography it replaced was created in the 17th century by Bengt Gottfried Forselius and Johann Hornung based on standard German orthography. Earlier writing in Estonian had by and large used an ad hoc orthography based on Latin and Middle Low German orthography. Some influences of the standard German orthography – for example, writing 'W'/'w' instead of 'V'/'v' – persisted well into the 1930s. Estonian words and names quoted in international publications from Soviet sources are often back-transliterations from the Russian transliteration. Examples are the use of "ya" for "ä" (e.g. Pyarnu instead of Pärnu), "y" instead of "õ" (e.g. Pylva instead of Põlva) and "yu" instead of "ü" (e.g. Pyussi instead of Püssi). Even in the Encyclopædia Britannica one can find "ostrov Khiuma", where "ostrov" means "island" in Russian and "Khiuma" is a back-transliteration from Russian instead of "Hiiumaa" (Hiiumaa > Хийума(а) > Khiuma).

Phonology

There are 9 vowels and 36 diphthongs, 28 of which are native to Estonian.[1] All nine vowels can appear as the first component of a diphthong, but only /ɑ e i o u/ occur as the second component. A vowel characteristic of Estonian is the unrounded back vowel /ɤ/, which

South Estonian languages are based on the ancestors of modern Estonians' migration into the territory of Estonia in at least two different waves, both groups speaking considerably different Finnic vernaculars. Modern standard Estonian has evolved on the basis of the dialects of Northern Estonia. The oldest written records of the Finnic languages of Estonia date from the 13th century. Origines Livoniae, the Chronicle of Henry of Livonia, contains Estonian place names, words and fragments of sentences.

Estonian literature

The earliest extant samples of connected (north) Estonian are the so-called Kullamaa prayers, dating from 1524 and 1528. In 1525 the first book published in the Estonian language was printed. The book was a Lutheran manuscript, which never reached the reader and was destroyed immediately after publication. The first extant Estonian book is a bilingual German–Estonian translation of the Lutheran catechism by S. Wanradt and J. Koell dating to 1535, during the Protestant Reformation period. An Estonian grammar book to be used by priests was printed in German in 1637. The New Testament was translated into southern Estonian in 1686 (northern Estonian, 1715). The two languages were united based on northern Estonian by Anton thor Helle. Writings in Estonian became more significant in the 19th century during the Estophile Enlightenment Period (1750–1840).
The birth of native Estonian literature came between 1810 and 1820, when the patriotic and philosophical poems by Kristjan Jaak Peterson were published. Peterson, who was the first student at the then German-language University of Dorpat to acknowledge his Estonian origin, is commonly regarded as a herald of Estonian national literature and considered the founder of modern Estonian poetry. His birthday, March 14, is celebrated in Estonia as Mother Tongue Day. A fragment from Peterson's poem "Kuu" expresses the claim to reestablish the birthright of the Estonian language:

Kas siis selle maa keel
Laulutuules ei või
Taevani tõustes üles
Igavikku omale otsida?

In English:

Can the language of this land
In the wind of incantation
Rising up to the heavens
Not seek for eternity?

Kristjan Jaak Peterson

In the period from 1525 to 1917, 14,503 titles were published in Estonian; by comparison, between 1918 and 1940, 23,868 titles were published. In modern times Jaan Kross, Jaan Kaplinski and Viivi Luik are three of Estonia's best known and most translated writers.
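The ASCII fallbacks discussed in the orthography section above can be illustrated with a short script. A minimal sketch in Python; the mapping tables are illustrative assumptions drawn from the examples in the text, not an exhaustive standard:

    # Replacement of š/ž by sh/zh where the proper letters cannot be typed
    # (described above as common but formally incorrect).
    ASCII_FALLBACK = {"š": "sh", "ž": "zh", "Š": "Sh", "Ž": "Zh"}

    def ascii_fallback(text: str) -> str:
        """Apply the sh/zh substitution to a string."""
        for letter, substitute in ASCII_FALLBACK.items():
            text = text.replace(letter, substitute)
        return text

    # A few of the Russian back-transliterations quoted above, paired with
    # the correct Estonian forms (illustrative, not exhaustive).
    BACK_TRANSLITERATIONS = {
        "Pyarnu": "Pärnu",   # "ya" standing in for "ä"
        "Pylva": "Põlva",    # "y" standing in for "õ"
        "Pyussi": "Püssi",   # "yu" standing in for "ü"
    }

    if __name__ == "__main__":
        print(ascii_fallback("Žanna sõi šokolaadi"))  # -> Zhanna sõi shokolaadi
        for wrong, right in BACK_TRANSLITERATIONS.items():
            print(wrong, "->", right)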
his innovation. The first (1991), co-edited by Paul Dennithorne Johnston, bore the title To Be or Not: An E-Prime Anthology. For the second, More E-Prime: To Be or Not II, published in 1994, he added a third editor, Jeremy Klein. Bourland and Johnston then edited a third book, E-Prime III: a third anthology, published in 1997.

Different functions of "to be"

In the English language, the verb 'to be' (also known as the copula) has several distinct functions:

identity, of the form "noun copula definite-noun" [The cat is my only pet]; [The cat is Garfield]
class membership, of the form "definite-noun copula noun" [Garfield is a cat]
class inclusion, of the form "noun copula noun" [A cat is an animal]
predication, of the form "noun copula adjective" [The cat is furry]
auxiliary, of the form "noun copula verb" [The cat is sleeping]; [The cat is being bitten by the dog]. The examples illustrate two different uses of 'be' as an auxiliary. In the first, 'be' is part of the progressive aspect, used with "-ing" on the verb; in the second, it is part of the passive, as indicated by the perfect participle of a transitive verb.
existence, of the form "there copula noun" [There is a cat]
location, of the form "noun copula place-phrase" [The cat is on the mat]; [The cat is here]

Bourland sees specifically the "identity" and "predication" functions as pernicious, but advocates eliminating all forms for the sake of simplicity. In the case of the "existence" form (and, less idiomatically, the "location" form), one might for example simply substitute the verb "exists". Other copula-substitutes in English include taste, feel, smell, sound, grow, remain, stay, and turn, among others; a user of E-Prime might use these instead of to be.

Examples

Words not used in E-Prime include: be, being, been, am, is, isn't, are, aren't, was, wasn't, were, and weren't. Contractions formed from a pronoun and a form of to be are also not used, including: I'm, you're, we're, they're, he's, she's, it's, there's, here's, where's, how's, what's, who's, and that's. E-Prime also prohibits contractions of to be found in nonstandard dialects of English, such as ain't.

The different functions of "to be" could be rewritten as follows:

"The cat is my only pet": "I have only a pet cat".
"The cat is Garfield": "I call my cat Garfield".
"Garfield is a cat": "I call my cat Garfield".
"A cat is an animal": "Cat denotes an animal".
"The cat is furry": "The cat feels furry".
"The cat is sleeping": "The cat sleeps".
"The dog is chasing the cat": "The dog chases the cat".
"There is a cat": "I can see a cat".
"The cat is on the mat": "The cat sits on the mat".
"The cat is here": "I can see the cat".

Rationale

Bourland and other advocates also suggest that use of E-Prime leads to a less dogmatic style of language that reduces the possibility of misunderstanding or conflict. Kellogg and Bourland describe misuse of the verb to be as creating a "deity mode of speech", allowing "even the most ignorant to transform their opinions magically into god-like pronouncements on the nature of things".

Psychological effects

While teaching at
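The word lists in the Examples section above lend themselves to a purely mechanical check. A minimal sketch in Python, assuming a naive tokenization; it flags every listed form regardless of sense, so it cannot tell a disallowed "it's" ("it is") from "it's" standing for "it has":

    import re

    # Forms of "to be" and the contractions listed in the Examples section.
    TO_BE = {"be", "being", "been", "am", "is", "isn't", "are", "aren't",
             "was", "wasn't", "were", "weren't", "ain't"}
    CONTRACTIONS = {"i'm", "you're", "we're", "they're", "he's", "she's",
                    "it's", "there's", "here's", "where's", "how's",
                    "what's", "who's", "that's"}

    def eprime_violations(sentence: str) -> list:
        """Return the words in `sentence` that E-Prime would disallow."""
        words = re.findall(r"[a-z']+", sentence.lower())
        return [w for w in words if w in TO_BE or w in CONTRACTIONS]

    print(eprime_violations("The cat is my only pet"))   # ['is']
    print(eprime_violations("The cat sits on the mat"))  # []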
Tunnell has shown a related result: assuming BSD, n is a congruent number if and only if the number of triples of integers (x, y, z) satisfying one explicit ternary quadratic form is twice the number of triples satisfying a second one. The interest in this statement is that the condition is easy to check (a brute-force sketch appears after the congruent-number discussion below). In a different direction, certain analytic methods allow for an estimation of the order of zero in the center of the critical strip for certain L-functions. Admitting BSD, these estimations correspond to information about the rank of families of the corresponding elliptic curves. For example: assuming the generalized Riemann hypothesis and BSD, the average rank of curves given by y^2 = x^3 + ax + b is smaller than 2.

The modularity theorem and its application to Fermat's Last Theorem

The modularity theorem, once known as the Taniyama–Shimura–Weil conjecture, states that every elliptic curve E over Q is a modular curve, that is to say, its Hasse–Weil zeta function is the L-function of a modular form of weight 2 and level N, where N is the conductor of E (an integer divisible by the same prime numbers as the discriminant of E, Δ(E)). In other words, if one writes the L-function for Re(s) > 3/2 in the form L(E, s) = Σ_{n≥1} a(n)·n^(−s), then the expression f(z) = Σ_{n≥1} a(n)·q^n, with q = exp(2πiz), defines a parabolic modular newform of weight 2 and level N. For prime numbers ℓ not dividing N, the coefficient a(ℓ) is equal to ℓ minus the number of solutions of the minimal equation of the curve modulo ℓ. For example, the elliptic curve y^2 + y = x^3 − x, with discriminant (and conductor) 37, is associated to the form f(z) = q − 2q^2 − 3q^3 + 2q^4 − 2q^5 + 6q^6 − q^7 + ... For prime numbers ℓ not equal to 37, one can verify the property about the coefficients. Thus, for ℓ = 3, there are 6 solutions of the equation modulo 3: (0, 0), (0, 2), (1, 0), (1, 2), (2, 0), (2, 2); thus a(3) = 3 − 6 = −3.

The conjecture, going back to the 1950s, was completely proven by 1999 using ideas of Andrew Wiles, who proved it in 1994 for a large family of elliptic curves. There are several formulations of the conjecture. Showing that they are equivalent was a main challenge of number theory in the second half of the 20th century. The modularity of an elliptic curve E of conductor N can be expressed also by saying that there is a non-constant rational map defined over Q, from the modular curve X0(N) to E. In particular, the points of E can be parametrized by modular functions. For example, a modular parametrization of the curve y^2 + y = x^3 − x is given by explicit q-series x(z) and y(z), where, as above, q = exp(2πiz). The functions x(z) and y(z) are modular of weight 0 and level 37; in other words they are meromorphic, defined on the upper half-plane Im(z) > 0, and satisfy x((az + b)/(cz + d)) = x(z), and likewise for y(z), for all integers a, b, c, d with ad − bc = 1 and 37|c.

Another formulation depends on the comparison of Galois representations attached on the one hand to elliptic curves, and on the other hand to modular forms. The latter formulation has been used in the proof of the conjecture. Dealing with the level of the forms (and the connection to the conductor of the curve) is particularly delicate.

The most spectacular application of the conjecture is the proof of Fermat's Last Theorem (FLT). Suppose that for a prime p ≥ 5, the Fermat equation a^p + b^p = c^p has a solution with non-zero integers, hence a counter-example to FLT. Then, as Yves Hellegouarch was the first to notice, the elliptic curve y^2 = x(x − a^p)(x + b^p), of discriminant (abc)^(2p)/2^8, cannot be modular. Thus, the proof of the Taniyama–Shimura–Weil conjecture for this family of elliptic curves (called Hellegouarch–Frey curves) implies FLT.
The proof of the link between these two statements, based on an idea of Gerhard Frey (1985), is difficult and technical. It was established by Kenneth Ribet in 1987.

Integral points

This section is concerned with points P = (x, y) of E such that x is an integer. The following theorem is due to C. L. Siegel: the set of points P = (x, y) of E(Q) such that x is an integer is finite. This theorem can be generalized to points whose x coordinate has a denominator divisible only by a fixed finite set of prime numbers. The theorem can be formulated effectively. For example, if the Weierstrass equation of E has integer coefficients bounded by a constant H, the coordinates (x, y) of a point of E with both x and y integer satisfy an explicit bound in terms of H. For example, the equation y^2 = x^3 + 17 has eight integral solutions with y > 0: (x, y) = (−1, 4), (−2, 3), (2, 5), (4, 9), (8, 23), (43, 282), (52, 375), (5234, 378661). As another example, Ljunggren's equation, a curve whose Weierstrass form is y^2 = x^3 − 2x, has only four solutions with y ≥ 0: (x, y) = (0, 0), (−1, 1), (2, 2), (338, 6214).

Generalization to number fields

Many of the preceding results remain valid when the field of definition of E is a number field K, that is to say, a finite field extension of Q. In particular, the group E(K) of K-rational points of an elliptic curve E defined over K is finitely generated, which generalizes the Mordell–Weil theorem above. A theorem due to Loïc Merel shows that for a given integer d, there are (up to isomorphism) only finitely many groups that can occur as the torsion groups of E(K) for an elliptic curve defined over a number field K of degree d. More precisely, there is a number B(d) such that for any elliptic curve E defined over a number field K of degree d, any torsion point of E(K) is of order less than B(d). The theorem is effective: for d > 1, if a torsion point is of order p, with p prime, then p is bounded by an explicit function of d. As for the integral points, Siegel's theorem generalizes to the following: Let E be an elliptic curve defined over a number field K, x and y the Weierstrass coordinates. Then there are only finitely many points of E(K) whose x-coordinate is in the ring of integers OK. The properties of the Hasse–Weil zeta function and the Birch and Swinnerton-Dyer conjecture can also be extended to this more general situation.

Elliptic curves over a general field

Elliptic curves can be defined over any field K; the formal definition of an elliptic curve is a non-singular projective algebraic curve over K with genus 1 and endowed with a distinguished point defined over K. If the characteristic of K is neither 2 nor 3, then every elliptic curve over K can be written in the form y^2 = x^3 − px − q after a linear change of variables. Here p and q are elements of K such that the right hand side polynomial x^3 − px − q does not have any double roots. If the characteristic is 2 or 3, then more terms need to be kept: in characteristic 3, the most general equation is of the form y^2 = 4x^3 + b2·x^2 + 2b4·x + b6 for arbitrary constants b2, b4, b6 such that the polynomial on the right-hand side has distinct roots (the notation is chosen for historical reasons). In characteristic 2, even this much is not possible, and the most general equation is y^2 + a1·xy + a3·y = x^3 + a2·x^2 + a4·x + a6, provided that the variety it defines is non-singular. If characteristic were not an obstruction, each equation would reduce to the previous ones by a suitable linear change of variables. One typically takes the curve to be the set of all points (x, y) which satisfy the above equation and such that both x and y are elements of the algebraic closure of K.
Points of the curve whose coordinates both belong to K are called K-rational points.

Isogeny

Let E and D be elliptic curves over a field k. An isogeny between E and D is a finite morphism f : E → D of varieties that preserves basepoints (in other words, maps the given point on E to that on D). The two curves are called isogenous if there is an isogeny between them. This is an equivalence relation, symmetry being due to the existence of the dual isogeny. Every isogeny is an algebraic homomorphism and thus induces homomorphisms of the groups of the elliptic curves for k-valued points.

Elliptic curves over finite fields

Let K = Fq be the finite field with q elements and E an elliptic curve defined over K. While the precise number of rational points of an elliptic curve E over K is in general rather difficult to compute, Hasse's theorem on elliptic curves gives us, including the point at infinity, the following estimate: |#E(K) − (q + 1)| ≤ 2√q. In other words, the number of points of the curve grows roughly as the number of elements in the field. This fact can be understood and proven with the help of some general theory; see local zeta function and étale cohomology for example.

real coordinates) by the tangent and secant method can be applied to E. The explicit formulae show that the sum of two points P and Q with rational coordinates has again rational coordinates, since the line joining P and Q has rational coefficients. This way, one shows that the set of rational points of E forms a subgroup of the group of real points of E. As a subgroup of that abelian group, it is itself abelian; that is, P + Q = Q + P.

The structure of rational points

The most important result is that all points can be constructed by the method of tangents and secants starting with a finite number of points. More precisely, the Mordell–Weil theorem states that the group E(Q) is a finitely generated (abelian) group. By the fundamental theorem of finitely generated abelian groups it is therefore a finite direct sum of copies of Z and finite cyclic groups. The proof of that theorem rests on two ingredients: first, one shows that for any integer m > 1, the quotient group E(Q)/mE(Q) is finite (weak Mordell–Weil theorem). Second, one introduces a height function h on the rational points E(Q), defined by h(P0) = 0 and h(P) = log max(|p|, |q|) if P (unequal to the point at infinity P0) has as abscissa the rational number x = p/q (with coprime p and q). This height function h has the property that h(mP) grows roughly like the square of m. Moreover, only finitely many rational points with height smaller than any constant exist on E.

The proof of the theorem is thus a variant of the method of infinite descent and relies on the repeated application of Euclidean divisions on E: let P ∈ E(Q) be a rational point on the curve. Writing P as the sum 2P1 + Q1, where Q1 is a fixed representative of P in E(Q)/2E(Q), the height of P1 is about 1/4 of the one of P (more generally, replacing 2 by any m > 1, and 1/4 by 1/m^2). Redoing the same with P1, that is to say P1 = 2P2 + Q2, then P2 = 2P3 + Q3, etc. finally expresses P as an integral linear combination of points Qi and of points whose height is bounded by a fixed constant chosen in advance: by the weak Mordell–Weil theorem and the second property of the height function, P is thus expressed as an integral linear combination of a finite number of fixed points. So far, the theorem is not effective since there is no known general procedure for determining representatives of E(Q)/mE(Q). The number of copies of Z in E(Q) or, equivalently, the number of independent points of infinite order, is called the rank of E.
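Returning to the finite-field estimate above: for the conductor-37 curve y^2 + y = x^3 − x used as an example earlier, the counts #E(Fp) and the traces ap = p + 1 − #E(Fp) can be checked by brute force. A minimal Python sketch, not an efficient point-counting method:

    def count_points(p: int) -> int:
        """Points of y^2 + y = x^3 - x over F_p, including the point at infinity."""
        n = 1  # the point at infinity
        for x in range(p):
            rhs = (x**3 - x) % p
            n += sum(1 for y in range(p) if (y * y + y) % p == rhs)
        return n

    for p in (2, 3, 5, 7, 11, 13):
        n = count_points(p)
        # Hasse's estimate: |#E(F_p) - (p + 1)| <= 2*sqrt(p)
        assert (n - (p + 1)) ** 2 <= 4 * p
        print(p, n, p + 1 - n)  # the last column is the trace of Frobenius a_p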
The Birch and Swinnerton-Dyer conjecture is concerned with determining the rank. One conjectures that it can be arbitrarily large, even if only examples with relatively small rank are known. The elliptic curve with the largest exactly known rank is y^2 + xy + y = x^3 − x^2 − x + . It has rank 20, found by Noam Elkies and Zev Klagsbrun in 2020. Curves of rank higher than 20 have been known since 1994, with lower bounds on their ranks ranging from at least 21 to at least 28, but their exact ranks are not currently known, and in particular it is not proven which of them have higher rank than the others or which is the true "current champion".

As for the groups constituting the torsion subgroup of E(Q), the following is known: the torsion subgroup of E(Q) is one of the 15 following groups (a theorem due to Barry Mazur): Z/NZ for N = 1, 2, ..., 10, or 12, or Z/2Z × Z/2NZ with N = 1, 2, 3, 4. Examples for every case are known. Moreover, elliptic curves whose Mordell–Weil groups over Q have the same torsion groups belong to a parametrized family.

The Birch and Swinnerton-Dyer conjecture

The Birch and Swinnerton-Dyer conjecture (BSD) is one of the Millennium problems of the Clay Mathematics Institute. The conjecture relies on analytic and arithmetic objects defined by the elliptic curve in question. On the analytic side, an important ingredient is a function of a complex variable, L, the Hasse–Weil zeta function of E over Q. This function is a variant of the Riemann zeta function and Dirichlet L-functions. It is defined as an Euler product, with one factor for every prime number p.

For a curve E over Q given by a minimal equation with integral coefficients, reducing the coefficients modulo p defines an elliptic curve over the finite field Fp (except for a finite number of primes p, where the reduced curve has a singularity and thus fails to be elliptic, in which case E is said to be of bad reduction at p).

The zeta function of an elliptic curve over a finite field Fp is, in some sense, a generating function assembling the information of the number of points of E with values in the finite field extensions Fpn of Fp. It is given by

Z(E(Fp), T) = exp( Σ_{n≥1} #E(F_{p^n}) · T^n / n ).

The interior sum of the exponential resembles the development of the logarithm and, in fact, the so-defined zeta function is a rational function:

Z(E(Fp), T) = (1 − ap·T + p·T^2) / ((1 − T)(1 − p·T)),

where the 'trace of Frobenius' term ap is defined to be the negative of the difference between the number of points on the elliptic curve over Fp and the 'expected' number p + 1, viz.:

ap = p + 1 − #E(Fp).

There are two points to note about this quantity. First, these ap are not to be confused with the ai in the definition of the curve above: this is just an unfortunate clash of notation. Second, we may define the same quantities and functions over an arbitrary finite field of characteristic p, with q replacing p everywhere.

The Hasse–Weil zeta function of E over Q is then defined by collecting this information together, for all primes p. It is defined by

L(E, s) = Π_p (1 − ap·p^(−s) + ε(p)·p^(1−2s))^(−1),

where ε(p) = 1 if E has good reduction at p and 0 otherwise (in which case ap is defined differently from the method above: see Silverman (1986) below). This product converges for Re(s) > 3/2 only. Hasse's conjecture affirms that the L-function admits an analytic continuation to the whole complex plane and satisfies a functional equation relating, for any s, L(E, s) to L(E, 2 − s).
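In the region of convergence the Euler product above can be evaluated numerically. A rough Python sketch for the same conductor-37 curve; the bad factor at p = 37 is simply omitted here, so the result only approximates the partial product and is not a statement about L(E, 1):

    def a_p(p: int) -> int:
        """Trace of Frobenius a_p = p + 1 - #E(F_p) for y^2 + y = x^3 - x."""
        n = 1 + sum(1 for x in range(p) for y in range(p)
                    if (y * y + y) % p == (x**3 - x) % p)
        return p + 1 - n

    def primes(limit: int):
        """Yield the primes below `limit` (simple sieve)."""
        sieve = [False, False] + [True] * (limit - 2)
        for i, is_prime in enumerate(sieve):
            if is_prime:
                yield i
                for j in range(i * i, limit, i):
                    sieve[j] = False

    def L_truncated(s: float, limit: int = 200) -> float:
        """Truncated Euler product for L(E, s); converges for Re(s) > 3/2."""
        value = 1.0
        for p in primes(limit):
            if p == 37:      # bad reduction at the conductor; factor omitted
                continue
            value /= 1 - a_p(p) * p ** -s + p ** (1 - 2 * s)
        return value

    print(L_truncated(2.0))  # partial product approximating L(E, 2)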
In 1999 Hasse's conjecture was shown to be a consequence of the proof of the Shimura–Taniyama–Weil conjecture, which asserts that every elliptic curve over Q is a modular curve; this implies that its L-function is the L-function of a modular form, whose analytic continuation is known. One can therefore speak about the values of L(E, s) at any complex number s. The Birch–Swinnerton-Dyer conjecture relates the arithmetic of the curve to the behavior of its L-function at s = 1. It affirms that the vanishing order of the L-function at s = 1 equals the rank of E and predicts the leading term of the Laurent series of L(E, s) at that point in terms of several quantities attached to the elliptic curve.

Much like the Riemann hypothesis, the truth of the BSD conjecture would have multiple consequences, including the following two: A congruent number is defined as an odd square-free integer n which is the area of a right triangle with rational side lengths. It is known that n is a congruent number if and only if the elliptic curve y^2 = x^3 − n^2·x has a rational point of infinite order; assuming BSD, this is equivalent to its L-function having a zero at s = 1.
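For odd square-free n, the two counting conditions in Tunnell's result quoted earlier are usually stated with the ternary forms 2x^2 + y^2 + 8z^2 = n and 2x^2 + y^2 + 32z^2 = n; that precise formulation is an assumption of the following brute-force Python sketch (and, as in the text, sufficiency rests on BSD):

    from itertools import product

    def representations(n: int, c: int) -> int:
        """Count integer triples (x, y, z) with 2x^2 + y^2 + c*z^2 = n."""
        bound = int(n ** 0.5) + 1
        rng = range(-bound, bound + 1)
        return sum(1 for x, y, z in product(rng, repeat=3)
                   if 2 * x * x + y * y + c * z * z == n)

    def tunnell_congruent(n: int) -> bool:
        """Tunnell's counting criterion for odd square-free n."""
        return representations(n, 8) == 2 * representations(n, 32)

    # 5, 7 and 13 are congruent numbers; 1, 3 and 11 are not.
    for n in (1, 3, 5, 7, 11, 13):
        print(n, tunnell_congruent(n))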
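The integral solutions listed earlier for y^2 = x^3 + 17 and for Ljunggren's equation can be recovered by a naive search. A Python sketch; the search bounds are ad hoc assumptions, chosen large enough to contain the cited solutions:

    from math import isqrt

    def integral_points(f, xs):
        """Naive search for integral points y^2 = f(x) with y >= 0."""
        points = []
        for x in xs:
            value = f(x)
            if value < 0:
                continue
            y = isqrt(value)
            if y * y == value:
                points.append((x, y))
        return points

    # y^2 = x^3 + 17: the eight solutions with y > 0 cited in the text.
    print(integral_points(lambda x: x**3 + 17, range(-3, 6000)))
    # Ljunggren's equation y^2 = x^3 - 2x: the four cited solutions with y >= 0.
    print(integral_points(lambda x: x**3 - 2 * x, range(-2, 400)))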
if at all. The sole surviving genus, Equus, had evolved by the early Pleistocene, and spread rapidly through the world.

Classification

Order Perissodactyla (In addition to Equidae, Perissodactyla includes four species of tapir in a single genus, as well as five living species (belonging to four genera) of rhinoceros.) † indicates extinct taxa.

Family Equidae
  Subfamily †Eohippinae
    Genus †Epihippus
    Genus †Haplohippus
    Genus †Eohippus
    Genus †Minippus
  Subfamily †Propalaeotheriinae
    Genus †Orohippus
    Genus †Pliolophus
    Genus †Protorohippus
    Genus †Sifrhippus
    Genus †Xenicohippus
    Genus †Eurohippus
    Genus †Propalaeotherium
  Subfamily †Anchitheriinae
    Genus †Anchitherium
    Genus †Archaeohippus
    Genus †Desmatippus
    Genus †Hypohippus
    Genus †Kalobatippus
    Genus †Megahippus
    Genus †Mesohippus
    Genus †Miohippus
    Genus †Parahippus
    Genus †Sinohippus
  Subfamily Equinae
    Genus †Merychippus
    Genus †Scaphohippus
    Genus †Acritohippus
    Tribe †Hipparionini
      Genus †Eurygnathohippus
      Genus †Hipparion
      Genus †Hippotherium
      Genus †Nannippus
      Genus †Neohipparion
      Genus †Proboscidipparion
      Genus †Pseudhipparion
    Tribe Equini
      Genus †Haringtonhippus
      Genus †Heteropliohippus
      Genus †Parapliohippus
      Subtribe Protohippina
        Genus †Calippus
        Genus †Protohippus
      Subtribe Equina
        Genus †Astrohippus
        Genus †Dinohippus
        Genus Equus (22 species, 7 extant)
          Equus ferus Wild horse
            Equus ferus caballus Domestic horse
            †Equus ferus ferus Tarpan
            Equus ferus przewalskii Przewalski's horse
          †Equus algericus
          †Equus alaskae
          †Equus lambei Yukon wild horse
          †Equus niobrarensis
          †Equus scotti
          †Equus conversidens Mexican horse
          †Equus semiplicatus
          Subgenus †Amerhippus (this subgenus and its species are possibly synonymous with E. ferus)
            †Equus andium
            †Equus neogeus
            †Equus insulatus
          Subgenus Asinus
            Equus africanus African wild ass
              Equus africanus africanus Nubian wild ass
              Equus africanus asinus Domestic donkey
              †Equus africanus atlanticus Atlas wild ass
              Equus africanus somalicus Somali wild ass
            Equus hemionus Onager or Asiatic wild ass
              Equus hemionus hemionus Mongolian wild ass
              †Equus hemionus hemippus Syrian wild ass
              Equus hemionus khur Indian wild ass
              Equus hemionus kulan Turkmenian kulan
              Equus hemionus onager Persian onager
            Equus kiang Kiang
              Equus kiang chu Northern kiang
              Equus kiang kiang Western kiang
              Equus kiang holdereri Eastern kiang
              Equus kiang polyodon Southern kiang
            †Equus hydruntinus European ass
            †Equus altidens
            †Equus tabeti
            †Equus melkiensis
            †Equus graziosii
          Subgenus Hippotigris
            Equus grevyi Grévy's
This is an incomplete alphabetical list by surname of notable economists, experts in the social science of economics, past and present. For a history of economics, see the article History of economic thought. Only economists with biographical articles in Wikipedia are listed here.

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

See also

History of economic thought
Schools of economic thought
List of Austrian School economists
List
the DOCTOR script in the context of psychotherapy to "sidestep the problem of giving the program a data base of real-world knowledge", as in a Rogerian therapeutic situation, the program had only to reflect back the patient's statements. The algorithms of DOCTOR allowed for a deceptively intelligent response, which deceived many individuals when first using the program.

Weizenbaum named his program ELIZA after Eliza Doolittle, a working-class character in George Bernard Shaw's Pygmalion. According to Weizenbaum, ELIZA's ability to be "incrementally improved" by various users made it similar to Eliza Doolittle, since Eliza Doolittle was taught to speak with an upper-class accent in Shaw's play. However, unlike in Shaw's play, ELIZA is incapable of learning new patterns of speech or new words through interaction alone. Edits must be made directly to ELIZA's active script in order to change the manner by which the program operates.

Weizenbaum first implemented ELIZA in his own SLIP list-processing language, where, depending upon the initial entries by the user, the illusion of human intelligence could appear, or be dispelled through several interchanges. Some of ELIZA's responses were so convincing that Weizenbaum and several others have anecdotes of users becoming emotionally attached to the program, occasionally forgetting that they were conversing with a computer. Weizenbaum's own secretary reportedly asked Weizenbaum to leave the room so that she and ELIZA could have a real conversation. Weizenbaum was surprised by this, later writing: "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."

In 1966, interactive computing (via a teletype) was new. It was 15 years before the personal computer became familiar to the general public, and three decades before most people encountered attempts at natural language processing in Internet services like Ask.com or PC help systems such as Microsoft Office Clippit. Although those programs included years of research and work, ELIZA remains a milestone simply because it was the first time a programmer had attempted such a human-machine interaction with the goal of creating the illusion (however brief) of human–human interaction. At the ICCC in 1972, ELIZA was brought together with another early artificial-intelligence program named PARRY for a computer-only conversation. While ELIZA was built to speak as a doctor, PARRY was intended to simulate a patient with schizophrenia.

Design

Weizenbaum originally wrote ELIZA in MAD-Slip for CTSS on an IBM 7094, as a program to make natural-language conversation possible with a computer. To accomplish this, Weizenbaum identified five "fundamental technical problems" for ELIZA to overcome: the identification of critical words, the discovery of a minimal context, the choice of appropriate transformations, the generation of responses appropriate to the transformation or in the absence of critical words, and the provision of an ending capacity for ELIZA scripts. Weizenbaum solved these problems and made ELIZA such that it had no built-in contextual framework or universe of discourse. However, this required ELIZA to have a script of instructions on how to respond to inputs from users. ELIZA starts its process of responding to an input by a user by first examining the text input for a "keyword".
A "keyword" is a word designated as important by the acting ELIZA script, which assigns to each keyword a precedence number, or a RANK, designed by the programmer. If such words are found, they are put into a "keystack", with the keyword of the highest RANK at the top. The input sentence is then manipulated and transformed as the rule associated with the keyword of the highest RANK directs. For example, when the DOCTOR script encounters words such as "alike" or "same", it would output a message pertaining to similarity, in this case “In what way?”, as these words had high precedence number. This also demonstrates how certain words, as dictated by the script, can be manipulated regardless of contextual considerations, such as switching first-person pronouns and second-person pronouns and vice versa, as these too had high precedence numbers. Such words with high precedence numbers are deemed superior to conversational patterns and are treated independently of contextual patterns. Following the first examination, the next step of the process is to apply an appropriate transformation rule, which includes two parts: the "decomposition rule" and the "reassembly rule". First, the input is reviewed for syntactical patterns in order to establish the minimal context necessary to respond. Using the keywords and other nearby words from the input, different disassembly rules are tested until an appropriate pattern is found. Using the script's rules, the sentence is then "dismantled" and arranged into sections of the component parts as the "decomposition rule for the highest-ranking keyword" dictates. The example that Weizenbaum gives is the input "I are very helpful" (remembering that "I" is "You" transformed), which is broken into (1) empty (2) "I" (3) "are" (4) "very helpful". The decomposition rule has broken the phrase into four small segments that contain both the keywords and the information in the sentence. The decomposition rule then designates a particular reassembly rule, or set of reassembly rules, to follow when reconstructing the sentence. The reassembly rule takes the fragments of the input that the decomposition rule had created, rearranges them, and adds in programmed words to create a response. Using Weizenbaum's example previously stated, such a reassembly rule would take the fragments and apply them to the phrase | user inputs and engage in discourse following the rules and directions of the script. The most famous script, DOCTOR, simulated a Rogerian psychotherapist (in particular, Carl Rogers, who was well-known for simply parroting back at patients what they had just said), and used rules, dictated in the script, to respond with non-directional questions to user inputs. As such, ELIZA was one of the first chatterbots and one of the first programs capable of attempting the Turing test. ELIZA's creator, Weizenbaum, regarded the program as a method to show the superficiality of communication between man and machine, but was surprised by the number of individuals who attributed human-like feelings to the computer program, including Weizenbaum’s secretary. Many academics believed that the program would be able to positively influence the lives of many people, particularly those suffering from psychological issues, and that it could aid doctors working on such patients' treatment. While ELIZA was capable of engaging in discourse, ELIZA could not converse with true understanding. 
However, many early users were convinced of ELIZA's intelligence and understanding, despite Weizenbaum's insistence to the contrary.

Overview

Joseph Weizenbaum's ELIZA, running the DOCTOR script, was created to provide a parody of "the responses of a non-directional psychotherapist in an initial psychiatric interview" and to "demonstrate that the communication between man and machine was superficial". While ELIZA is best known for acting in the manner of a psychotherapist, the speech patterns are due to the data and instructions supplied by the DOCTOR script. ELIZA itself examined the text for keywords, applied values to said keywords, and transformed the input into an output; the script that ELIZA ran determined the keywords, set the values of keywords, and set the rules of transformation for the output.
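The keyword, RANK, decomposition and reassembly machinery described above can be caricatured in a few lines. The toy script below is an illustrative stand-in, not Weizenbaum's actual DOCTOR script; the keystack is reduced to a sorted list and pronoun switching is omitted:

    import re

    # keyword -> (RANK, decomposition pattern, reassembly template)
    SCRIPT = {
        "alike": (10, r".*", "In what way?"),
        "same":  (10, r".*", "In what way?"),
        "i am":  (5,  r".*i am (.*)", "How long have you been {0}?"),
        "you":   (2,  r".*you (.*)", "Why do you say I {0}?"),
    }

    def respond(text: str) -> str:
        text = text.lower().strip(".!?")
        # Keystack: keywords found in the input, highest RANK first.
        keystack = sorted((kw for kw in SCRIPT if kw in text),
                          key=lambda kw: -SCRIPT[kw][0])
        for keyword in keystack:
            rank, pattern, template = SCRIPT[keyword]
            match = re.match(pattern, text)
            if match:
                # Decomposition yields the fragments; reassembly fills
                # the template with them.
                return template.format(*match.groups())
        return "Please go on."  # default when no keyword matches

    print(respond("Men are all alike."))  # -> In what way?
    print(respond("I am very unhappy."))  # -> How long have you been very unhappy?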
named for the 1966 chatbot ELIZA, developed by MIT computer scientist Joseph Weizenbaum. When executing Weizenbaum's DOCTOR script, ELIZA parodied a Rogerian psychotherapist, largely by rephrasing the patient's replies as questions:

Human: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Human: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Human: It's true. I'm unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?

Though designed strictly as a mechanism to support "natural language conversation" with a computer, ELIZA's DOCTOR script was found to be surprisingly successful in eliciting emotional responses from users who, in the course of interacting with the program, began to ascribe understanding and motivation to the program's output. As Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." Indeed, ELIZA's code had not been designed to evoke this reaction in the first place. Upon observation, researchers discovered users unconsciously assuming ELIZA's questions implied interest and emotional involvement in the topics discussed, even when they consciously knew that ELIZA did not simulate emotion.

Significance to automated labor

ELIZA convinced some users that a machine was human. This shift in human-machine interaction marked progress in technologies emulating human behavior. Two groups of chatbots are distinguished by William Meisel: "general personal assistants" and "specialized digital assistants". General digital assistants have been integrated into personal devices, with skills like sending messages, taking notes, checking calendars, and setting appointments. Specialized digital assistants "operate in very specific domains or help with very specific tasks". Such digital assistants are programmed to aid productivity by assuming behaviors analogous to humans. Joseph Weizenbaum considered that not every part of human thought could be reduced to logical formalisms and that "there are some acts of thought that ought to be attempted only by humans".
He also observed that we develop emotional involvement with machines if we interact with them as humans. When chatbots are anthropomorphized, they tend to portray gendered features as a way
(110 001 110)2, we take a window of length 3 using the 2^k-ary method algorithm and calculate 1, x^3, x^6, x^12, x^24, x^48, x^49, x^98, x^99, x^198, x^199, x^398. But we can also compute 1, x^3, x^6, x^12, x^24, x^48, x^96, x^192, x^199, x^398, which saves one multiplication and amounts to evaluating (110 001 110)_2. Here is the general algorithm:

Input: An element x of G, a non-negative integer n = (n_{l-1}, n_{l-2}, ..., n_0)_2, a parameter k > 0 and the pre-computed values x^3, x^5, ..., x^(2^k - 1).
Output: The element x^n in G.
Algorithm:
    y := 1; i := l - 1
    while i > -1 do
        if n_i = 0 then
            y := y^2
            i := i - 1
        else
            s := max{i - k + 1, 0}
            while n_s = 0 do s := s + 1
            for h := 1 to i - s + 1 do y := y^2
            u := (n_i, n_{i-1}, ..., n_s)_2
            y := y * x^u
            i := s - 1
    return y

Montgomery's ladder technique

Many algorithms for exponentiation do not provide defence against side-channel attacks. Namely, an attacker observing the sequence of squarings and multiplications can (partially) recover the exponent involved in the computation. This is a problem if the exponent should remain secret, as with many public-key cryptosystems. A technique called "Montgomery's ladder" addresses this concern. Given the binary expansion of a positive, non-zero integer n = (n_{k-1} ... n_0)_2 with n_{k-1} = 1, we can compute x^n as follows:

    x1 := x; x2 := x^2
    for i = k - 2 down to 0 do
        if n_i = 0 then
            x2 := x1 * x2; x1 := x1^2
        else
            x1 := x1 * x2; x2 := x2^2
    return x1

The algorithm performs a fixed sequence of operations (up to log n): a multiplication and a squaring take place for each bit in the exponent, regardless of the bit's specific value. A similar algorithm for multiplication by doubling exists. This specific implementation of Montgomery's ladder is not yet protected against cache timing attacks: memory access latencies might still be observable to an attacker, as different variables are accessed depending on the value of bits of the secret exponent. Modern cryptographic implementations use a "scatter" technique to make sure the processor always misses the faster cache.

Fixed-base exponent

There are several methods which can be employed to calculate x^n when the base is fixed and the exponent varies. As one can see, precomputations play a key role in these algorithms.

Yao's method

Yao's method is orthogonal to the 2^k-ary method, where the exponent is expanded in radix b = 2^k and the computation is performed as in the algorithm above. Let n, w, b, and h be integers. Let the exponent n be written as n = sum over i = 0, ..., w - 1 of n_i * b_i, where 0 <= n_i < h for all i. Let x_i = x^(b_i). Then the algorithm uses the equality

    x^n = product over i of x_i^(n_i) = product over j = 1, ..., h - 1 of (product over those i with n_i = j of x_i)^j.

Given the element x of G, and the exponent n written in the above form, along with the precomputed values x^(b_0), ..., x^(b_{w-1}), the element x^n is calculated using the algorithm below:

    y := 1; u := 1; j := h - 1
    while j > 0 do
        for i = 0 to w - 1 do
            if n_i = j then u := u * x^(b_i)
        y := y * u
        j := j - 1
    return y

If we set h = 2^k and b_i = h^i, then the n_i values are simply the digits of n in base h. Yao's method collects in u first those x_i that appear to the highest power h - 1; in the next round those with power h - 2 are collected in u as well, etc. The variable y is multiplied h - 1 times with the initial u, h - 2 times with the next highest powers, and so on. The algorithm uses w + h - 2 multiplications, and w + 1 elements must be stored to compute x^n.

Euclidean method

The Euclidean method was first introduced in Efficient exponentiation
using precomputation and vector addition chains by P. de Rooij. This method for computing x^n in a group G, where n is a natural integer, whose algorithm is given below, uses the following equality recursively:

    x_0^(n_0) * x_1^(n_1) = (x_0 * x_1^q)^(n_0) * x_1^(n_1 mod n_0), where q = floor(n_1 / n_0).

In other words, a Euclidean division of the exponent n_1 by n_0 is used to return a quotient q and a remainder n_1 mod n_0.
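To make the fixed multiply-and-square pattern of Montgomery's ladder concrete, here is a brief Python sketch of the pseudocode above, specialized to modular arithmetic (the modulus argument is an addition for the sake of a runnable example). Note that a high-level Python version like this is not genuinely constant-time: big-integer arithmetic and branching still leak timing information, so it illustrates the structure of the ladder, not a hardened implementation.

def montgomery_ladder(x, n, mod):
    """Compute x**n % mod with one multiplication and one squaring per exponent bit."""
    if n == 0:
        return 1 % mod
    x1, x2 = x % mod, (x * x) % mod              # invariant: x2 == x1 * x (mod mod)
    for i in range(n.bit_length() - 2, -1, -1):  # scan bits below the leading 1
        if (n >> i) & 1 == 0:
            x2 = (x1 * x2) % mod                 # multiply
            x1 = (x1 * x1) % mod                 # square
        else:
            x1 = (x1 * x2) % mod                 # multiply
            x2 = (x2 * x2) % mod                 # square
    return x1

# Quick check against Python's built-in modular exponentiation:
assert montgomery_ladder(7, 398, 1000003) == pow(7, 398, 1000003)

Whichever branch is taken, each iteration performs exactly one multiplication and one squaring, which is the property that frustrates an attacker counting operations.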
as part of generating the mature RNA. Just as the entire set of genes for a species constitutes the genome, the entire set of exons constitutes the exome. History The term exon derives from the expressed region and was coined by American biochemist Walter Gilbert in 1978: "The notion of the cistron… must be replaced by that of a transcription unit containing regions which will be lost from the mature messenger – which I suggest we call introns (for intragenic regions) – alternating with regions which will be expressed – exons." This definition was originally made for protein-coding transcripts that are spliced before being translated. The term later came to include sequences removed from rRNA, tRNA, and other ncRNAs, and it was also later used for RNA molecules originating from different parts of the genome that are then ligated by trans-splicing. Contribution to genomes and size distribution Although unicellular eukaryotes such as yeast have either no introns or very few, metazoans and especially vertebrate genomes have a large fraction of non-coding DNA. For instance, in the human genome only 1.1% of the genome is spanned by exons, whereas 24% is in introns, with 75% of the genome being intergenic DNA. This can provide a practical advantage in omics-aided health care (such as precision medicine) because it makes commercialized whole exome sequencing a smaller and less expensive challenge than commercialized whole genome sequencing. The large variation in genome size and C-value across life forms has posed an interesting challenge called the C-value enigma. Across all eukaryotic genes in GenBank, there were (in 2002), on average, 5.48 exons per protein-coding gene. The average exon encoded 30–36 amino acids.
While the longest exon in the human genome is 11,555 bp long, several exons have been found to be only 2 bp long. A single-nucleotide exon has been reported from the Arabidopsis genome. In humans, like protein-coding mRNAs, most non-coding RNAs also contain multiple exons. Structure
Oklahoma, and Texas were rebranded to Enco. That same year, Enco appeared on former Carter stations in the Midwest and the Pacific Northwest. In 1963, Humble Oil and Tidewater Oil Company began negotiating a sale of Tidewater's West Coast refining and marketing operations. The sale would have given Humble Oil many existing Flying A stations and distributorships, as well as a refinery in California, the nation's fastest-growing gasoline market. However, the Justice Department objected to the sale on anti-trust grounds. (In 1966, Phillips Petroleum Company bought Tidewater's western properties and rebranded all Flying A outlets to Phillips 66.) Humble Oil continued to expand its West Coast operations, adding California to its marketing territory, building many new Enco stations, and rebranding others. In 1967, Humble Oil purchased all remaining Signal stations from Standard Oil Company of California (Chevron) and rebranded them as Enco outlets, greatly increasing Enco's presence in California. Finally, in 1969, Humble Oil opened a new refinery in Benicia, California. In 1966, the U.S. Justice Department ordered Humble Oil to "cease and desist" from using the Esso brand at stations in several southeastern states, following protests from Standard Oil of Kentucky (Kyso), which was a Standard Oil of California subsidiary in the process of rebranding its Standard stations to Chevron. By 1967, Humble Oil's Esso stations in the Southeast were rebranded to Enco. In the 1960s and early 1970s, Humble Oil continued to have difficulties promoting itself as a nationwide marketer of petroleum products, despite a number of high-profile marketing strategies. These included the popular "Put a Tiger in Your Tank" advertising campaign and accompanying tiger mascot created by American illustrator Bob Jones, to promote Enco Extra and Esso Extra gasolines. Humble Oil also used similar logotypes for both brands, the Humble name in all Enco and Esso advertising, and uniform station designs regardless of brand. In addition, Humble Oil was a major promoter and broadcast sponsor for college football in the Pacific-8 (now Pac-12) and Southwestern conferences. But Humble Oil still faced stiff competition from national brands such as Shell and Texaco, which at that time was the only company to market under one brand name in all 50 states. By the late 1960s, Humble officials realized that the time had come to develop a new brand name that could be used nationwide. At first, consideration was given to simply rebranding all stations as Enco, but that was shelved when it was learned that the word "Enco" is similar in pronunciation to the Japanese slang term enko, meaning "stalled car" (an abbreviation of enjin no kosho, "engine breakdown"). Prior to its purchase by Standard Oil of New Jersey, Humble Oil had conducted a study titled "Radiocarbon Evidence on the Dilution of Atmospheric and Oceanic Carbon by Carbon from Fossil Fuels" in 1957. Exxon Corporation The company changed its corporate name from Standard Oil of New Jersey to "Exxon Corporation". Once Exxon was officially constituted on January 1, 1973, the company replaced the Esso, Enco, and Humble brands in the United States. Exxon was established as the new, unified brand name for all former Enco and Esso outlets. The Esso name was a trademark of Standard Oil Company of New Jersey and attracted protests from other Standard Oil spinoffs because of its phonetic similarity to the acronym of the name of the parent company, Standard Oil. As a result, Standard Oil Company of New Jersey was restricted from using Esso in the U.S., except in those states awarded to it in the 1911 Standard Oil antitrust settlement. The company initially planned to change its name to "Exon", in keeping with the four-letter format of Enco and Esso. However, during the planning process, it was noted that James Exon was the governor of Nebraska. Renaming the company after a sitting governor seemed ill-advised. George T. Piercy, a senior member of the board of directors, suggested adding an X, resulting in the new EXXON name. In states where it was restricted from using the Esso name, the company marketed under the Humble or Enco brands. The Humble brand was used at Texas stations for decades, as those operations were under the direction of Standard Oil Company of New Jersey affiliate Humble Oil & Refining Company.
In the middle to late 1950s, the use of the Humble brand spread to other southwestern states, including Arizona, New Mexico, and Oklahoma. The rebranding came after successful test-marketing of the Exxon name, under two experimental logos, in the fall and winter of 1971–1972. Along with the new name, Exxon settled on a rectangular logo using red lettering and blue trim on a white background, similar to the familiar color scheme on the old Enco and Esso logos. The unrestricted international use of the popular Esso brand prompted Exxon to continue using it outside the U.S. Esso is the only widely used Standard Oil descendant brand left in existence. Others, such as Chevron, maintain a few Standard-branded stations in specific states in order to retain their trademarks and prevent others from using them. On March 24, 1989, in what is regarded as one of the worst oil spills in American history, a tanker owned by Exxon, the Exxon Valdez, struck Bligh Reef, spilling its cargo of over ten million gallons of crude oil into Prince William Sound in Alaska and causing the deaths of hundreds of thousands of seabirds and sea mammals. The ship was piloted by a captain with a history of drunk driving convictions, and Exxon was ordered by a jury to pay punitive damages in the amount of $5 billion. This judgment was eventually reduced, after multiple appeals, to just $500 million by 2008. As a result of the COVID-19 pandemic, in July 2020, Exxon announced deep spending and job position cuts in
been off the port side. Cousins ordered a course change as the ship was in danger. Cousins phoned Captain Hazelwood, but before their conversation could finish, the ship grounded. At 12:04 a.m., accompanied by what the helmsman and Cousins described as "a bumpy ride" and "six very sharp jolts" respectively, the ship ran aground on Bligh Reef. Carried by its own momentum, the ship ended up perched on its middle on a pinnacle of rock. Eight of the eleven cargo holds were punctured, and 5.8 million gallons of oil drained from the ship within 3 hours and 15 minutes. Thirty minutes after the grounding, following numerous attempts to dislodge the ship under her own power, Captain Hazelwood radioed the Coast Guard to report the grounding. For more than 45 minutes after the grounding, the captain attempted to maneuver free of the reef, despite being informed by First Mate James Kunkel that the vessel was not structurally sound without the reef supporting it.

Multiple factors have been identified as contributing to the incident:
- Exxon Shipping Company failed to supervise the master (ship's captain) and provide a rested and sufficient crew for Exxon Valdez. The NTSB found this practice was widespread throughout the industry, prompting a safety recommendation to Exxon and to the industry.
- The third mate failed to properly maneuver the vessel, possibly due to fatigue or excessive workload.
- Exxon Shipping Company failed to properly maintain the Raytheon Collision Avoidance System (RAYCAS) radar, which, if functional, would have indicated to the third mate an impending collision with Bligh Reef by detecting the radar reflector placed on the next rock inland from Bligh Reef for the purpose of keeping ships on course. This cause was brought forward by Greg Palast and is not presented in the official accident report.

Captain Hazelwood, who was widely reported to have been drinking heavily that night, was not at the controls when the ship struck the reef. Exxon blamed Hazelwood for the grounding of the tanker, but he accused the corporation of making him a scapegoat. In a 1990 trial he was charged with criminal mischief, reckless endangerment, and piloting a vessel while intoxicated, but was cleared of those three charges. He was convicted of misdemeanor negligent discharge of oil. Twenty-one witnesses testified that he did not appear to be under the influence of alcohol around the time of the accident. Journalist Greg Palast stated in 2008:

Other factors, according to an MIT course entitled "Software System Safety" by Professor Nancy G. Leveson, included:
- Ships were not informed that the previous practice of the Coast Guard tracking ships out to Bligh Reef had ceased.
- The oil industry promised, but never installed, state-of-the-art iceberg monitoring equipment.
- Exxon Valdez was sailing outside the normal sea lane to avoid small icebergs thought to be in the area.
- Coast Guard vessel inspections in Valdez were not performed, and the number of staff was reduced.
- Lack of available equipment and personnel hampered the spill cleanup.

This disaster resulted in the International Maritime Organization introducing comprehensive marine pollution prevention rules (MARPOL) through various conventions. The rules were ratified by member countries and, under International Ship Management rules, ships are now operated with a common objective of "safer ships and cleaner oceans."
In 2009, Captain Hazelwood offered a "heartfelt apology" to the people of Alaska, suggesting he had been wrongly blamed for the disaster: "The true story is out there for anybody who wants to look at the facts, but that's not the sexy story and that's not the easy story," he said. Hazelwood said he felt Alaskans always gave him a fair shake. Clean-up and major effects Chemical dispersant, a surfactant and solvent mixture, was applied to the slick by a private company on March 24 with a helicopter, but the helicopter missed the target area. Scientific data on its toxicity were either thin or incomplete. In addition, public acceptance of new, widespread chemical treatment was lacking. Landowners, fishing groups, and conservation organizations questioned the use of chemicals on hundreds of miles of shoreline when other alternatives might have been available. According to a report by David Kirby for TakePart, the main component of the Corexit formulation used during cleanup, 2-butoxyethanol, was identified as "one of the agents that caused liver, kidney, lung, nervous system, and blood disorders among cleanup crews in Alaska following the 1989 Exxon Valdez spill". Mechanical cleanup was started shortly afterward using booms and skimmers, but the skimmers were not readily available during the first 24 hours following the spill, and thick oil and kelp tended to clog the equipment. Despite public insistence on a complete cleanup, only about 10% of the spilled oil was actually removed. Exxon was widely criticized for its slow response to cleaning up the disaster, and John Devens, the mayor of Valdez, said his community felt betrayed by Exxon's inadequate response to the crisis. More than 11,000 Alaska residents, along with some Exxon employees, worked throughout the region to try to restore the environment. Though diligent, the clean-up effort failed to contain the majority of the spilled oil, a failure for which Exxon has been heavily blamed. On November 26, 1984, Ronald A. Kreizenbeck (Director, Alaska Operations Office) had informed the Coast Guard that the EPA suspected, following a recent site visit during an "Annual Marine Drill", that the Port of Valdez was not prepared to "efficiently respond to a major spill event". In the letter, he stated that "[it] appears that the Vikoma boom and/or deployment vessels used may not be adequate to handle the harsh environmental conditions of Port Valdez". Because Prince William Sound contained many rocky coves where the oil was collected, the decision was made to displace it with high-pressure hot water. However, this also displaced and destroyed the microbial populations on the shoreline; many of these organisms (e.g. plankton) are the basis of the coastal marine food chain, and others (e.g. certain bacteria and fungi) are capable of facilitating the biodegradation of oil. At the time, both scientific advice and public pressure were to clean everything, but since then, a much greater understanding of natural and facilitated remediation processes has developed, due in part to the opportunity presented for study by the Exxon Valdez spill. Despite the extensive cleanup attempts, less than ten percent of the oil was recovered. Both long-term and short-term effects of the oil spill have been studied. Immediate effects include the deaths of between 100,000 and 250,000 seabirds, at least 2,800 sea otters, approximately 12 river otters, 300 harbor seals, 247 bald eagles, and 22 orcas, and an unknown number of salmon and herring.
Nine years after the disaster, evidence of negative oil spill effects on marine birds was found in the following species: cormorants, goldeneyes, mergansers, murres, and pigeon guillemots. Although the volume of oil has declined considerably, with the remaining oil amounting to only about 0.14–0.28% of the original spilled volume, studies suggest that the area of oiled beach has changed little since 1992. A study by the National Marine Fisheries Service, NOAA, in Juneau determined that by 2001 approximately 90 tonnes of oil remained on beaches in Prince William Sound in the sandy soil of the contaminated shoreline, with annual loss rates declining from 68% per year prior to 1992 to 4% per year after 2001. The remaining oil, lasting far longer than anticipated, has resulted in more long-term losses of species than had been expected. Laboratory experiments found that at levels as low as one part per billion, polycyclic aromatic hydrocarbons are toxic to salmon and herring eggs. Species as diverse as sea otters, harlequin ducks, and orcas suffered immediate and long-term losses. Oiled mussel beds and other tidal shoreline habitats may take up to 30 years to recover. ExxonMobil denied concerns over the remaining oil, stating that it anticipated the remaining fraction would not cause long-term ecological impacts. According to the conclusions of ExxonMobil's study: "We've done 350 peer-reviewed studies of Prince William Sound, and those studies conclude that Prince William Sound has recovered, it's healthy and it's thriving." On March 24, 2014, the twenty-fifth anniversary

finish cleaning up oiled beaches and attempting to restore the crippled herring population. As of 2012, the indirect and long-term sublethal effects of oil on shorebirds had been measured in relatively few studies. Litigation and cleanup costs In October 1989, Exxon filed a suit against the State of Alaska, claiming that the state had interfered with Exxon's attempts to clean up the spill by refusing to approve the use of dispersant chemicals until the night of the 26th. The State of Alaska disputed this claim, stating that there was a long-standing agreement to allow the use of dispersants to clean up spills, thus Exxon did not require permission to use them, and that, in fact, Exxon had not had enough dispersant on hand to effectively handle a spill of the size created by Exxon Valdez. Exxon filed claims in October 1990 against the Coast Guard, asking to be reimbursed for cleanup costs and damages awarded to plaintiffs in any lawsuits filed by the State of Alaska or the federal government against Exxon. The company claimed that the Coast Guard was "wholly or partially responsible" for the spill, because they had granted mariners' licenses to the crew of the Valdez, and because they had given Exxon Valdez permission to leave regular shipping lanes to avoid ice. They also reiterated the claim that the Coast Guard had delayed cleanup by refusing to give permission to immediately use chemical dispersants on the spill. Also, in 1991, Exxon made a quiet, separate financial settlement of damages with a group of seafood producers known as the Seattle Seven for the disaster's effect on the Alaskan seafood industry. The agreement granted $63.75 million to the Seattle Seven, but stipulated that the seafood companies would have to repay almost all of any punitive damages awarded in other civil proceedings.
The $5 billion in punitive damages was awarded later, and the Seattle Seven's share could have been as high as $750 million if the damages award had held. Other plaintiffs have objected to this secret arrangement, and when it came to light, Judge Holland ruled that Exxon should have told the jury at the start that an agreement had already been made, so the jury would know exactly how much Exxon would have to pay. In the case of Exxon v. Baker, an Anchorage jury awarded $287 million for actual damages and $5 billion for punitive damages. To protect itself in case the judgment was affirmed, Exxon obtained a $4.8 billion credit line from J.P. Morgan & Co., who created the first modern credit default swap so that they would not have to hold as much money in reserve against the risk of Exxon's default. Meanwhile, Exxon appealed the ruling, and the 9th U.S. Circuit Court of Appeals ordered the trial judge, Russel Holland, to reduce the punitive damages. On December 6, 2002, Holland announced that he had reduced the damages to $4 billion, which he concluded was justified by the facts of the case and was not grossly excessive. Exxon appealed again and the case returned to Holland to be reconsidered in light of a recent Supreme Court ruling in a similar case. Holland increased the punitive damages to $4.5 billion, plus interest. After more appeals, in December 2006 the damages award was cut to $2.5 billion. The court of appeals cited recent Supreme Court rulings relative to limits on punitive damages. Exxon appealed again. On May 23, 2007, the 9th Circuit Court of Appeals denied ExxonMobil's request for a third hearing and let stand its ruling that Exxon owed $2.5 billion in punitive damages. Exxon then appealed to the Supreme Court, which agreed to hear the case. On February 27, 2008, the Supreme Court heard oral arguments. Justice Samuel Alito, who at the time owned between $100,000 and $250,000 in Exxon stock, recused himself from the case. In a decision issued June 25, 2008, written by Justice David Souter, the court vacated the $2.5 billion award and remanded the case back to the lower court, finding that the damages were excessive with respect to maritime common law. Exxon's actions were deemed "worse than negligent but less than malicious." The punitive damages were further reduced to an amount of $507.5 million. The Court's ruling was that maritime punitive damages should not exceed the compensatory damages, supported by a precedent dating from 1818. Senate Judiciary Committee Chairman Patrick J. Leahy has decried the ruling as "another in a line of cases where this Supreme Court has misconstrued congressional intent to benefit large corporations." Exxon's official position was that punitive damages greater than $25 million were not justified because the spill resulted from an accident, and because Exxon spent an estimated $2 billion cleaning up the spill and a further $1 billion to settle related civil and criminal charges. Attorneys for the plaintiffs contended that Exxon bore responsibility for the accident because the company "put a drunk in charge of a tanker in Prince William Sound." Exxon recovered a significant portion of clean-up and legal expenses through insurance claims associated with the grounding of Exxon Valdez. As of December 15, 2009, Exxon had paid the entire $507.5 million in punitive damages, including lawsuit costs, plus interest, which were further distributed to thousands of plaintiffs. 
Political consequences and reforms Coast Guard report A 1989 report by the Coast Guard's U.S. National Response Center summarized the event and made many recommendations, noting that neither Exxon, Alyeska Pipeline Service Company, the State of Alaska, nor the federal government was prepared for a spill of this magnitude. Oil Pollution Act of 1990 In response to the spill, the United States Congress passed the Oil Pollution Act of 1990 (OPA). The legislation included a clause that prohibits any vessel that, after March 22, 1989, has caused an oil spill of more than one million US gallons in any marine area, from operating in Prince William Sound. In April 1998, the company argued in a legal action against the federal government that the ship should be allowed back into Alaskan waters. Exxon claimed OPA was effectively a bill of attainder, a regulation that was unfairly directed at Exxon alone. In 2002, the 9th Circuit Court of Appeals ruled against Exxon. As of 2002, OPA had prevented 18 ships from entering Prince William Sound. OPA also set a schedule for the gradual phase-in of a double-hull design, providing an additional layer between the oil tanks and the ocean. While a double hull would likely not have prevented the Exxon Valdez disaster, a Coast Guard study estimated that it would have cut the amount of oil spilled by 60 percent. Exxon Valdez was towed to San Diego, arriving on July 10. Repairs began on July 30. Approximately 1,600 tons of steel were removed and replaced. In June 1990, the tanker, renamed Exxon Mediterranean, left the harbor after $30 million of repairs. In 1993, owned by SeaRiver Maritime, it was named S/R Mediterranean, then in 2005 Mediterranean. In 2008 the vessel was acquired by a Hong Kong company that operated her as Dong Fang Ocean, then in 2011 renamed her Oriental Nicety. In August 2012, she was beached at Dalian, China, and dismantled. Alaska regulations In the aftermath of the spill, Alaska governor Steve Cowper issued an executive order requiring two tugboats to escort every loaded tanker from Valdez out through Prince William Sound to Hinchinbrook Entrance. As the plan evolved in the 1990s, one of the two routine tugboats was replaced with an Escort Response Vehicle (ERV). Tankers at Valdez are no longer single-hulled. Congress enacted legislation requiring all tankers to be double-hulled as of 2015. Economic and personal impact In 1991, following the collapse of the local marine population (particularly clams, herring, and seals), the Chugach Alaska Corporation, an Alaska Native Corporation, filed for Chapter 11 bankruptcy protection. It has since recovered. According to several studies funded by the state of Alaska, the spill had both short-term and long-term economic effects. These included the loss of recreational sport fisheries, reduced tourism, and an estimate of what economists call "existence value", which is the value to the public of a pristine Prince William Sound. The economy of the city of Cordova, Alaska was adversely affected after the spill damaged stocks of salmon and herring in the area. The village of Chenega was transformed into an emergency base and media outlet. The local villagers had to cope with a tripling of their population from 80 to 250. When asked how they felt about the situation, a village councilor noted that they were too shocked and busy to be depressed; others emphasized the human costs of leaving children unattended while their parents worked to clean up.
Many Native Americans were worried that too much time was spent on the fishery and not enough on the land that supports subsistence hunting. In 2010, a CNN report alleged that many oil spill cleanup workers involved in the Exxon Valdez response had subsequently become sick. Anchorage lawyer Dennis Mestas found that this was true for 6,722 of the 11,000 worker files he was able to inspect. Access to the records was controlled by Exxon. Exxon responded in a statement to CNN: Reactions In 1992, Exxon released a video titled Scientists and the Alaska Oil Spill for distribution to schools. Critics said the video misrepresented the clean-up process. In December 1994, the Unabomber assassinated Burson-Marsteller executive Thomas Mosser, accusing him of having "helped Exxon clean up its public image after the Exxon Valdez incident". In popular culture Several weeks after the spill, Saturday Night Live aired a pointed sketch featuring Kevin Nealon, Phil Hartman, and Victoria Jackson as cleanup workers struggling to scrub the oil off of animals and rocks on a beach in Prince William Sound. A two-part story arc in the DC Comics title Green Arrow was inspired by the event. In the 1995 film Waterworld, Exxon Valdez is the flagship of the movie's villain, "The Deacon," the leader of a band of scavenging raiders. Inside the ship is a portrait of their patron saint, Joseph Hazelwood. In the second Forrest Gump novel, Gump and Co. by Winston Groom, Gump commandeers Exxon Valdez and accidentally crashes it. Composer Jonathan Larson wrote a song called "Iron Mike" about the oil spill. The song is written in the style of a sea shanty. It was first professionally recorded by George Salazar for the album The Jonathan Larson Project. The 1992 made-for-television film Dead Ahead: The Exxon Valdez Disaster, produced by HBO, dramatized the oil spill disaster. On September 25, 1998, the fifth episode of the fourth season of Pinky and the Brain, "The Pinky and the Brain Reunion Special," showed a brief snippet of Brain and Pinky boarding the Exxon Valdez, thus insinuating they were the cause of the grounding. In season 2, episode 8, of Breaking Bad, entitled "Better Call Saul", Walter White tells Jesse Pinkman that Brandon "Badger" Mayhew is going to spill like the Exxon Valdez. See also List of oil spills; Deepwater Horizon oil spill; Ixtoc I oil spill; Dead Ahead: The Exxon Valdez Disaster (1992 HBO movie); Martin County coal slurry spill; Kingston Fossil Plant coal fly ash slurry spill
gave Félix d'Herelle a place to work on bacteriophages. His best known works that have been translated into English are Cooking in Ten Minutes and Cooking with Pomiane. His writing was remarkable in its time for its directness (he frequently uses a strange second-person voice, telling you—the reader—what you are seeing and smelling as you follow a recipe) and for his general disdain for upper-class elaborate French cuisine. He travelled widely, and quite a few of his recipes are from abroad. His recipes often take pains to demystify cooking by explaining the chemical processes at work. Books La Cuisine en dix minutes, ou l'Adaptation au rythme moderne (1930), also translated as Cooking in Ten Minutes: The Adaptation to the Rhythm of Our Time; Cooking with Pomiane; Vingt Plats Qui Donnent Goutte (1935 edition)
first Act of Uniformity of 1549. The Book of Common Prayer of 1549, intended as a compromise, was attacked by traditionalists for dispensing with many cherished rituals of the liturgy, such as the elevation of the bread and wine, while some reformers complained about the retention of too many "popish" elements, including vestiges of sacrificial rites at communion. Many senior Catholic clerics, including Bishops Stephen Gardiner of Winchester and Edmund Bonner of London, also opposed the prayer book. Both were imprisoned in the Tower and, along with others, deprived of their sees. In 1549, over 5,500 people lost their lives in the Prayer Book Rebellion in Devon and Cornwall. Reformed doctrines were made official, such as justification by faith alone and communion for laity as well as clergy in both kinds, of bread and wine. The Ordinal of 1550 replaced the divine ordination of priests with a government-run appointment system, authorising ministers to preach the gospel and administer the sacraments rather than, as before, "to offer sacrifice and celebrate mass both for the living and the dead". After 1551, the Reformation advanced further, with the approval and encouragement of Edward, who began to exert more personal influence in his role as Supreme Head of the church. The new changes were also a response to criticism from such reformers as John Hooper, Bishop of Gloucester, and the Scot John Knox, who was employed as a minister in Newcastle upon Tyne under the Duke of Northumberland and whose preaching at court prompted the king to oppose kneeling at communion. Cranmer was also influenced by the views of the continental reformer Martin Bucer, who died in England in 1551; by Peter Martyr, who was teaching at Oxford; and by other foreign theologians. The progress of the Reformation was further speeded by the consecration of more reformers as bishops. In the winter of 1551–52, Cranmer rewrote the Book of Common Prayer in less ambiguous reformist terms, revised canon law and prepared a doctrinal statement, the Forty-two Articles, to clarify the practice of the reformed religion, particularly in the divisive matter of the communion service. Cranmer's formulation of the reformed religion, finally divesting the communion service of any notion of the real presence of God in the bread and the wine, effectively abolished the mass. According to Elton, the publication of Cranmer's revised prayer book in 1552, supported by a second Act of Uniformity, "marked the arrival of the English Church at Protestantism". The prayer book of 1552 remains the foundation of the Church of England's services. However, Cranmer was unable to implement all these reforms once it became clear in spring 1553 that King Edward, upon whom the whole Reformation in England depended, was dying. Succession crisis Devise for the succession In February 1553, Edward VI became ill, and by June, after several improvements and relapses, he was in a hopeless condition. The king's death and the succession of his Catholic half-sister Mary would jeopardise the English Reformation, and Edward's council and officers had many reasons to fear it. Edward himself opposed Mary's succession, not only on religious grounds but also on those of legitimacy and male inheritance, which also applied to Elizabeth. He composed a draft document, headed "My devise for the succession", in which he undertook to change the succession, most probably inspired by his father Henry VIII's precedent. 
He passed over the claims of his half-sisters and, at last, settled the Crown on his first cousin once removed, the 16-year-old Lady Jane Grey, who on 25 May 1553 had married Lord Guilford Dudley, a younger son of the Duke of Northumberland. In his document Edward provided, in case of "lack of issue of my body", for the succession of male heirs only – those of Lady Jane Grey's mother, Frances Grey, Duchess of Suffolk; of Jane herself; or of her sisters Katherine, Lady Herbert, and Lady Mary. As his death approached and possibly persuaded by Northumberland, he altered the wording so that Jane and her sisters themselves should be able to succeed. Yet Edward conceded their right only as an exception to male rule, demanded by reality, an example not to be followed if Jane and her sisters had only daughters. In the final document both Mary and Elizabeth were excluded because of bastardy; since both had been declared bastards under Henry VIII and never made legitimate again, this reason could be advanced for both sisters. The provisions to alter the succession directly contravened Henry VIII's Third Succession Act of 1543 and have been described as bizarre and illogical. In early June, Edward personally supervised the drafting of a clean version of his devise by lawyers, to which he lent his signature "in six several places." Then, on 15 June he summoned high-ranking judges to his sickbed, commanding them on their allegiance "with sharp words and angry countenance" to prepare his devise as letters patent and announced that he would have these passed in Parliament. His next measure was to have leading councillors and lawyers sign a bond in his presence, in which they agreed faithfully to perform Edward's will after his death. A few months later, Chief Justice Edward Montagu recalled that when he and his colleagues had raised legal objections to the devise, Northumberland had threatened them "trembling for anger, and ... further said that he would fight in his shirt with any man in that quarrel". Montagu also overheard a group of lords standing behind him conclude "if they refused to do that, they were traitors". At last, on 21 June, the devise was signed by over a hundred notables, including councillors, peers, archbishops, bishops and sheriffs; many of them later claimed that they had been bullied into doing so by Northumberland, although in the words of Edward's biographer Jennifer Loach, "few of them gave any clear indication of reluctance at the time". It was now common knowledge that Edward was dying, and foreign diplomats suspected that some scheme to debar Mary was under way. France found the prospect of the emperor's cousin on the English throne disagreeable and engaged in secret talks with Northumberland, indicating support. The diplomats were certain that the overwhelming majority of the English people backed Mary, but nevertheless believed that Queen Jane would be successfully established. For centuries, the attempt to alter the succession was mostly seen as a one-man plot by the Duke of Northumberland. Since the 1970s, however, many historians have attributed the inception of the "devise" and the insistence on its implementation to the king's initiative. Diarmaid MacCulloch has made out Edward's "teenage dreams of founding an evangelical realm of Christ", while David Starkey has stated that "Edward had a couple of co-operators, but the driving will was his".
Among other members of the Privy Chamber, Northumberland's intimate Sir John Gates has been suspected of suggesting that Edward change his devise so that Lady Jane Grey herself—not just any sons of hers—could inherit the Crown. Whatever the degree of his contribution, Edward was convinced that his word was law and fully endorsed disinheriting his half-sisters: "barring Mary from the succession was a cause in which the young King believed." Illness and death Edward became ill during January 1553 with a fever and cough that gradually worsened. The imperial ambassador, Jean Scheyfve, reported that "he suffers a good deal when the fever is upon him, especially from a difficulty in drawing his breath, which is due to the compression of the organs on the right side". Edward felt well enough in early April to take the air in the park at Westminster and to move to Greenwich, but by the end of the month he had weakened again. By 7 May he was "much amended", and the royal doctors had no doubt of his recovery. A few days later the king was watching the ships on the Thames, sitting at his window. However, he relapsed, and on 11 June Scheyfve, who had an informant in the king's household, reported that "the matter he ejects from his mouth is sometimes coloured a greenish yellow and black, sometimes pink, like the colour of blood". Now his doctors believed he was suffering from "a suppurating tumour" of the lung and admitted that Edward's life was beyond recovery. Soon, his legs became so swollen that he had to lie on his back, and he lost the strength to resist the disease. To his tutor John Cheke he whispered, "I am glad to die". Edward made his final appearance in public on 1 July, when he showed himself at his window in Greenwich Palace, horrifying those who saw him by his "thin and wasted" condition. During the next two days, large crowds arrived hoping to see the king again, but on 3 July, they were told that the weather was too chilly for him to appear. Edward died at the age of 15 at Greenwich Palace at 8 pm on 6 July 1553. According to John Foxe's legendary account of his death, his last words were: "I am faint; Lord have mercy upon me, and take my spirit". Edward was buried in the Henry VII Lady Chapel at Westminster Abbey on 8 August 1553, with reformed rites performed by Thomas Cranmer. The procession was led by "a grett company of chylderyn in ther surples" and watched by Londoners "wepyng and lamenting"; the funeral chariot, draped in cloth of gold, was topped by an effigy of Edward, with crown, sceptre, and garter. Edward's burial place was unmarked until as late as 1966, when an inscribed stone was laid in the chapel floor by Christ's Hospital school to commemorate their founder. The inscription reads as follows: "In Memory Of King Edward VI Buried In This Chapel This Stone Was Placed Here By Christ's Hospital In Thanksgiving For Their Founder 7 October 1966". The cause of Edward VI's death is not certain. As with many royal deaths in the 16th century, rumours of poisoning abounded, but no evidence has been found to support these. The Duke of Northumberland, whose unpopularity was underlined by the events that followed Edward's death, was widely believed to have ordered the imagined poisoning. Another theory held that Edward had been poisoned by Catholics seeking to bring Mary to the throne. The surgeon who opened Edward's chest after his death found that "the disease whereof his majesty died was the disease of the lungs".
The Venetian ambassador reported that Edward had died of consumption—in other words, tuberculosis—a diagnosis accepted by many historians. Skidmore believes that Edward contracted tuberculosis after a bout of measles and smallpox in 1552 that suppressed his natural immunity to the disease. Loach suggests instead that his symptoms were typical of acute bronchopneumonia, leading to a "suppurating pulmonary infection" or lung abscess, septicaemia and kidney failure. Lady Jane and Queen Mary Lady Mary was last seen by Edward in February, and was kept informed about the state of her half-brother's health by Northumberland and through her contacts with the imperial ambassadors. Aware of Edward's imminent death, she left Hunsdon House, near London, and sped to her estates around Kenninghall in Norfolk, where she could count on the support of her tenants. Northumberland sent ships to the Norfolk coast to prevent her escape or the arrival of reinforcements from the continent. He delayed the announcement of the king's death while he gathered his forces, and Jane Grey was taken to the Tower on 10 July. On the same day, she was proclaimed queen in the streets of London, to murmurings of discontent. The Privy Council received a message from Mary asserting her "right and title" to the throne and commanding that the council proclaim her queen, as she had already proclaimed herself. The council replied that Jane was queen by Edward's authority and that Mary, by contrast, was illegitimate and supported only by "a few lewd, base people". Northumberland soon realised that he had miscalculated drastically, not least in failing to secure Mary's person before Edward's death. Although many of those who rallied to Mary were Catholics hoping to establish that religion and hoping for the defeat of Protestantism, her supporters also included many for whom her lawful claim to the throne overrode religious considerations. Northumberland was obliged to relinquish control of a nervous council in London and launch an unplanned pursuit of Mary into East Anglia, from where news was arriving of her growing support, which included a number of nobles and gentlemen and "innumerable companies of the common people". On 14 July Northumberland marched out of London with three thousand men, reaching Cambridge the next day; meanwhile, Mary rallied her forces at Framlingham Castle in Suffolk, gathering an army of nearly twenty thousand by 19 July. It now

assumption of monarchical power over the council. He then found himself abruptly dismissed from the chancellorship on charges of selling off some of his offices to delegates. Thomas Seymour Somerset faced less manageable opposition from his younger brother Thomas, who has been described as a "worm in the bud". As King Edward's uncle, Thomas Seymour demanded the governorship of the king's person and a greater share of power. Somerset tried to buy his brother off with a barony, an appointment to the Lord Admiralship, and a seat on the Privy Council—but Thomas was bent on scheming for power. He began smuggling pocket money to King Edward, telling him that Somerset held the purse strings too tight, making him a "beggarly king". He also urged the king to throw off the Protector within two years and "bear rule as other kings do"; but Edward, schooled to defer to the council, failed to co-operate.
In the spring of 1547, using Edward's support to circumvent Somerset's opposition, Thomas Seymour secretly married Henry VIII's widow Catherine Parr, whose Protestant household included the 11-year-old Lady Jane Grey and the 13-year-old Lady Elizabeth. In summer 1548, a pregnant Catherine Parr discovered Thomas Seymour embracing Lady Elizabeth. As a result, Elizabeth was removed from Parr's household and transferred to Sir Anthony Denny's. That September, Parr died shortly after childbirth, and Seymour promptly resumed his attentions to Elizabeth by letter, planning to marry her. Elizabeth was receptive, but, like Edward, unready to agree to anything unless permitted by the council. In January 1549, the council had Thomas Seymour arrested on various charges, including embezzlement at the Bristol mint. King Edward, whom Seymour was accused of planning to marry to Lady Jane Grey, himself testified about the pocket money. Lack of clear evidence for treason ruled out a trial, so Seymour was condemned instead by an act of attainder and beheaded on 20 March 1549. War Somerset's only undoubted skill was as a soldier, which he had proven on expeditions to Scotland and in the defence of Boulogne-sur-Mer in 1546. From the first, his main interest as Protector was the war against Scotland. After a crushing victory at the Battle of Pinkie in September 1547, he set up a network of garrisons in Scotland, stretching as far north as Dundee. His initial successes, however, were followed by a loss of direction, as his aim of uniting the realms through conquest became increasingly unrealistic. The Scots allied with France, who sent reinforcements for the defence of Edinburgh in 1548. The Queen of Scots was moved to France, where she was betrothed to the Dauphin. The cost of maintaining the Protector's massive armies and his permanent garrisons in Scotland also placed an unsustainable burden on the royal finances. A French attack on Boulogne in August 1549 at last forced Somerset to begin a withdrawal from Scotland. Rebellion During 1548, England was subject to social unrest. After April 1549, a series of armed revolts broke out, fuelled by various religious and agrarian grievances. The two most serious rebellions, which required major military intervention to put down, were in Devon and Cornwall and in Norfolk. The first, sometimes called the Prayer Book Rebellion, arose from the imposition of Protestantism, and the second, led by a tradesman called Robert Kett, mainly from the encroachment of landlords on common grazing ground. A complex aspect of the social unrest was that the protesters believed they were acting legitimately against enclosing landlords with the Protector's support, convinced that the landlords were the lawbreakers. The same justification for outbreaks of unrest was voiced throughout the country, not only in Norfolk and the west. The origin of the popular view of Somerset as sympathetic to the rebel cause lies partly in his series of sometimes liberal, often contradictory, proclamations, and partly in the uncoordinated activities of the commissions he sent out in 1548 and 1549 to investigate grievances about loss of tillage, encroachment of large sheep flocks on common land, and similar issues. Somerset's commissions were led by an evangelical MP called John Hales, whose socially liberal rhetoric linked the issue of enclosure with Reformation theology and the notion of a godly commonwealth. 
Local groups often assumed that the findings of these commissions entitled them to act against offending landlords themselves. King Edward wrote in his Chronicle that the 1549 risings began "because certain commissions were sent down to pluck down enclosures". Whatever the popular view of Somerset, the disastrous events of 1549 were taken as evidence of a colossal failure of government, and the council laid the responsibility at the Protector's door. In July 1549, Paget wrote to Somerset: "Every man of the council have misliked your proceedings ... would to God, that, at the first stir you had followed the matter hotly, and caused justice to be ministered in solemn fashion to the terror of others ...". Fall of Somerset The sequence of events that led to Somerset's removal from power has often been called a coup d'état. By 1 October 1549, Somerset had been alerted that his rule faced a serious threat. He issued a proclamation calling for assistance, took possession of the king's person, and withdrew for safety to the fortified Windsor Castle, where Edward wrote, "Me thinks I am in prison". Meanwhile, a united council published details of Somerset's government mismanagement. They made clear that the Protector's power came from them, not from Henry VIII's will. On 11 October, the council had Somerset arrested and brought the king to Richmond Palace. Edward summarised the charges against Somerset in his Chronicle: "ambition, vainglory, entering into rash wars in mine youth, negligent looking on Newhaven, enriching himself of my treasure, following his own opinion, and doing all by his own authority, etc." In February 1550, John Dudley, Earl of Warwick, emerged as the leader of the council and, in effect, as Somerset's successor. Although Somerset was released from the Tower and restored to the council, he was executed for felony in January 1552 after scheming to overthrow Dudley's regime. Edward noted his uncle's death in his Chronicle: "the duke of Somerset had his head cut off upon Tower Hill between eight and nine o'clock in the morning". Historians contrast the efficiency of Somerset's takeover of power, in which they detect the organising skills of allies such as Paget, the "master of practices", with the subsequent ineptitude of his rule. By autumn 1549, his costly wars had lost momentum, the crown faced financial ruin, and riots and rebellions had broken out around the country. Until recent decades, Somerset's reputation with historians was high, in view of his many proclamations that appeared to back the common people against a rapacious landowning class. More recently, however, he has often been portrayed as an arrogant and aloof ruler, lacking in political and administrative skills. Northumberland's leadership In contrast, Somerset's successor the Earl of Warwick, made Duke of Northumberland in 1551, was once regarded by historians merely as a grasping schemer who cynically elevated and enriched himself at the expense of the crown. Since the 1970s, the administrative and economic achievements of his regime have been recognised, and he has been credited with restoring the authority of the royal council and returning the government to an even keel after the disasters of Somerset's protectorate. 
The Earl of Warwick's rival for leadership of the new regime was Thomas Wriothesley, 1st Earl of Southampton, whose conservative supporters had allied with Warwick's followers to create a unanimous council which they and observers, such as the Holy Roman Emperor Charles V's ambassador, expected to reverse Somerset's policy of religious reform. Warwick, on the other hand, pinned his hopes on the king's strong Protestantism and, claiming that Edward was old enough to rule in person, moved himself and his people closer to the king, taking control of the Privy Chamber. Paget, accepting a barony, joined Warwick when he realised that a conservative policy would not bring the emperor onto the English side over Boulogne. Southampton prepared a case for executing Somerset, aiming to discredit Warwick through Somerset's statements that he had done all with Warwick's co-operation. As a counter-move, Warwick convinced Parliament to free Somerset, which it did on 14 January 1550. Warwick then had Southampton and his followers purged from the council after winning the support of council members in return for titles, and was made Lord President of the Council and great master of the king's household. Although not called a Protector, he was now clearly the head of the government. As Edward was growing up, he was able to understand more and more government business. However, his actual involvement in decisions has long been a matter of debate, and during the 20th century, historians have presented the whole gamut of possibilities, "balanc[ing] an articulate puppet against a mature, precocious, and essentially adult king", in the words of Stephen Alford. A special "Counsel for the Estate" was created when Edward was fourteen. He chose the members himself. In the weekly meetings with this council, Edward was "to hear the debating of things of most importance". A major point of contact with the king was the Privy Chamber, and there Edward worked closely with William Cecil and William Petre, the principal secretaries. The king's greatest influence was in matters of religion, where the council followed the strongly Protestant policy that Edward favoured. The Duke of Northumberland's mode of operation was very different from Somerset's. Careful to make sure he always commanded a majority of councillors, he encouraged a working council and used it to legitimise his authority. Lacking Somerset's blood-relationship with the king, he added members to the council from his own faction in order to control it. He also added members of his family to the royal household. He saw that to achieve personal dominance, he needed total procedural control of the council. In the words of historian John Guy, "Like Somerset, he became quasi-king; the difference was that he managed the bureaucracy on the pretence that Edward had assumed full sovereignty, whereas Somerset had asserted the right to near-sovereignty as Protector". Warwick's war policies were more pragmatic than Somerset's, and they have earned him criticism for weakness. In 1550, he signed a peace treaty with France that agreed to withdrawal from Boulogne and recalled all English garrisons from Scotland. In 1551, Edward was betrothed to Elisabeth of Valois, King Henry II's daughter, and was made a Knight of Saint Michael. Warwick realised that England could no longer support the cost of wars. At home, he took measures to police local unrest. 
To forestall future rebellions, he kept permanent representatives of the crown in the localities, including lords lieutenant, who commanded military forces and reported back to central government. Working with William Paulet and Walter Mildmay, Warwick tackled the disastrous state of the kingdom's finances. However, his regime first succumbed to the temptations of a quick profit by further debasing the coinage. The economic disaster that resulted caused Warwick to hand the initiative to the expert Thomas Gresham. By 1552, confidence in the coinage was restored, prices fell and trade at last improved. Though a full economic recovery was not achieved until Elizabeth's reign, its origins lay in the Duke of Northumberland's policies. The regime also cracked down on widespread embezzlement of government finances, and carried out a thorough review of revenue collection practices, which has been called "one of the more remarkable achievements of Tudor administration". Reformation In the matter of religion, the regime of Northumberland followed the same policy as that of Somerset, supporting an increasingly vigorous programme of reform. Although Edward VI's practical influence on government was limited, his intense Protestantism made a reforming administration obligatory; his succession was managed by the reforming faction, who continued in power throughout his reign. The man Edward trusted most, Thomas Cranmer, Archbishop of Canterbury, introduced a series of religious reforms that revolutionised the English church from one that—while rejecting papal supremacy—remained essentially Catholic to one that was institutionally Protestant. The confiscation of church property that had begun under Henry VIII resumed under Edward—notably with the dissolution of the chantries—to the great monetary advantage of the crown and the new owners of the seized property. Church reform was therefore as much a political as a religious policy under Edward VI. By the end of his reign, the church had been financially ruined, with much of the property of the bishops transferred into lay hands. The religious convictions of both Somerset and Northumberland have proved elusive for historians, who are divided on the sincerity of their Protestantism. There is less doubt, however, about the religious fervour of King Edward, who was said to have read twelve chapters of scripture daily and enjoyed sermons, and was commemorated by John Foxe as a "godly imp". Edward was depicted during his life and afterwards as a new Josiah, the biblical king who destroyed the idols of Baal. He could be priggish in his anti-Catholicism and once asked Catherine Parr to persuade Lady Mary "to attend no longer to foreign dances and merriments which do not become a most Christian princess". Edward's biographer Jennifer Loach cautions, however, against accepting too readily the pious image of Edward handed down by the reformers, as in John Foxe's influential Acts and Monuments, where a woodcut depicts the young king listening to a sermon by Hugh Latimer. In the early part of his life, Edward conformed to the prevailing Catholic practices, including attendance at mass, but he became convinced, under the influence of Cranmer and the reformers among his tutors and courtiers, that "true" religion should be imposed in England. 
The English Reformation advanced under pressure from two directions: from the traditionalists on the one hand and the zealots on the other, who led incidents of iconoclasm (image-smashing) and complained that reform did not go far enough. Cranmer set himself the task of writing a uniform liturgy in English, detailing all weekly and daily services and religious festivals, to be made compulsory in the first Act of Uniformity of 1549. The Book of Common Prayer of 1549, intended as a compromise, was attacked by traditionalists for dispensing with many cherished rituals of the liturgy, such as the elevation of the bread and wine, while some reformers complained about the retention of too many "popish" elements, including vestiges of sacrificial rites at communion. Many senior Catholic clerics, including Bishops Stephen Gardiner of Winchester and Edmund Bonner of London, also opposed the prayer book. Both were imprisoned in the Tower and, along with others, deprived of their sees. In 1549, over 5,500 people lost their lives in the Prayer Book Rebellion in Devon and Cornwall. Reformed doctrines were made official, such as justification by faith alone and communion for laity as well as clergy in both kinds, of bread and wine. The Ordinal of 1550 replaced the divine ordination of priests with a government-run appointment system, authorising ministers to preach the gospel and administer the sacraments rather than, as before, "to offer sacrifice and celebrate mass both for the living and the dead". After 1551, the Reformation advanced further, with the approval and encouragement of Edward, who began to exert more personal influence in his role as Supreme Head of the church. The new changes were also a response to criticism from such reformers as John Hooper, Bishop of Gloucester, and the Scot John Knox, who was employed as a minister in Newcastle upon Tyne under the Duke of Northumberland and whose preaching at court prompted the king to oppose kneeling at communion. Cranmer was also influenced by the views of the continental reformer Martin Bucer, who died in England in 1551; by Peter Martyr, who was teaching at Oxford; and by other foreign theologians. The progress of the Reformation was further speeded by the consecration of more reformers as bishops. In the winter of 1551–52, Cranmer rewrote the Book of Common Prayer in less ambiguous reformist terms, revised canon law and prepared a doctrinal statement, the Forty-two Articles, to clarify the practice of the reformed religion, particularly in the divisive matter of the communion service. Cranmer's formulation of the reformed religion, finally divesting the communion service of any notion of the real presence of God in the bread and the wine, effectively abolished the mass. According to Elton, the publication of Cranmer's revised prayer book in 1552, supported by a second Act of Uniformity, "marked the arrival of the English Church at Protestantism". The prayer book of 1552 remains the foundation of the Church of England's services. However, Cranmer was unable to implement all these reforms once it became clear in spring 1553 that King Edward, upon whom the whole Reformation in England depended, was dying. Succession crisis Devise for the succession In February 1553, Edward VI became ill, and by June, after several improvements and relapses, he was in a hopeless condition. The king's death and the succession of his Catholic half-sister Mary would jeopardise the English Reformation, and Edward's council and officers had many reasons to fear it. 
Edward himself opposed Mary's succession, not only on religious grounds but also on those of legitimacy and male inheritance, which also applied to Elizabeth. He composed a draft document, headed "My devise for the succession", in which he undertook to change the succession, most probably inspired by his father Henry VIII's precedent. He passed over the claims of his half-sisters and, at last, settled the Crown on his first cousin once removed, the 16-year-old Lady Jane Grey, who on 25 May 1553 had married Lord Guilford Dudley, a younger son of the Duke of Northumberland. In his document Edward provided, in case of "lack of issue of my body", for the succession of male heirs only – those of Lady Jane Grey's mother, Frances Grey, Duchess of Suffolk; of Jane herself; or of her sisters Katherine, Lady Herbert, and Lady Mary. As his death approached and possibly persuaded by Northumberland, he altered the wording so that Jane and her sisters themselves should be able to succeed. Yet Edward conceded their right only as an exception to male rule, demanded by reality, an example not to be followed if Jane and her sisters had only daughters. In the final document both Mary and Elizabeth were excluded because of bastardy; since both had been declared bastards under Henry VIII and never made legitimate again, this reason could be advanced for both sisters. The provisions to alter the succession directly contravened Henry VIII's Third Succession Act of 1543 and have been described as bizarre and illogical. In early June, Edward personally supervised the drafting of a clean version of his devise by lawyers, to which he lent his signature "in six several places." Then, on 15 June he summoned high-ranking judges to his sickbed, commanding them on their allegiance "with sharp words and angry countenance" to prepare his devise as letters patent and announced that he would have these passed in Parliament. His next measure was to have leading councillors and lawyers sign a bond in his presence, in which they agreed faithfully to perform Edward's will after his death. A few months later, Chief Justice Edward Montagu recalled that when he and his colleagues had raised legal objections to the devise, Northumberland had threatened them "trembling for anger, and ... further said that he would fight in his shirt with any man in that quarrel". Montagu also overheard a group of lords standing behind him conclude "if they refused to do that, they were traitors". At last, on 21 June, the devise was signed by over a hundred notables, including councillors, peers, archbishops, bishops and sheriffs; many of them later claimed that they had been bullied into doing so by Northumberland, although in the words of Edward's biographer Jennifer Loach, "few of them gave any clear indication of reluctance at the time". It was now common knowledge that Edward was dying, and foreign diplomats suspected that some scheme to debar Mary was under way. France found the prospect
an extension to the original EDSAC hardware. A magnetic-tape drive was added in 1952 but never worked sufficiently well to be of real use. Until 1952, the available main memory (instructions and data) was only 512 18-bit words, and there was no backing store. The delay lines (or "tanks") were arranged in two batteries providing 512 words each. The second battery came into operation in 1952. The full 1024-word delay-line store was not available until 1955 or early 1956, limiting programs to about 800 words until then. John Lindley (diploma student 1958–1959) mentioned "the incredible difficulty we had ever to produce a single correct piece of paper tape with the crude and unreliable home-made punching, printing and verifying gear available in the late 50s". Memory and instructions The EDSAC's main memory consisted of 1024 locations, though only 512 locations were initially installed. Each contained 18 bits, but the topmost bit was always unavailable due to timing problems, so only 17 bits were used. An instruction consisted of a 5-bit instruction code, 1 spare bit, a 10-bit operand (usually a memory address), and 1 length bit to control whether the instruction used a 17-bit or a 35-bit operand (two consecutive words, little-endian). All instruction codes were by design represented by one mnemonic letter, so that the Add instruction, for example, used the EDSAC character code for the letter A. Internally, the EDSAC used two's complement binary numbers. Numbers were either 17 bits (one word) or 35 bits (two words) long. Unusually, the multiplier was designed to treat numbers as fixed-point fractions in the range −1 ≤ x < 1, i.e. the binary point was immediately to the right of the sign. The accumulator could hold 71 bits, including the sign, allowing two long (35-bit) numbers to be multiplied without losing any precision. The instructions available were:
Add
Subtract
Multiply-and-add
AND-and-add (called "Collate")
Shift left
Arithmetic shift right
Load multiplier register
Store (and optionally clear) accumulator
Conditional goto
Read input tape
Print character
Round accumulator
No-op
Stop
There was no division instruction (but various division subroutines were supplied) and no way to directly load a number into the accumulator (a "sTore and zero accumulator" instruction followed by an "Add" instruction was necessary for this). There was no unconditional jump instruction, nor was there a procedure call instruction – it had not yet been invented. Maurice Wilkes discussed relative addressing modes for the EDSAC in a paper published in 1953. He made these proposals to facilitate the use of subroutines. System software The initial orders were hard-wired on a set of uniselector switches and loaded into the low words of memory at startup. By May 1949, the initial orders provided a primitive relocating assembler taking advantage of the mnemonic design described above, all in 31 words. This was the world's first assembler, and arguably the start of the global software industry. There is a simulation of EDSAC available, along with a full description of the initial orders and first programs. The first calculation done by EDSAC was a square-number program run on 6 May 1949. The program was written by Beatrice Worsley, who had come from Canada to study the machine. The machine was used by other members of the University to solve real problems, and many early techniques were developed that are now included in operating systems. Users prepared their programs by punching them (in assembler) onto a paper tape.
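To make the order layout above concrete, the following is a minimal sketch in Python of packing and unpacking a 17-bit EDSAC-style order word. The opcode bit patterns below are invented for the example (the real machine derived each 5-bit code from the teleprinter code of the mnemonic letter); only the field widths follow the description above, and the spare bit is left at zero.

```python
# A minimal sketch of the 17-bit order layout described above:
# [5-bit operation][1 spare bit][10-bit address][1 length bit].

OPCODES = {"A": 0b11100, "S": 0b01100, "T": 0b00101}  # hypothetical bit patterns

def encode_order(op: str, address: int, long_operand: bool) -> int:
    """Pack an order into 17 bits: opcode in bits 16-12, address in bits 10-1."""
    assert 0 <= address < 1024, "addresses were 10 bits (0-1023)"
    return (OPCODES[op] << 12) | (address << 1) | int(long_operand)

def decode_order(word: int) -> tuple[int, int, bool]:
    """Unpack a 17-bit order word into (opcode, address, long-operand flag)."""
    return (word >> 12) & 0b11111, (word >> 1) & 0b1111111111, bool(word & 1)

# An "Add" order referencing address 100 with a short (17-bit) operand:
word = encode_order("A", 100, long_operand=False)
print(f"{word:017b}", decode_order(word))
```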
They soon became adept at holding the paper tape up to the light and reading back the codes. When a program was ready, it was hung on a length of line strung up near the paper-tape reader. The machine operators, who were present during the day, selected the next tape from the line and loaded it into EDSAC. This is, of course, what is known today as a job queue. If the program printed something, the tape and the printout were returned to the user; otherwise the user was informed at which memory location it had stopped. Debuggers were some time away, but a CRT screen could be set to display the contents of a particular piece of memory. This was used to see whether a number was converging, for example. A loudspeaker was connected to the accumulator's sign bit; experienced users knew the healthy and unhealthy sounds of programs, particularly programs "hung" in a loop. After office hours, certain "authorised users" were allowed to run the machine for themselves; this went on late into the night, usually until a valve blew, according to one such user. This is alluded to by Fred Hoyle in his novel The Black Cloud. Programming technique The early programmers had to make use of techniques frowned upon today—in particular, the use of self-modifying code. As there was no index register until much later, the only way of accessing an array was to alter which memory location a particular instruction was referencing. David Wheeler, who earned the world's first Computer Science PhD working on the project, is credited with inventing the concept of a subroutine. Users wrote programs that called a routine by jumping to the start of the subroutine with the return address (i.e. the location-plus-one of the jump itself) in the accumulator (a Wheeler Jump). By convention the subroutine expected this, and the first thing it did was to modify its concluding jump instruction to that return address. Multiple and nested subroutines could be called. As one account put it, "the code used to represent orders outside the machine differs from that used inside, the differences being dictated by the different requirements of the programmer on the one hand, and of the control circuits of the machine on the other". EDSAC's programmers used special techniques to make best use of the limited available memory. For example, at the point of loading a subroutine from punched tape into memory, it might happen that a particular constant would have to be calculated, a constant that would not subsequently need recalculation. In this situation, the constant would be calculated in an "interlude". The code required to calculate the constant would be supplied along with the full subroutine. After the initial input routine had loaded the calculation-code, it would transfer control to this code. Once the constant had been calculated and written into memory, control would return to the initial input routine, which would continue to write the remainder of the subroutine into memory, but first adjusting its starting point so as to overwrite the code that had calculated the constant. This allowed quite complicated adjustments to be made to a general-purpose subroutine without making its final footprint in memory any larger than if it had been tailored to a specific circumstance. Application software The subroutine concept led to the availability of a substantial subroutine library.
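The Wheeler Jump described above can be illustrated with a toy interpreter. Everything here – the fake memory, the invented opcodes PATCH_RETURN, NEGATE and JUMP, and the three-word subroutine – is a hypothetical sketch of the idea, not EDSAC code: the caller leaves the return address in the accumulator, and the subroutine's first act is to rewrite its own concluding jump.

```python
# A toy interpreter illustrating the Wheeler Jump convention via
# self-modifying code. Opcodes and layout are invented for the sketch.

memory = {
    100: ("PATCH_RETURN", 102),  # plant the return address into the jump at 102
    101: ("NEGATE", None),       # the subroutine's "useful work": negate the acc
    102: ("JUMP", 0),            # placeholder target, overwritten on every call
}

def call_subroutine(entry: int, return_addr: int, argument: int) -> tuple[int, int]:
    """Run the fake machine from `entry`; returns (resume address, accumulator)."""
    pc, acc = entry, return_addr  # Wheeler convention: acc carries the return address
    while True:
        op, operand = memory[pc]
        if op == "PATCH_RETURN":
            memory[operand] = ("JUMP", acc)  # self-modification: fix the exit jump
            acc = argument                   # then load the real working value
            pc += 1
        elif op == "NEGATE":
            acc = -acc
            pc += 1
        elif op == "JUMP":
            return operand, acc              # control goes back to the caller

print(call_subroutine(100, return_addr=50, argument=7))  # (50, -7)
```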
By 1951, 87 subroutines in the following categories were available for general use: floating-point arithmetic; arithmetic operations on complex numbers; checking; division; exponentiation; routines relating to functions; differential equations; special functions; power series; logarithms; miscellaneous; print and layout; quadrature; read (input); nth root; trigonometric functions; counting operations (simulating repeat until loops, while loops and for loops); vectors; and matrices. Applications of EDSAC EDSAC was designed specifically to form part of the Mathematical Laboratory's support service for calculation. The first scientific paper to be published using a computer for calculations was by Ronald Fisher. Wilkes and Wheeler had used EDSAC to solve a differential equation relating to gene frequencies for him. In 1951, Miller and Wheeler used the machine to discover a 79-digit prime – the largest known at the time. The winners of three Nobel Prizes – John Kendrew and Max Perutz (Chemistry, 1962), Andrew Huxley (Medicine, 1963) and Martin Ryle (Physics, 1974) – benefitted from EDSAC's revolutionary computing power. In their prize acceptance speeches, each acknowledged the role that EDSAC had played in their research. In the early 1960s Peter Swinnerton-Dyer used the EDSAC computer to calculate the number of points modulo p (denoted by Np) for a large number of primes p on elliptic curves whose rank was known. Based on these numerical results, Birch and Swinnerton-Dyer conjectured that Np for a curve E with rank r obeys an asymptotic law – the Birch and Swinnerton-Dyer conjecture, considered one of the top unsolved problems in mathematics as of 2016. Games In 1952, Sandy Douglas developed OXO, a version of noughts and crosses (tic-tac-toe) for the EDSAC, with graphical output to a VCR97 6" cathode ray tube. This may well have been the world's first video game. Another video game was created by Stanley Gill and involved a dot (termed a sheep) approaching a line in which one of two gates could be opened. The Stanley Gill game was controlled via the lightbeam of the EDSAC's paper-tape reader. Interrupting it (such as by the player placing their hand in it) would open the upper gate. Leaving the beam unbroken would result in the lower gate opening. Further developments EDSAC's successor, EDSAC 2, was commissioned in 1958. In 1961, an EDSAC 2 version of Autocode, an ALGOL-like high-level programming language for scientists and engineers, was developed by David Hartley. In the mid-1960s, a successor to the EDSAC 2 was planned, but the move was instead made to the Titan, a prototype Atlas 2 developed from the Atlas Computer of the University of Manchester, Ferranti, and Plessey. EDSAC Replica Project On 13 January 2011, the Computer Conservation Society announced that it planned to build a working replica of EDSAC, at the National Museum of Computing (TNMoC) in Bletchley Park, supervised by Andrew Herbert, who studied under Maurice Wilkes. The first parts of the replica were switched on in November 2014. The ongoing project is open to visitors of the museum. In 2016, two original EDSAC operators, Margaret Marrs and Joyce Wheeler, visited the museum to assist the project. As of November 2016, the fully completed and operational replica was expected to be commissioned by the autumn of 2017. However, project delays postponed its opening; several instructions (orders) were working in early 2021, and it was hoped to have the replica fully working later that year.
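Returning to the point-counting work mentioned above under Applications of EDSAC, the sketch below shows, by brute force, what computing Np means for a curve y² = x³ + ax + b over the integers modulo p. The coefficients are arbitrary example values, and the method is far less ingenious than whatever was actually run in the early 1960s.

```python
# Brute-force point counting for y^2 = x^3 + ax + b over the field of p
# elements, plus the point at infinity.

def count_points(a: int, b: int, p: int) -> int:
    """Return Np, the number of points on the curve modulo the prime p."""
    # How many y in 0..p-1 square to each residue:
    square_counts: dict[int, int] = {}
    for y in range(p):
        r = (y * y) % p
        square_counts[r] = square_counts.get(r, 0) + 1
    total = 1  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        total += square_counts.get(rhs, 0)
    return total

for p in (5, 7, 11, 13):
    print(p, count_points(a=-1, b=1, p=p))
```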
(Growler no longer exists, having been given to his granddaughter Minnie Hunt and subsequently destroyed by a neighbour's dog.) His Pooh work is so famous that 300 of his preliminary sketches were exhibited at the Victoria and Albert Museum in 1969, when he was 90 years old. A Shepard painting of Winnie the Pooh, believed to have been painted in the 1930s for a Bristol teashop, is his only known oil painting of the famous teddy bear. It was purchased at an auction for $243,000 in London late in 2000. The painting is displayed in the Pavilion Gallery at Assiniboine Park in Winnipeg, Manitoba, Canada, the city after which Winnie is named. Shepard wrote two autobiographies: Drawn from Memory (1957) and Drawn From Life (1961). In 1972, Shepard gave his personal collection of papers and illustrations to the University of Surrey. These now form the E.H. Shepard Archive. Shepard was made an Officer of the Order of the British Empire in the 1972 Birthday Honours. Personal life Shepard lived at Melina Place in St John's Wood and from 1955 in Lodsworth, West Sussex. He and Florence had two children, Graham (born 1907) and Mary (born 1909), who both became illustrators. Lt. Graham Shepard died when his ship HMS Polyanthus was sunk by German submarine U-952 in September 1943. Mary married E.V. Knox, the editor of Punch, and became known as the illustrator of the Mary Poppins series of children's books. Florence Shepard died in 1927. In November 1943 Shepard married Norah Carroll, a nurse at St Mary's Hospital, Paddington. They remained married until his death on 24 March 1976. In 1966, he called the short film Winnie the Pooh and the Honey Tree a travesty. Works illustrated
1924 – When We Were Very Young
1925 – Playtime and Company; Holly Tree
1926 – Winnie-the-Pooh; Everybody's Pepys
1927 – Jeremy; Little One's Log; Let's Pretend; Now We Are Six; Fun and Fantasy
1928 – The House at Pooh Corner; The Golden Age
1930 – Everybody's Boswell; Dream Days
1931 – The Wind in the Willows; Christmas Poems; Bevis; Mother Goose
1932 – Sycamore Square
1933 – Everybody's Lamb; The Cricket in the Cage
1934 – Victoria Regina
1935 – Perfume from Provence
1936 – The Modern Struwwelpeter
1937 – Golden Sovereign; Cheddar Gorge; As the Bee Sucks; Sunset House: More Perfume from Provence
1939 – The Reluctant Dragon
1941 – Gracious Majesty
1948 – The Golden Age; Dream Days; Bertie's Escapade
1949 – York
1950 – Drover's Tale
1951 – Enter David Garrick
1953 – The Silver Curlew
1954 – The Cuckoo Clock; Susan, Bill and the Wolf-Dog
1955 – The Glass Slipper; Operation Wild Goose; Crystal Mountain; Frogmorton; The Brownies
1955 – Mary in the Country
1956 – The Islanders; The Pancake
1956 – The Secret Garden
1956 – Royal Reflections: Stories for Children
1957 – Drawn from Memory; Briar Rose
1958 – Old Greek Fairy Tales
1959 – Tom Brown's School Days
1960 – Noble Company
1961 – Drawn from Life; Hans Andersen's Fairy Tales
1965 – Ben and Brock
1969 – The Wind in the Willows

and went into action at the Battle of the Somme. By the autumn of 1916, Shepard started working for the Intelligence Department, sketching the combat area within the view of his battery position. On 16 February 1917, he was made an acting captain whilst second-in-command of his battery, and briefly served as an acting major in late April and early May of that year during the Battle of Arras before reverting to acting captain. He was promoted to substantive lieutenant on 1 July 1917. Whilst acting as captain, he was awarded the Military Cross.
Later in 1917, 105th Siege Battery participated in the final stages of the Battle of Passchendaele, where it came under heavy fire and suffered a number of casualties. At the end of the year it was sent to help retrieve a disastrous situation on the Italian Front, travelling by rail via Verona before coming into action on the Montello Hill. Shepard missed the Second Battle of the Piave River in April 1918, being on leave in England (where he was invested with his MC by King George V at Buckingham Palace) and attending a gunnery course. He was back in Italy with his battery for the victory at Vittorio Veneto. After the Armistice of Villa Giusti in November 1918, Shepard was promoted to acting major in command of the battery and given the duty of administering captured enemy guns. Demobilisation began at Christmas 1918, and 105th Siege Battery was disbanded in March 1919. Throughout the war he had been contributing to Punch. He was hired as a regular staff cartoonist in 1921 and became lead cartoonist in 1945. He was removed from this post in 1953 by Punch's new editor, Malcolm Muggeridge. His work was also part of the painting event in the art competition at the 1928 Summer Olympics. Shepard was recommended to A. A. Milne in 1923 by another Punch staffer, E. V. Lucas. Milne initially thought Shepard's style was not what he wanted, but used him to illustrate the book of poems When We Were Very Young. Happy with the results, Milne then insisted Shepard illustrate Winnie-the-Pooh. Realising his illustrator's contribution to the book's success, the writer arranged for Shepard to receive a share of his royalties. Milne also inscribed a copy of Winnie-the-Pooh with the following personal verse:
When I am gone,
Let Shepard decorate my tomb,
And put (if there is room)
Two pictures on the stone:
Piglet from page a hundred and eleven,
And Pooh and Piglet walking (157) ...
And Peter, thinking that they are my own,
Will welcome me to Heaven.
Eventually Shepard came to resent "that silly old bear", as he felt that the Pooh illustrations overshadowed his other work. Shepard modelled Pooh not on the toy owned by Milne's son Christopher Robin but on "Growler", a stuffed bear owned by his own son.
oxidoreductase (subunit M), twitching motility protein PilT, 2,3-dihydroxybenzoate-AMP ligase, ATP/GTP-binding protein, multifunctional fatty acid oxidation complex (subunit alpha), S-formylglutathione hydrolase, aspartate-semialdehyde dehydrogenase, epimerase, membrane protein, formate dehydrogenylase (subunit 7), glutathione S-transferase, major facilitator superfamily transporter, phosphoglucosamine mutase, glycosyl hydrolase 1 family protein, 23S rrna [uracil(1939)-C(5)]-methyltransferase, co-chaperone HscB, N-acetylmuramoyl-L-alanine amidase, sulfate ABC transporter ATP-binding protein CysA, and LPS assembly protein LptD. These CSIs provide a molecular means of distinguishing Enterobacteriaceae from other families within the order Enterobacterales and other bacteria. Genera Validly published genera The following genera have been validly published, thus they have "Standing in Nomenclature". The year the genus was proposed is listed in parentheses after the genus name. Biostraticola (2008) Buttiauxella (1982) Cedecea (1981) Citrobacter (1932) Cronobacter (2008) Enterobacillus (2015) Enterobacter (1960) Escherichia (1919) Franconibacter (2014) Gibbsiella (2011) Izhakiella (2016) Klebsiella (1885) Kluyvera (1981) Kosakonia (2013) Leclercia (1987) Lelliottia (2013) Limnobaculum (2018) Mangrovibacter (2010) Metakosakonia (2017) Phytobacter (2017) Pluralibacter (2013) Pseudescherichia (2017) Pseudocitrobacter (2014) Raoultella (2001) Rosenbergiella (2013) Saccharobacter (1990) Salmonella (1900) Scandinavium (2020) Shigella (1919) Shimwellia (2010) Siccibacter (2014) Trabulsiella (1992) Yokenella (1985) Candidatus genera "Candidatus Annandia" "Candidatus Arocatia" "Candidatus Aschnera" "Candidatus Benitsuchiphilus" "Candidatus Blochmannia" "Candidatus Curculioniphilus" "Candidatus Cuticobacterium" "Candidatus Doolittlea" "Candidatus Gillettellia" "Candidatus Gullanella" "Candidatus Hamiltonella" "Candidatus Hartigia" "Candidatus Hoaglandella" "Candidatus Ischnodemia" "Candidatus Ishikawaella" "Candidatus Kleidoceria" "Candidatus Kotejella" "Candidatus Macropleicola" "Candidatus Mikella" "Candidatus Moranella" "Candidatus Phlomobacter" "Candidatus Profftia" "Candidatus Purcelliella" "Candidatus Regiella" "Candidatus Riesia" "Candidatus Rohrkolberia" "Candidatus Rosenkranzia" "Candidatus Schneideria" "Candidatus Stammera" "Candidatus Stammerula" "Candidatus Tachikawaea" "Candidatus Westeberhardia" Proposed genera The following genera have been effectively, but not validly, published, thus they do not have "Standing in Nomenclature". The year the genus was proposed is listed in parentheses after the genus name. Aquamonas (2009) Atlantibacter (2016) Superficieibacter (2018) Identification To identify different genera of Enterobacteriaceae, a microbiologist may run a series of tests in the lab. These include: Phenol red Tryptone broth Phenylalanine agar for detection of production of deaminase, which converts phenylalanine to phenylpyruvic acid Methyl red or Voges-Proskauer tests depend on the digestion of glucose. The methyl red tests for acid endproducts. The Voges Proskauer tests for the production of acetylmethylcarbinol. Catalase test on nutrient agar tests for the production of enzyme catalase, which splits hydrogen peroxide and releases oxygen gas. Oxidase test on nutrient agar tests for the production of the enzyme oxidase, which reacts with an aromatic amine to produce a purple color. Nutrient gelatin tests to detect activity of the enzyme gelatinase. 
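As an illustration of how a few of the tests listed above can be combined, the sketch below encodes simplified, textbook-typical reaction patterns for the three species most often met in clinical work (noted just below). The patterns are assumptions for the example only; real identification schemes use many more tests and must account for atypical strains.

```python
# Illustrative only: a very rough presumptive call from two of the tests
# described above (phenylalanine agar and indole from tryptone broth).

def presumptive_id(phe_deaminase: bool, indole: bool) -> str:
    if phe_deaminase:
        # Deamination of phenylalanine to phenylpyruvic acid, detected on
        # phenylalanine agar, is characteristic of Proteus.
        return "Proteus mirabilis (presumptive)"
    if indole:
        # Indole production from tryptophan is typical of E. coli.
        return "Escherichia coli (presumptive)"
    # Klebsiella pneumoniae is classically indole- and deaminase-negative.
    return "Klebsiella pneumoniae (presumptive)"

print(presumptive_id(phe_deaminase=False, indole=True))
# -> Escherichia coli (presumptive)
```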
In a clinical setting, three species make up 80 to 95% of all isolates identified: Escherichia coli, Klebsiella pneumoniae, and Proteus mirabilis. However, Proteus mirabilis is now considered part of the Morganellaceae, a sister clade within the Enterobacterales. Antibiotic resistance Several Enterobacteriaceae strains have been isolated which are resistant to antibiotics, including carbapenems, often described as "the last line of antibiotic defense" against resistant organisms. For instance, some Klebsiella pneumoniae strains are carbapenem resistant. Various carbapenemase genes (blaOXA-48, blaKPC, blaNDM-1, blaVIM and blaIMP) have been identified in carbapenem-resistant Enterobacteriaceae, including Escherichia coli and Klebsiella pneumoniae.

of the bacterial cells to their hosts. They are not spore-forming. Metabolism Like other proteobacteria, Enterobacteriaceae stain Gram-negative, and they are facultative anaerobes, fermenting sugars to produce lactic acid and various other end products. Most also reduce nitrate to nitrite, although exceptions exist. Unlike most similar bacteria, Enterobacteriaceae generally lack cytochrome c oxidase, although there are exceptions. Catalase reactions vary among Enterobacteriaceae. Ecology Many members of this family are normal members of the gut microbiota in humans and other animals, while others are found in water or soil, or are parasites on a variety of different animals and plants. Model organisms and medical relevance Escherichia coli is one of the most important model organisms, and its genetics and biochemistry have been closely studied. Some enterobacteria, e.g. Salmonella or Shigella, are important pathogens, for example because they produce endotoxins. Endotoxins reside in the cell wall and are released when the cell dies and the cell wall disintegrates. Some members of the Enterobacteriaceae produce endotoxins that, when released into the bloodstream following cell lysis, cause a systemic inflammatory and vasodilatory response. The most severe form of this is known as endotoxic shock, which can be rapidly fatal. Historical systematics and taxonomy Enterobacteriaceae was originally the sole family under the order 'Enterobacteriales'. The family contained a large array of biochemically distinct species with different ecological niches, which made biochemical descriptions difficult. The original classification of species to this family and order was largely based on 16S rRNA genome sequence analyses, which are known to have low discriminatory power and whose results change depending on the algorithm and organism information used. Despite this, the analyses still exhibited polyphyletic branching, indicating the presence of distinct subgroups within the family. In 2016, the order 'Enterobacteriales' was renamed Enterobacterales and divided into seven new families, including the emended Enterobacteriaceae family. This emendation restricted the family to include only those genera directly related to the type genus, which included most of the enteric species under the order. This classification was proposed based on the construction of several robust phylogenetic trees using conserved genome sequences, 16S rRNA sequences and multilocus sequence analyses.
Molecular markers, specifically conserved signature indels (CSIs), specific to this family were identified as evidence supporting the division independent of phylogenetic trees. In 2017, a subsequent study using comparative phylogenomic analyses identified the presence of six subfamily-level clades within the family Enterobacteriaceae, namely the "Escherichia clade", "Klebsiella clade", "Enterobacter clade", "Kosakonia clade", "Cronobacter clade" and "Cedecea clade", as well as an "Enterobacteriaceae incertae sedis clade" containing species whose taxonomic placement within the family is unclear. However, this division was not officially proposed, as the subfamily rank is generally not used. Molecular signatures Analyses of genome sequences from Enterobacteriaceae species identified 21 conserved signature indels that are uniquely present in this family.
may refer to:
Eccentricity (behavior), odd behavior on the part of a person, as opposed to being "normal"
Mathematics, science and technology
Mathematics
Off-center, in geometry
Eccentricity (graph theory) of a vertex in a graph
Eccentricity (mathematics), a parameter associated with every conic section
Orbital mechanics
Orbital eccentricity, in astrodynamics, a measure of the non-circularity of an orbit
Eccentric anomaly, the angle between the direction of periapsis and the current position of an object on its orbit
Eccentricity vector, in celestial mechanics, a dimensionless vector with direction pointing from apoapsis to periapsis
Eccentric, a type of deferent, a circle or sphere used in obsolete epicyclical systems to carry a planet around the
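For the orbital sense listed above, a short worked example: eccentricity can be computed from the periapsis and apoapsis distances as e = (r_apo − r_peri)/(r_apo + r_peri), and the same parameter classifies the conic section (0 circle, below 1 ellipse, 1 parabola, above 1 hyperbola). The figures for Earth below are rounded, illustrative values.

```python
# Worked example: orbital eccentricity from periapsis/apoapsis distances,
# and the conic-section classification the same parameter gives.

def orbital_eccentricity(r_peri: float, r_apo: float) -> float:
    return (r_apo - r_peri) / (r_apo + r_peri)

def conic_type(e: float) -> str:
    if e == 0:
        return "circle"  # exact comparison is fine for this illustration
    return "ellipse" if e < 1 else ("parabola" if e == 1 else "hyperbola")

e_earth = orbital_eccentricity(r_peri=147.1e6, r_apo=152.1e6)  # km, approximate
print(round(e_earth, 4), conic_type(e_earth))  # ~0.0167, an ellipse
```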
The peak of these incidents occurred in 1980 with new recruit Phil Carman making headlines for head-butting an umpire. The tribunal suspended him for sixteen weeks, and although most people thought this was a fair (or even lenient) sentence, he took his case to the Supreme Court, gathering even more unwanted publicity for the club. Despite this, the club had recruited many talented young players in the late 70s who emerged as club greats. Three of those young players were Simon Madden, Tim Watson and Paul Van Der Haar. Terry Daniher and his brother Neale came via a trade with South Melbourne, and Roger Merrett joined soon afterwards to form the nucleus of what would become the formidable Essendon sides of the 1980s. This raw but talented group of youngsters took Essendon to an elimination final in 1979 under Barry Davis, but the side was again thrashed, this time at the hands of Fitzroy. Davis resigned at the end of the 1980 season after missing out on a finals appearance. One of the few highlights for Essendon supporters during this time was when Graham Moss won the 1976 Brownlow Medal; he was the only Bomber to do so in a 40-year span from 1953 to 1993. Even that was bittersweet, as he quit VFL football to move back to his native Western Australia, where Moss finished out his career as a player and coach at Claremont Football Club. In many ways, Moss' career reflects Essendon's mixed fortunes during the decade. Kevin Sheedy era (1981–2007) Former Richmond player Kevin Sheedy started as head coach in 1981. Essendon reached the Grand Final in 1983, for the first time since 1968. Hawthorn won by a then record 83 points. In 1984, Essendon won the pre-season competition and completed the regular season on top of the ladder. The club played, and beat, Hawthorn in the 1984 VFL Grand Final to win their 13th premiership – their first since 1965. The teams met again in the 1985 Grand Final, which Essendon also won. At the start of 1986, Essendon were considered unbackable for three successive flags, but a succession of injuries to key players Paul Van der Haar (only fifteen games from 1986 to 1988), Tim Watson, Darren Williams, Roger Merrett and Simon Madden led the club to win only eight of its last eighteen games in 1986 and only nine games (plus a draw with Geelong) in 1987. In July 1987, the Bombers suffered a humiliation at the hands of Sydney, who fell two points short of scoring the then highest score in VFL history. In 1988, Essendon rebounded to sixth place with twelve wins, including a 140-point thrashing of Brisbane where they had a record sixteen individual goalkickers. In 1989, they improved further to second on the ladder with only five losses and thrashed Geelong in the Qualifying Final. However, after a fiery encounter with Hawthorn ended in a convincing defeat, the Bombers were no match for Geelong the next week. In 1990, Essendon were pace-setters almost from the start, but the disruption caused by the drawn Qualifying Final between Collingwood and West Coast was a blow from which they never recovered. The Magpies comprehensively thrashed them in both the second semi final and the grand final. Following the 1991 season, Essendon moved its home games from its traditional home ground at Windy Hill to the larger and newly renovated MCG. This move generated large increases in game attendance, membership and revenue for the club. The club's training and administrative base remained at Windy Hill until 2013.
Following the retirements of Tim Watson and Simon Madden in the early 1990s, the team was built on new players such as Gavin Wanganeen, Joe Misiti, Mark Mercuri, Michael Long, Dustin Fletcher (son of Ken) and James Hird, who was taken at No. 79 in the 1990 draft. This side became known as the "Baby Bombers", as the core of the side was made up of young players early in their careers. The team won the 1993 Grand Final against Carlton and, that same year, Gavin Wanganeen won the Brownlow Medal, the first awarded to an Essendon player since 1976. Three years later, James Hird was jointly awarded the medal with Michael Voss of Brisbane. In 2000, the club shifted the majority of its home games to the newly opened Docklands Stadium, signing a 25-year deal to play seven home matches per year at the venue, with the other four remaining at the MCG. The season was one of the most successful by any team in VFL/AFL history: the club opened with 20 consecutive wins before they lost to the Western Bulldogs in round 21. The team went on to win their 16th premiership, defeating , thereby completing the most dominant single season in AFL/VFL history. The defeat to the Bulldogs was the only defeat for Essendon throughout the entire calendar year (Essendon also won the 2000 pre-season competition). Essendon was less successful after 2001. Lucrative contracts to a number of premiership players had caused serious pressure on the club's salary cap, forcing the club to trade several key players. Blake Caracella, Chris Heffernan, Justin Blumfield, Gary Moorcroft and Damien Hardwick had all departed by the end of 2002; in 2004, Mark Mercuri, Sean Wellman and Joe Misiti retired. The club remained competitive; however, they could progress no further than the second week of the finals in each of 2002, 2003 and 2004. Sheedy signed a new three-year contract at the end of 2004. In 2005, Essendon missed the finals for the first time since 1997; and in 2006, the club suffered its worst season under Sheedy, and its worst for more than 70 years, finishing second-last with only three wins (one of which was against defending premiers , in which newly appointed captain Matthew Lloyd kicked eight goals) and one draw from twenty-two games. Lloyd had replaced James Hird as captain at the start of the season, but after Lloyd suffered a season-ending hamstring injury two weeks after his phenomenal performance against Leo Barry, David Hille was appointed captain for the remainder of the season. The club improved its on-field position in 2007, but again missed the finals. On field and relocation to Melbourne Airport (2008–2012) Sheedy's contract was not renewed after 2007, ending his 27-year tenure as Essendon coach. Matthew Knights replaced Sheedy as coach, and coached the club for three seasons, reaching the finals once – an eighth-place finish in 2009 at the expense of reigning premiers . On 29 August 2010, shortly after the end of the 2010 home-and-away season, Knights was dismissed as coach. On 28 September 2010, former captain James Hird was named as Essendon's new coach from 2011 on a four-year deal. Former dual premiership winning coach and Essendon triple-premiership winning player Mark Thompson later joined Hird on the coaching panel. In his first season, Essendon finished eighth. The club started strongly in 2012, sitting fourth with a 10–3 record at the halfway mark of the season; but the club won only one more match for the season, finishing eleventh to miss the finals.
In 2013 the club moved its training and administrative base to the True Value Solar Centre, a new facility in the suburb of Melbourne Airport which it had developed in conjunction with the Australian Paralympic Committee. Essendon holds a 37-year lease at the facility, and maintains a lease at Windy Hill to use the venue for home matches for its reserves team in the Victorian Football League, and for a social club and merchandise store on the site. ASADA/WADA investigation (2013–2016) During 2013, the club was investigated by the AFL and the Australian Sports Anti-Doping Authority (ASADA) over its 2012 player supplements and sports science program, specifically over allegations of illegal use of peptide supplements. An internal review found it to have "established a supplements program that was experimental, inappropriate and inadequately vetted and controlled", and on 27 August 2013, the club was found guilty of bringing the game into disrepute for this reason. Among its penalties, the club was fined A$2 million, stripped of early draft picks in the following two drafts, and forfeited its place in the 2013 finals series (having originally finished seventh on the ladder); Hird was suspended from coaching for twelve months. Several office-bearers also resigned their posts during the controversy, including chairman David Evans and CEO Ian Robson. In the midst of the supplements saga, assistant coach Mark Thompson took over as coach for the 2014 season during Hird's suspension. He led the club back to the finals for a seventh-place finish, but in a tense second elimination final against archrivals North Melbourne the Bombers led by as much as 27 points at half time before a resurgent Kangaroos side came back and won the game by 12 points. After the 2014 season, Mark Thompson left the club to make way for Hird's return to the senior coaching role. In June 2014, thirty-four players were issued show-cause notices alleging the use of the banned peptide Thymosin beta-4 during the program. The players faced the AFL Anti-Doping Tribunal over the 2014/15 offseason, and on 31 March 2015 the tribunal returned a not guilty verdict, determining that it was "not comfortably satisfied" that the players had been administered the peptide. Hird returned as senior coach for the 2015 season, and after a strong start, the club's form severely declined after the announcement that WADA would appeal the decision of the AFL Anti-Doping Tribunal. The effect of the appeal on the team's morale was devastating, and they went on to win only six games for the year. Under extreme pressure, Hird resigned on 18 August 2015 following a disastrous 112-point loss to Adelaide. Former West Coast Eagles premiership coach John Worsfold was appointed as the new senior coach on a three-year contract. On 12 January 2016 the Court of Arbitration for Sport overruled the AFL anti-doping tribunal's decision, deeming that 34 past and present players of the Essendon Football Club took the banned substance Thymosin Beta-4. As a result, all 34 players, 12 of whom were still at the club, were given two-year suspensions. However, all the suspensions were effectively shorter, as the players had already served provisional suspensions during the 2014/15 off-season. Consequently, Essendon contested the 2016 season with twelve of its regular senior players under suspension.
In order for the club to remain competitive, the AFL granted Essendon the ability to upgrade all five of their rookie listed players and to sign an additional ten players to cover the loss of the suspended players for the season. Due to this unprecedented situation, many in the football community predicted the club would go through the 2016 AFL season without a win; however, they were able to win three matches: against , and in rounds 2, 21 and 23 respectively. The absence of its most experienced players also allowed the development of its young players, with Zach Merrett and Orazio Fantasia having breakout years, while Darcy Parish and Anthony McDonald-Tipungwuti impressed in their debut seasons. Merrett acted as captain in the side's round 21 win over the Suns. The club eventually finished on the bottom of the ladder and thus claimed its first wooden spoon since 1933. Post-investigation (2017–present) Essendon made their final financial settlement related to the supplements saga in September 2017, just before finals started. They also improved vastly on their 2016 performance, finishing 7th in the home and away season and becoming the first team since in 2011 to go from wooden spooner to a finals appearance, but they ultimately lost their only final to . The 2017 season was also capped off by the retirements of much-loved club legend and ex-captain Jobe Watson, midfielder Brent Stanton, and ex-Geelong star James Kelly, who later took up a development coach role at the club. Midfielder Heath Hocking, who played 126 games for the club, was delisted. Expectations were high for the 2018 season, with the club having an outstanding offseason. The recruitment of Jake Stringer, Adam Saad and Devon Smith from the Western Bulldogs, Gold Coast Suns and Greater Western Sydney Giants respectively was expected to throw Essendon firmly into premiership contention. After beating the previous year's runner-up (which went on to beat reigning premiers the following round) in round one, Essendon's form slumped severely: the team won only one game in the next seven rounds and lost to the then-winless Carlton in round eight. Senior assistant coach Mark Neeld was sacked by the club the following Monday. The team's form improved sharply after this, with wins against top-eight sides Geelong, GWS, eventual premiers West Coast and Sydney, and ten wins from the last 13 games of the season. However, the revival fell short, with a loss to reigning premiers by eight points in round 22 ending any hopes they had of reaching the finals. The 2018 season was capped off by the club not offering veteran Brendon Goddard a new contract for 2019. Essendon acquired Dylan Shiel from in one of the most high-profile trades of the 2018 AFL Trade Period. The Bombers had inconsistent form throughout the 2019 season but qualified for the finals for the second time in three seasons, finishing eighth on the ladder with 12 wins and 10 losses. The Bombers, however, were no match for the West Coast Eagles in the first elimination final and lost by 55 points to end their season. The defeat extended their finals drought to 15 years, the club having not won a final since 2004. Following the end of the 2019 season, assistant coach Ben Rutten was announced as John Worsfold's successor as senior coach, effective at the end of the 2020 AFL season. Rutten effectively shared co-coaching duties with Worsfold during the 2020 season. 2020 was a particularly disappointing year for the club.
The Bombers failed to make the finals, finishing thirteenth on the AFL ladder with just six wins and a draw from 17 games. Conor McKenna became the first AFL player to test positive to COVID-19 during the pandemic. With Rutten solely at the helm in 2021, Essendon improved significantly from the previous year and returned to the finals, finishing eighth on the ladder with 11 wins and 11 losses. However, the Bombers’ 17-year drought without a finals victory would continue after losing to the Western Bulldogs by 49 points in the first elimination final. Club symbols Guernsey Essendon's first recorded jumpers were navy blue (The Footballers, edited by Thomas Power, 1875) although the club wore 'red and black caps and hose'. In 1877 The Footballers records the addition of 'a red sash over left shoulder'. This is the first time a red sash as part of the club jumper and by 1878 there are newspaper reports referring to Essendon players as 'the men in the sash'. Given that blue and navy blue were the most popular colours at the time it is thought that Essendon adopted a red sash in 1877 to distinguish its players from others in similar coloured jumpers. Clash jumpers In 2007, the AFL Commission laid down the requirement that all clubs must produce an alternative jumper for use in matches where jumpers are considered to clash. From 2007 to 2011, the Essendon clash guernsey was the same design as its home guernsey, but with a substantially wider sash such that the guernsey was predominantly red rather than predominantly black. This was changed after 2011 when the AFL deemed that the wider sash did not provide sufficient contrast. From 2012 to 2016, Essendon's clash guernsey was predominantly grey, with a red sash fimbriated in black; the grey field contained, in small print, the names of all Essendon premiership players. Before the 2016 season, Essendon's changed their clash guernsey to a predominantly red one, featuring a red sash in black. Similar to the grey jumper, the names of Essendon premiership players were also printed outside the sash. Yellow armbands Following Adam Ramanauskas' personal battle with cancer, a "Clash for Cancer" match against was launched in 2006. This was a joint venture between Essendon and the Cancer Council of Victoria to raise funds for the organisation. Despite a formal request to the AFL being denied, players wore yellow armbands for the match which resulted in the club being fined $20,000. In 2007, the AFL agreed to allow yellow armbands to be incorporated into the left sleeve of the jumper. The 'Clash for Cancer' match against Melbourne has become an annual event, repeated in subsequent seasons, though in 2012, 2013, 2014 and 2016, (twice), the Sydney Swans and Brisbane Lions were the opponents in those respective seasons instead of Melbourne. In 2009, the jumpers were auctioned along with yellow boots worn by some players during the match. Club song The club's theme song, "See the Bombers Fly Up", is thought to have been written c. 1959 by Kevin Andrews in the home of player Jeff Gamble at which time Kevin Andrews was living. The song is based on the tune of Johnnie Hamp's 1929 song "(Keep Your) Sunny Side Up" at an increased tempo. Jeff Gamble came up with the line 'See the bombers fly up, up' while Kevin Andrews contributed all or most of the rest. At the time, "(Keep Your) Sunny Side Up" was the theme song for the popular Melbourne-based TV show on Channel 7 Sunnyside Up. 
Essendon went through a succession of coaches during the 1970s, none lasting longer than four years. Off the field the club went through troubled times as well. In 1970, five players went on strike before the season had even begun, demanding higher payments. Essendon did make the finals in 1972 and 1973 under the autocratic direction of Des Tuddenham, recruited from Collingwood, but they were beaten badly in successive Elimination Finals by St. Kilda and did not taste finals action again until the very end of the decade. The Essendon sides of the 1970s were involved in many rough and tough encounters under Tuddenham, who was himself at loggerheads with Ron Barassi at a quarter-time huddle where the two coaches exchanged heated words. Essendon had tough but talented players, with the likes of "Rotten Ronnie" Ron Andrews alongside experienced players such as Barry Davis, Ken Fletcher, Geoff Blethyn, Neville Fields and West Australian import Graham Moss. A controversial half-time all-in brawl with Richmond at Windy Hill in May 1974 and a 1975 encounter with Carlton were testimony to the era. Following the Carlton match, the Herald described Windy Hill as "Boot Hill" because of the extent of the fighting and the high number of reported players (eight in all – four from Carlton and four from Essendon). These incidents peaked in 1980, when new recruit Phil Carman made headlines for head-butting an umpire. The tribunal suspended him for sixteen weeks, and although most observers thought this a fair (or even lenient) sentence, he took his case to the Supreme Court, attracting even more unwanted publicity for the club. Despite this, the club had recruited many talented young players in the late 1970s who would emerge as club greats.
Three of those young players were Simon Madden, Tim Watson and Paul Van der Haar. Terry Daniher and his brother Neale came via a trade with South Melbourne, and Roger Merrett joined soon afterwards to form the nucleus of what would become the formidable Essendon sides of the 1980s. This raw but talented group of youngsters took Essendon to an Elimination Final in 1979 under Barry Davis, but they were again thrashed, this time at the hands of Fitzroy. Davis resigned at the end of the 1980 season after missing out on a finals appearance. One of the few highlights for Essendon supporters during this time was Graham Moss winning the 1976 Brownlow Medal; he was the only Bomber to do so in the 40-year span from 1953 to 1993. Even that was bittersweet, as he quit VFL football to move back to his native Western Australia, where he finished his career as a player and coach at Claremont Football Club. In many ways, Moss's career reflects Essendon's mixed fortunes during the decade. Kevin Sheedy era (1981–2007) Former Richmond player Kevin Sheedy started as head coach in 1981. Essendon reached the Grand Final in 1983 for the first time since 1968; Hawthorn won by a then-record 83 points. In 1984, Essendon won the pre-season competition and completed the regular season on top of the ladder. The club played, and beat, Hawthorn in the 1984 VFL Grand Final to win its 13th premiership – its first since 1965. The teams met again in the 1985 Grand Final, which Essendon also won. At the start of 1986, Essendon were considered unbackable favourites to win a third successive flag, but a succession of injuries to key players Paul Van der Haar (only fifteen games from 1986 to 1988), Tim Watson, Darren Williams, Roger Merrett and Simon Madden saw the club win only eight of its last eighteen games in 1986 and only nine games (plus a draw with Geelong) in 1987. In July 1987, the Bombers suffered a humiliation at the hands of Sydney, who fell two points short of the then-highest score in VFL history. In 1988, Essendon rebounded to sixth place with twelve wins, including a 140-point thrashing of Brisbane in which they had a record sixteen individual goalkickers. In 1989, they rebounded further to second on the ladder with only five losses and thrashed Geelong in the Qualifying Final. However, after a fiery encounter with Hawthorn ended in a convincing defeat, the Bombers were no match for Geelong the following week. In 1990, Essendon were pace-setters almost from the start, but the disruption caused by the drawn Qualifying Final between Collingwood and West Coast was a blow from which they never recovered: the Magpies comprehensively thrashed them in both the Second Semi-Final and the Grand Final. Following the 1991 season, Essendon moved its home games from its traditional home ground at Windy Hill to the larger and newly renovated MCG. This move generated large increases in attendance, membership and revenue for the club. The club's training and administrative base remained at Windy Hill until 2013. Following the retirements of Tim Watson and Simon Madden in the early 1990s, the team was built on new players such as Gavin Wanganeen, Joe Misiti, Mark Mercuri, Michael Long, Dustin Fletcher (son of Ken) and James Hird, who was taken at No. 79 in the 1990 draft. This side became known as the "Baby Bombers", as its core was made up of young players early in their careers.
The team won the 1993 Grand Final against Carlton, and that same year Gavin Wanganeen won the Brownlow Medal, the first awarded to an Essendon player since 1976. Three years later, James Hird was jointly awarded the medal with Michael Voss of Brisbane. In 2000, the club shifted the majority of its home games to the newly opened Docklands Stadium, signing a 25-year deal to play seven home matches per year at the venue, with the other four remaining at the MCG. The 2000 season was one of the most successful by any team in VFL/AFL history: the club opened with 20 consecutive wins before losing to the Western Bulldogs in round 21. The team went on to win its 16th premiership, defeating Melbourne in the Grand Final and thereby completing the most dominant single season in VFL/AFL history. The loss to the Bulldogs was Essendon's only defeat in the entire calendar year (Essendon also won the 2000 pre-season competition). Essendon was less successful after 2001. Lucrative contracts to a number of premiership players had placed serious pressure on the club's salary cap, forcing the club to trade several key players. Blake Caracella, Chris Heffernan, Justin Blumfield, Gary Moorcroft and Damien Hardwick had all departed by the end of 2002; in 2004, Mark Mercuri, Sean Wellman and Joe Misiti retired. The club remained competitive, but progressed no further than the second week of the finals in 2002, 2003 and 2004. Sheedy signed a new three-year contract at the end of 2004. In 2005, Essendon missed the finals for the first time since 1997, and in 2006 the club suffered its worst season under Sheedy, and its worst for more than 70 years, finishing second-last with only three wins (one of which was against defending premiers Sydney, in which newly appointed captain Matthew Lloyd kicked eight goals) and one draw from twenty-two games. Lloyd had replaced James Hird as captain at the start of the season, but when he suffered a season-ending hamstring injury two weeks after that phenomenal performance, David Hille was appointed captain for the remainder of the season. The club improved its on-field position in 2007, but again missed the finals. On-field performance and relocation to Melbourne Airport (2008–2012) Sheedy's contract was not renewed after 2007, ending his 27-year tenure as Essendon coach. Matthew Knights replaced Sheedy as coach, and coached the club for three seasons, reaching the finals once – an eighth-place finish in 2009 at the expense of reigning premiers Hawthorn. On 29 August 2010, shortly after the end of the 2010 home-and-away season, Knights was dismissed as coach. On 28 September 2010, former captain James Hird was named as Essendon's new coach from 2011 on a four-year deal. Former dual premiership-winning coach and triple-premiership Essendon player Mark Thompson later joined Hird on the coaching panel. In Hird's first season, Essendon finished eighth. The club started strongly in 2012, sitting fourth with a 10–3 record at the halfway mark of the season, but it won only one more match for the year, finishing eleventh to miss the finals. In 2013 the club moved its training and administrative base to the True Value Solar Centre, a new facility in the suburb of Melbourne Airport which it had developed in conjunction with the Australian Paralympic Committee.
Essendon holds a 37-year lease at the facility and maintains a lease at Windy Hill, using that venue for its reserves team's home matches in the Victorian Football League and for a social club and merchandise store on the site. ASADA/WADA investigation (2013–2016) During 2013, the club was investigated by the AFL and the Australian Sports Anti-Doping Authority (ASADA) over its 2012 player supplements and sports science program, specifically over allegations of the illegal use of peptide supplements. An internal review found it to have "established a supplements program that was experimental, inappropriate and inadequately vetted and controlled", and on 27 August 2013, the club was found guilty of bringing the game into disrepute for this reason. Among its penalties, the club was fined A$2 million, stripped of early draft picks in the following two drafts, and forfeited its place in the 2013 finals series (having originally finished seventh on the ladder); Hird was suspended from coaching for twelve months. Several office-bearers also resigned their posts during the controversy, including chairman David Evans and CEO Ian Robson. In the midst of the supplements saga, assistant coach Mark Thompson took over as coach for the 2014 season during Hird's suspension. He led the club back to the finals with a seventh-place finish, but in a tense second Elimination Final against archrivals North Melbourne, the Bombers led by as much as 27 points at half time before a resurgent Kangaroos side came back to win by 12 points. After the 2014 season, Thompson left the club to make way for Hird's return to the senior coaching role. In June 2014, thirty-four players were issued show-cause notices alleging the use of the banned peptide Thymosin beta-4 during the program. The players faced the AFL Anti-Doping Tribunal over the 2014/15 off-season, and on 31 March 2015 the tribunal returned a not-guilty verdict, determining that it was "not comfortably satisfied" that the players had been administered the peptide. Hird returned as senior coach for the 2015 season, and after a strong start, the club's form severely declined after the announcement that WADA would appeal the decision of the AFL Anti-Doping Tribunal. The effect of the appeal on the team's morale was devastating, and they went on to win only six games for the year. Under extreme pressure, Hird resigned on 18 August 2015 following a disastrous 112-point loss to Adelaide. Former West Coast Eagles premiership coach John Worsfold was appointed as the new senior coach on a three-year contract. On 12 January 2016, the Court of Arbitration for Sport overturned the AFL Anti-Doping Tribunal's decision, finding that 34 past and present Essendon players had taken the banned substance Thymosin beta-4. As a result, all 34 players, 12 of whom were still at the club, were given two-year suspensions. However, the effective length of each suspension was reduced because the players had already served provisional suspensions during the 2014/15 off-season. Essendon therefore contested the 2016 season with twelve of its regular senior players under suspension. To allow the club to remain competitive, the AFL permitted Essendon to upgrade all five of its rookie-listed players and to sign an additional ten players to cover the loss of the suspended players for the season.
Given this unprecedented situation, many in the football community predicted the club would go through the 2016 AFL season without a win; however, it managed three victories, in rounds 2, 21 and 23. The absence of its most experienced players also allowed the development of its young players, with Zach Merrett and Orazio Fantasia having breakout years, while Darcy Parish and Anthony McDonald-Tipungwuti impressed in their debut seasons. Merrett acted as captain in the side's round 21 win over the Suns. The club finished on the bottom of the ladder, claiming its first wooden spoon since 1933. Post-investigation (2017–present) Essendon made its final financial settlement related to the supplements saga in September 2017, just before the finals started. The club also improved vastly on its 2016 performance, finishing seventh in the home-and-away season and becoming the first team since West Coast in 2011 to go from wooden spooner to a finals appearance, but it lost its only final to Sydney. The 2017 season also saw the retirements of much-loved club legend and former captain Jobe Watson, midfielder Brent Stanton, and former Geelong star James Kelly, who later took up a development coaching role at the club. Midfielder Heath Hocking, who played 126 games for the club, was delisted. Expectations were high for the 2018 season after an outstanding off-season. The recruitment of Jake Stringer, Adam Saad and Devon Smith from the Western Bulldogs, Gold Coast Suns and Greater Western Sydney Giants respectively was expected to put Essendon firmly into premiership contention. Essendon beat the previous year's runner-up, Adelaide (who went on to beat reigning premiers Richmond the following round), in round one, but its form then slumped severely: the club won only one of its next seven games and lost to the then-winless Carlton in round eight. Senior assistant coach Mark Neeld was sacked by the club the following Monday. The team's form improved sharply after this, with wins against top-eight sides Geelong, GWS, eventual premiers West Coast and Sydney, and ten victories from the last 13 games of the season. However, the revival came too late: an eight-point loss to reigning premiers Richmond in round 22 ended any hopes of reaching the finals. At season's end, the club did not offer veteran Brendon Goddard a new contract for 2019. Essendon acquired Dylan Shiel from Greater Western Sydney in one of the most high-profile trades of the 2018 AFL trade period. The Bombers had inconsistent form throughout the 2019 season but qualified for the finals for the second time in three seasons, finishing eighth on the ladder with 12 wins and 10 losses. The Bombers, however, were no match for the West Coast Eagles in the first Elimination Final and lost by 55 points to end their season. The defeat extended the club's finals drought to 15 years, Essendon not having won a final since 2004. Following the end of the 2019 season, assistant coach Ben Rutten was announced as John Worsfold's successor as senior coach, effective at the end of the 2020 AFL season. Rutten shared coaching duties with Worsfold during the 2020 season. 2020 was a particularly disappointing year for the club. The Bombers failed to make the finals, finishing thirteenth on the AFL ladder with just six wins and a draw from 17 games. Conor McKenna became the first AFL player to test positive for COVID-19 during the pandemic.
With Rutten solely at the helm in 2021, Essendon improved significantly on the previous year and returned to the finals, finishing eighth on the ladder with 11 wins and 11 losses. However, the Bombers' 17-year drought without a finals victory continued after they lost to the Western Bulldogs by 49 points in the first Elimination Final. Club symbols Guernsey Essendon's first recorded jumpers were navy blue (The Footballers, edited by Thomas Power, 1875), although the club wore 'red and black caps and hose'. In 1877 The Footballers records the addition of 'a red sash over left shoulder'. This is the first recorded appearance of a red sash as part of the club jumper, and by 1878 newspaper reports were referring to Essendon players as 'the men in the sash'. Given that blue and navy blue were the most popular colours at the time, it is thought that Essendon adopted the red sash in 1877 to distinguish its players from others in similarly coloured jumpers. Clash jumpers In 2007, the AFL Commission laid down the requirement that all clubs must produce an alternative jumper for use in matches where jumpers are considered to clash. From 2007 to 2011, the Essendon clash guernsey was the same design as its home guernsey, but with a substantially wider sash, such that the guernsey was predominantly red rather than predominantly black. This was changed after 2011, when the AFL deemed that the wider sash did not provide sufficient contrast. From 2012 to 2016, Essendon's clash guernsey was predominantly grey, with a red sash fimbriated in black; the grey field contained, in small print, the names of all Essendon premiership players. Before the 2016 season, Essendon changed its clash guernsey to a predominantly red design with a black sash. As on the grey jumper, the names of Essendon premiership players were printed outside the sash. Yellow armbands Following Adam Ramanauskas' personal battle with cancer, a "Clash for Cancer" match against Melbourne was launched in 2006. This was a joint venture between Essendon and the Cancer Council of Victoria to raise funds for the organisation. A formal request to the AFL was denied, but the players wore yellow armbands for the match anyway, which resulted in the club being fined $20,000. In 2007, the AFL agreed to allow yellow armbands to be incorporated into the left sleeve of the jumper. The 'Clash for Cancer' match against Melbourne has become an annual event, repeated in subsequent seasons, though in 2012, 2013, 2014 and 2016 the Sydney Swans (twice) and the Brisbane Lions were the opponents instead of Melbourne. In 2009, the jumpers were auctioned along with yellow boots worn by some players during the match. Club song The club's theme song, "See the Bombers Fly Up", is thought to have been written c. 1959 by Kevin Andrews in the home of player Jeff Gamble, where Andrews was living at the time. The song is based on the tune of Johnnie Hamp's 1929 song "(Keep Your) Sunny Side Up", played at an increased tempo. Gamble came up with the line 'See the bombers fly up, up', while Andrews contributed all or most of the rest. At the time, "(Keep Your) Sunny Side Up" was the theme song of Sunnyside Up, a popular Melbourne-based TV show on Channel 7. The official version of the song was recorded in 1972 by the Fable Singers and is still used today. The song, as with those of all other AFL clubs, is played prior to every match and at the conclusion of matches the team wins. See the Bombers fly up, up! To win the premiership flag.
Our boys who play this grand old game, Are always striving for glory and fame! See the bombers fly up, up, The other teams they don't fear; They all try their best, But they can't get near, As the bombers fly up! Songwriter Mike Brady, of "Up There Cazaly" fame, penned an updated version of the song in 1999, complete with a new verse arrangement, but it was not well received; this version is, however, occasionally played at club functions. Logo and mascot The club's current logo was introduced in 1998, making it the second-oldest AFL logo currently in use, behind St. Kilda's logo, which was introduced in 1995. The club's mascot, "Skeeta Reynolds", is a mosquito named after Dick Reynolds and created in honour of the team's back-to-back premiership sides of the 1920s, known as the "Mosquito Fleet". The mascot was first named through a competition run in the Bomber magazine, with "Skeeta" the winning entry; this was later extended to "Skeeta Reynolds". He appears as a red mosquito in an Essendon jumper and wears a red and black scarf. Membership Rivalries Essendon has a four-way rivalry with Carlton, Collingwood and Richmond, the four being the biggest and most supported clubs in Victoria. Matches between the clubs are often close regardless of form and ladder positions. If out of the race themselves, all four have the desire to deny the others a finals spot or a premiership. Essendon also has a fierce rivalry with Hawthorn stemming from the 1980s, which became even more heated when Matthew Lloyd knocked out Brad Sewell with a bump, leading to an all-in brawl between the sides. Additionally, Essendon has a three-decade rivalry with the West Coast Eagles. Carlton – The rivalry between Essendon and Carlton is considered one of the strongest in the league. With the teams sharing the record of 16 premierships, both sides are keen to become the outright leader or, if out of the finals race, at least to ensure the other does not get ahead. The rivalry deepened when Carlton beat Essendon, the 1999 minor premiers and premiership favourites, by one point in that year's Preliminary Final. Other notable meetings between the two clubs include the 1908, 1947, 1949, 1962 and 1968 VFL Grand Finals and the 1993 AFL Grand Final, several of which were decided by small margins. Collingwood – In the early days of the VFL, this rivalry grew out of several Grand Final meetings: 1901, 1902 and 1911. The teams did not meet in a Grand Final again until 1990, when Collingwood won to draw level with the Bombers on 14 premierships and denied Essendon the chance to join Carlton on 15 flags. Since 1995 the rivalry has been even fiercer, with the clubs facing off annually in the Anzac Day clash, a match often described as the second-biggest of the season (behind only the Grand Final). As possibly the two biggest football clubs in Victoria, the sides always attract a huge crowd regardless of their ladder positions, and both have a great desire to win the match whatever their season prospects. Richmond – This rivalry stems from the 1942 Grand Final, which Essendon won. In 1974, a half-time brawl involving trainers, officials and players took place at Windy Hill; it has become infamous as one of the biggest ever. The teams did not meet in the finals between 1944 and 1995, but there have been many close home-and-away matches, a result of each team's "never say die" attitude and ability to come back from significant deficits in the dying stages of matches.
Meetings in the AFL's Rivalry Round (2006 and 2009) and the annual Dreamtime at the 'G match, played since 2005, have re-ignited the rivalry and the passion between the clubs and their supporters. In recent years the rivalry has been promoted as the "Clash of the Sash". Hawthorn – The two sides had a number of physical encounters in the mid-1980s, when they were the top two sides in the competition. The rivalry was exacerbated when Dermott Brereton ran through Essendon's three-quarter-time huddle during a match in 1988, and again by an all-in brawl during a 2004 match allegedly instigated by Brereton (now known as the Line in the Sand Match, after the direction allegedly given by Brereton for the Hawthorn players to make a physical stand). This was reminiscent of the 1980s, when battles with Hawthorn were often hard, uncompromising affairs. In round 22 of the 2009 season, Essendon and Hawthorn played off for the last remaining finals spot. The teams played out an extremely physical game and, despite being 22 points down at half time, Essendon went on to win by 17 points. The game included a brawl shortly after half time, sparked by Essendon captain Matthew Lloyd knocking out Hawthorn midfielder Brad Sewell, which led Hawthorn's Campbell Brown to label Lloyd a "sniper" and to promise revenge if Lloyd played on in 2010. North Melbourne – One of the fiercest rivalries in the AFL can be traced back to 1896, when several clubs, including Essendon, broke away from the Victorian Football Association to form the Victorian Football League. North sought to join the breakaway competition; some argue they were refused because Essendon felt threatened by North's proximity and feared that their inclusion could drain Essendon of vital talent. More than 100 years later, some North supporters have not forgiven Essendon for the decision and have blamed the Bombers for their small supporter base and gate revenue. North were finally admitted into the VFL in 1925, alongside Footscray and Hawthorn. In 1950, the two sides met in their first and only Grand Final to date, which Essendon won by 38 points. The rivalry flared again in the 1980s: in 1982 the Krakouer brothers, Jim and Phil, led the Roos to an Elimination Final win, and Essendon had their revenge a year later, winning a Preliminary Final by 86 points. The rivalry was re-ignited in the late 1990s and early 2000s by the on-field success of the two sides. In the lead-up to the 1998 finals series, and despite Essendon having lost six of their previous eight meetings with the Roos, legendary Essendon coach Kevin Sheedy publicly labelled North executives Greg Miller and Mark Dawson "soft", in response to commentators' suggestions that his own team was soft. The Kangaroos beat Essendon in the much-hyped encounter that followed (a Qualifying Final), and |
and technique Blyton worked in a wide range of fictional genres, from fairy tales to animal, nature, detective, mystery, and circus stories, but she often "blurred the boundaries" in her books and encompassed a range of genres even in her short stories. In a 1958 article published in The Author, she wrote that there were a "dozen or more different types of stories for children", and that she had tried them all, but her favourites were those with a family at their centre. In a letter to the psychologist Peter McKellar, Blyton describes her writing technique: In another letter to McKellar she describes how in just five days she wrote the 60,000-word book The River of Adventure, the eighth in her Adventure Series, by listening to what she referred to as her "under-mind", which she contrasted with her "upper conscious mind". Blyton was unwilling to conduct any research or planning before beginning work on a new book, which, coupled with the lack of variety in her life, according to Druce almost inevitably presented the danger that she might unconsciously plagiarise the books she had read, including her own – as, he argues, she clearly did. Gillian has recalled that her mother "never knew where her stories came from", but that she used to talk about them coming from her "mind's eye", as did William Wordsworth and Charles Dickens. Blyton had "thought it was made up of every experience she'd ever had, everything she'd seen or heard or read, much of which had long disappeared from her conscious memory", but she never knew the direction her stories would take. Blyton further explained in her autobiography that "If I tried to think out or invent the whole book, I could not do it. For one thing, it would bore me and for another, it would lack the 'verve' and the extraordinary touches and surprising ideas that flood out from my imagination." Blyton's daily routine varied little over the years. She usually began writing soon after breakfast, with her portable typewriter on her knee and her favourite red Moroccan shawl nearby; she believed that the colour red acted as a "mental stimulus" for her. Stopping only for a short lunch break, she continued writing until five o'clock, by which time she would usually have produced 6,000–10,000 words. A 2000 article in The Malay Mail considers the children in Blyton's books to have "lived in a world shaped by the realities of post-war austerity", enjoying freedom without the political correctness of today, which offers modern readers of Blyton's novels a form of escapism. Brandon Robshaw of The Independent refers to the Blyton universe as "crammed with colour and character", "self-contained and internally consistent", noting that Blyton exemplifies a strong mistrust of adults and figures of authority in her works, creating a world in which children govern. Gillian noted that in her mother's adventure, detective and school stories for older children, "the hook is the strong storyline with plenty of cliffhangers, a trick she acquired from her years of writing serialised stories for children's magazines. There is always a strong moral framework in which bravery and loyalty are (eventually) rewarded". Blyton herself wrote that "my love of children is the whole foundation of all my work". Victor Watson, Assistant Director of Research at Homerton College, Cambridge, believes that Blyton's works reveal an "essential longing and potential associated with childhood", and notes how the opening pages of The Mountain of Adventure present a "deeply appealing ideal of childhood".
He argues that Blyton's work differs from that of many other authors in its approach, describing the narrative of the Famous Five series, for instance, as "like a powerful spotlight, it seeks to illuminate, to explain, to demystify. It takes its readers on a roller-coaster story in which the darkness is always banished; everything puzzling, arbitrary, evocative is either dismissed or explained". Watson further notes how Blyton often used minimalist visual descriptions and introduced a few careless phrases, such as "gleamed enchantingly", to appeal to her young readers. From the mid-1950s, rumours began to circulate that Blyton had not written all the books attributed to her, a charge she found particularly distressing. She published an appeal in her magazine asking children to let her know if they heard such stories and, after one mother informed her that she had attended a parents' meeting at her daughter's school during which a young librarian had repeated the allegation, Blyton decided in 1955 to begin legal proceedings. The librarian was eventually forced to make a public apology in open court early the following year, but the rumours that Blyton operated a "company" of ghost writers persisted, as some found it difficult to believe that one woman working alone could produce such a volume of work. Blyton's Conservative personal politics were often in view in her fiction. In The Mystery of the Missing Necklace (an instalment of The Five Find-Outers), she uses the character of young Elizabeth ("Bets") to deliver a statement praising Winston Churchill and describing the politician as a "statesman". Charitable work Blyton felt a responsibility to provide her readers with a positive moral framework, and she encouraged them to support worthy causes. Her view, expressed in a 1957 article, was that children should help animals and other children rather than adults: Blyton and the members of the children's clubs she promoted via her magazines raised a great deal of money for various charities; according to Blyton, membership of her clubs meant "working for others, for no reward". The largest of the clubs she was involved with was the Busy Bees, the junior section of the People's Dispensary for Sick Animals, an organisation Blyton had actively supported since 1933. The club itself had been set up by Maria Dickin in 1934, and after Blyton publicised its existence in the Enid Blyton Magazine it attracted 100,000 members in three years. Such was Blyton's popularity among children that, after she became Queen Bee in 1952, more than 20,000 additional members were recruited in her first year in office. The Enid Blyton Magazine Club was formed in 1953. Its primary objective was to raise funds to help those children with cerebral palsy who attended a centre in Cheyne Walk, in Chelsea, London, by, among other things, furnishing an on-site hostel. The Famous Five series gathered such a following that readers asked Blyton if they might form a fan club. She agreed, on condition that it serve a useful purpose, and suggested that it could raise funds for the Shaftesbury Society Babies' Home in Beaconsfield, on whose committee she had served since 1948. The club was established in 1952, and provided funds for equipping a Famous Five Ward at the home, a paddling pool, sun room, summer house, playground, birthday and Christmas celebrations, and visits to the pantomime. By the late 1950s Blyton's clubs had a membership of 500,000, and they raised £35,000 over the six years of the Enid Blyton Magazine's run.
By 1974 the Famous Five Club had a membership of 220,000, and was growing at a rate of 6,000 new members a year. The Beaconsfield home it was set up to support closed in 1967, but the club continued to raise funds for other paediatric charities, including an Enid Blyton bed at Great Ormond Street Hospital and a mini-bus for disabled children at Stoke Mandeville Hospital. Jigsaw puzzles and games Blyton capitalised upon her commercial success as an author by negotiating agreements with jigsaw puzzle and games manufacturers from the late 1940s onwards; by the early 1960s some 146 different companies were involved in merchandising Noddy alone. In 1948 Bestime released four jigsaw puzzles featuring her characters, and the first Enid Blyton board game appeared, Journey Through Fairyland, created by BGL. The first card game, Faraway Tree, appeared from Pepys in 1950. In 1954 Bestime released the first four jigsaw puzzles of the Secret Seven, and the following year a Secret Seven card game appeared. Bestime released the Little Noddy Car Game in 1953 and the Little Noddy Leap Frog Game in 1955, and in 1956 the American manufacturer Parker Brothers released Little Noddy's Taxi Game, a board game in which Noddy drives about town, picking up various characters. Bestime released its Plywood Noddy Jigsaws series in 1957, and a Noddy jigsaw series featuring cards appeared from 1963, with illustrations by Robert Lee. Arrow Games became the chief producer of Noddy jigsaws in the late 1970s and early 1980s. Whitman manufactured four new Secret Seven jigsaw puzzles in 1975, and produced four new Malory Towers ones two years later. Personal life On 28 August 1924, Blyton married Major Hugh Alexander Pollock, DSO (1888–1968) at Bromley Register Office, without inviting her family. They married shortly after his divorce from his first wife, with whom he had two sons, one of them already deceased. Pollock was editor of the book department at the publishing firm George Newnes, which became Blyton's regular publisher. It was he who asked her to write a book about animals, resulting in The Zoo Book, completed in the month before their marriage. They initially lived in a flat in Chelsea before moving to Elfin Cottage in Beckenham in 1926 and then to Old Thatch in Bourne End (called Peterswood in her books) in 1929. Blyton's first daughter, Gillian, was born on 15 July 1931, and, after a miscarriage in 1934, she gave birth to a second daughter, Imogen, on 27 October 1935. In 1938, she and her family moved to a house in Beaconsfield, named Green Hedges by Blyton's readers following a competition in her magazine. By the mid-1930s Pollock had become a secret alcoholic, withdrawing increasingly from public life – possibly triggered by his meetings, as a publisher, with Winston Churchill, which may have reawakened the trauma Pollock had suffered during World War I. With the outbreak of World War II, he became involved in the Home Guard and also re-encountered Ida Crowe, an aspiring writer 19 years his junior, whom he had first met years earlier.
He offered her a post as his secretary at a Home Guard training centre at Denbies, a Gothic mansion in Surrey belonging to Lord Ashcombe, and they began a romantic relationship. Blyton's marriage to Pollock had been troubled for years, and according to Crowe's memoir, Blyton had a series of affairs, including a lesbian relationship with one of the children's nannies. In 1941, Blyton met Kenneth Fraser Darrell Waters, a London surgeon with whom she began a serious affair. Pollock discovered the liaison and threatened to initiate divorce proceedings. Because of fears that exposure of her adultery would ruin her public image, it was ultimately agreed that Blyton, rather than Pollock, would file for divorce. According to Crowe's memoir, Blyton promised that if he admitted to infidelity she would allow him parental access to their daughters; but after the divorce he was denied contact with them, and Blyton made sure he was subsequently unable to find work in publishing. Pollock, having married Crowe on 26 October 1943, eventually resumed his heavy drinking and was forced to petition for bankruptcy in 1950. Blyton and Darrell Waters married at the City of Westminster Register Office on 20 October 1943. She changed the surname of her daughters to Darrell Waters and publicly embraced her new role as a happily married and devoted doctor's wife. After discovering she was pregnant in the spring of 1945, Blyton miscarried five months later following a fall from a ladder. The baby would have been Darrell Waters's first child and the son for whom they both longed. Her love of tennis included playing naked, nude tennis being "a common practice in those days among the more louche members of the middle classes". Blyton's health began to deteriorate in 1957, when she started to feel faint and breathless during a round of golf, and by 1960 she was displaying signs of dementia. Her agent, George Greenfield, recalled that it was "unthinkable" for the "most famous and successful of children's authors with her enormous energy and computerlike memory" to be losing her mind and suffering from what is now known as Alzheimer's disease in her mid-60s. Worsening Blyton's situation was her husband's declining health throughout the 1960s: he suffered from severe arthritis in his neck and hips and from deafness, and he became increasingly ill-tempered and erratic until his death on 15 September 1967. The story of Blyton's life was dramatised in a BBC film entitled Enid, which aired in the United Kingdom on BBC Four on 16 November 2009. Helena Bonham Carter, who played the title role, described Blyton as "a complete workaholic, an achievement junkie and an extremely canny businesswoman" who "knew how to brand herself, right down to the famous signature". Death and legacy During the months following her husband's death, Blyton became increasingly ill and moved into a nursing home three months before her death. She died in her sleep of Alzheimer's disease at the Greenways Nursing Home, Hampstead, North London, on 28 November 1968, aged 71. A memorial service was held at St James's Church, Piccadilly, and she was cremated at Golders Green Crematorium, where her ashes remain. Blyton's home, Green Hedges, was auctioned on 26 May 1971 and demolished in 1973; the site is now occupied by houses and a street named Blyton Close. An English Heritage blue plaque commemorates Blyton at Hook Road in Chessington, where she lived from 1920 to 1924.
In 2014, a plaque recording her time as a Beaconsfield resident from 1938 until her death in 1968 was unveiled in the town hall gardens, next to small iron figures of Noddy and Big Ears. Since her death and the publication of her daughter Imogen's 1989 autobiography, A Childhood at Green Hedges, Blyton has emerged as an emotionally immature, unstable and often malicious figure. Imogen considered her mother to be "arrogant, insecure, pretentious, very skilled at putting difficult or unpleasant things out of her mind, and without a trace of maternal instinct. As a child, I viewed her as a rather strict authority. As an adult I pitied her." Blyton's eldest daughter, Gillian, remembered her rather differently, however, as "a fair and loving mother, and a fascinating companion". The Enid Blyton Trust for Children was established in 1982, with Imogen as its first chairman, and in 1985 it established the National Library for the Handicapped Child. Enid Blyton's Adventure Magazine began publication in September 1985, and on 14 October 1992 the BBC began publishing Noddy Magazine; it released the Noddy CD-Rom in October 1996. The first Enid Blyton Day was held at Rickmansworth on 6 March 1993, and in October 1996 the Enid Blyton award, The Enid, was established for those who have made outstanding contributions to children. The Enid Blyton Society was formed in early 1995 to provide "a focal point for collectors and enthusiasts of Enid Blyton" through its thrice-annual Enid Blyton Society Journal, its annual Enid Blyton Day and its website. On 16 December 1996, Channel 4 broadcast a documentary about Blyton, Secret Lives. To celebrate her centenary in 1997, exhibitions were put on at the London Toy & Model Museum (now closed), the Hereford and Worcester County Museum and Bromley Library, and on 9 September the Royal Mail issued centenary stamps. The London-based entertainment and retail company Trocadero plc purchased Blyton's Darrell Waters Ltd in 1995 for £14.6 million and established a subsidiary, Enid Blyton Ltd, to handle all intellectual properties, character brands and media in Blyton's works. The group changed its name to Chorion in 1998 but, after financial difficulties in 2012, sold its assets. In March 2013, Hachette UK acquired world rights in the Blyton estate from Chorion, including The Famous Five series but excluding the rights to Noddy, which had been sold to DreamWorks Classics (formerly Classic Media, now a subsidiary of DreamWorks Animation) in 2012. Blyton's granddaughter, Sophie Smallwood, wrote a new Noddy book to celebrate the character's 60th birthday, 46 years after the last book was published; Noddy and the Farmyard Muddle (2009) was illustrated by Robert Tyndall. In February 2011, the manuscript of a previously unknown Blyton novel, Mr Tumpy's Caravan, was discovered by the archivist at Seven Stories, the National Centre for Children's Books, in a collection of papers belonging to Blyton's daughter Gillian, which Seven Stories had purchased in 2010 following her death. It was initially thought to belong to a comic strip collection of the same name published in 1949, but it appears to be unrelated and is believed to have been written in the 1930s and rejected by a publisher. In a 1982 survey of 10,000 eleven-year-old children, Blyton was voted their most popular writer. She is the world's fourth most-translated author, behind Agatha Christie, Jules Verne and William Shakespeare, with her books translated into 90 languages.
From 2000 to 2010, Blyton was listed as a Top Ten author, selling almost 8 million copies (worth £31.2 million) in the UK alone. In 2003, The Magic Faraway Tree was voted 66th in the BBC's Big Read. In the 2008 Costa Book Awards, Blyton was voted Britain's best-loved author. Her books continue to be very popular among children in Commonwealth nations such as India, Pakistan, Sri Lanka, Singapore, Malta, New Zealand and Australia, and around the world. They have also seen a surge of popularity in China, where they are "big with every generation". In March 2004, Chorion and the Chinese publisher Foreign Language Teaching and Research Press negotiated an agreement over the Noddy franchise, which included bringing the character to television in an animated series, with a potential audience of a further 95 million children under the age of five. Chorion spent around £10 million digitising Noddy and, as of 2002, had made television agreements with at least 11 countries worldwide. Novelists influenced by Blyton include the crime writer Denise Danks, whose fictional detective Georgina Powers is based on George from the Famous Five. Peter Hunt's A Step off the Path (1985) is also influenced by the Famous Five, and the St. Clare's and Malory Towers series provided the inspiration for Jacqueline Wilson's Double Act (1996) and Adèle Geras's Egerton Hall trilogy (1990–92) respectively. Blyton was also important to Stieg Larsson; the series he most often mentioned were the Famous Five and the Adventure books. Critical backlash A.H. Thompson, who compiled an extensive overview of censorship efforts in the United Kingdom's public libraries, dedicated an entire chapter to "The Enid Blyton Affair", and wrote of her in 1975:
In 1979 the company released a Famous Five adventure board game, Famous Five Kirrin Island Treasure. Stephen Thraves wrote eight Famous Five adventure game books, published by Hodder & Stoughton in the 1980s. The first adventure game book of the series, The Wreckers' Tower Game, was published in October 1984. Personal life On 28 August 1924, Blyton married Major Hugh Alexander Pollock, DSO (1888–1968) at Bromley Register Office, without inviting her family. They married shortly after his divorce from his first wife, with whom he had two sons, one of them already deceased. Pollock was editor of the book department in the publishing firm George Newnes, which became Blyton's regular publisher. It was he who requested her to write a book about animals, resulting in The Zoo Book, completed in the month before their marriage. They initially lived in a flat in Chelsea before moving to Elfin Cottage in Beckenham in 1926 and then to Old Thatch in Bourne End (called Peterswood in her books) in 1929. Blyton's first daughter, Gillian, was born on 15 July 1931, and, after a miscarriage in 1934, she gave birth to a second daughter, Imogen, on 27 October 1935. In 1938, she and her family moved to a house in Beaconsfield, named Green Hedges by Blyton's readers, following a competition in her magazine. By the mid-1930s, Pollock had become a secret alcoholic, withdrawing increasingly from public life—possibly triggered through his meetings, as a publisher, with Winston Churchill, which may have reawakened the trauma Pollock suffered during the World War I. With the outbreak of World War II, he became involved in the Home Guard and also re-encountered Ida Crowe, an aspiring writer 19 years his junior, whom he had first met years earlier. He made her an offer to join him as secretary in his posting to a Home Guard training center at Denbies, a Gothic mansion in Surrey belonging to Lord Ashcombe, and they began a romantic relationship. Blyton's marriage to Pollock was troubled for years, and according to Crowe's memoir, she had a series of affairs, including a lesbian relationship with one of the children's nannies. In 1941, Blyton met Kenneth Fraser Darrell Waters, a London surgeon with whom she began a serious affair. Pollock discovered the liaison, and threatened to initiate divorce proceedings. Due to fears that exposure of her adultery would ruin her public image, it was ultimately agreed that Blyton would instead file for divorce against Pollock. According to Crowe's memoir, Blyton promised that if he admitted to infidelity, she would allow him parental access to their daughters; but after the divorce, he was denied contact with them, and Blyton made sure he was subsequently unable to find work in publishing. Pollock, having married Crowe on 26 October 1943, eventually resumed his heavy drinking and was forced to petition for bankruptcy in 1950. Blyton and Darrell Waters married at the City of Westminster Register Office on 20 October 1943. She changed the surname of her daughters to Darrell Waters and publicly embraced her new role as a happily married and devoted doctor's wife. After discovering she was pregnant in the spring of 1945, Blyton miscarried five months later, following a fall from a ladder. The baby would have been Darrell Waters's first child and the son for which they both longed. Her love of tennis included playing naked, with nude tennis "a common practice in those days among the more louche members of the middle classes". 
Blyton's health began to deteriorate in 1957, when, during a round of golf, she started to feel faint and breathless, and, by 1960, she was displaying signs of dementia. Her agent, George Greenfield, recalled that it was "unthinkable" for the "most famous and successful of children's authors with her enormous energy and computerlike memory" to be losing her mind and suffering from what is now known as Alzheimer's disease in her mid-60s. Worsening Blyton's situation was her husband's declining health throughout the 1960s; he suffered from severe arthritis in his neck and hips, deafness, and became increasingly ill-tempered and erratic until his death on 15 September 1967. The story of Blyton's life was dramatised in a BBC film entitled Enid, which aired in the United Kingdom on BBC Four on 16 November 2009. Helena Bonham Carter, who played the title role, described Blyton as "a complete workaholic, an achievement junkie and an extremely canny businesswoman" who "knew how to brand herself, right down to the famous signature". Death and legacy During the months following her husband's death, Blyton became increasingly ill and moved into a nursing home three months before her death. She died in her sleep of Alzheimer's disease at the Greenways Nursing Home, Hampstead, North London, on 28 November 1968, aged 71. A memorial service was held at St James's Church, Piccadilly and she was cremated at Golders Green Crematorium, where her ashes remain. Blyton's home, Green Hedges, was auctioned on 26 May 1971 and demolished in 1973; the site is now occupied by houses and a street named Blyton Close. An English Heritage blue plaque commemorates Blyton at Hook Road in Chessington, where she lived from 1920 to 1924. In 2014, a plaque recording her time as a Beaconsfield resident from 1938 until her death in 1968 was unveiled in the town hall gardens, next to small iron figures of Noddy and Big Ears. Since her death and the publication of her daughter Imogen's 1989 autobiography, A Childhood at Green Hedges, Blyton has emerged as an emotionally immature, unstable and often malicious figure. Imogen considered her mother to be "arrogant, insecure, pretentious, very skilled at putting difficult or unpleasant things out of her mind, and without a trace of maternal instinct. As a child, I viewed her as a rather strict authority. As an adult I pitied her." Blyton's eldest daughter Gillian remembered her rather differently however, as "a fair and loving mother, and a fascinating companion". The Enid Blyton Trust for Children was established in 1982, with Imogen as its first chairman, and in 1985 it established the National Library for the Handicapped Child. Enid Blyton's Adventure Magazine began publication in September 1985 and, on 14 October 1992, the BBC began publishing Noddy Magazine and released the Noddy CD-Rom in October 1996. The first Enid Blyton Day was held at Rickmansworth on 6 March 1993 and, in October 1996, the Enid Blyton award, The Enid, was given to those who have made outstanding contributions towards children. The Enid Blyton Society was formed in early 1995, to provide "a focal point for collectors and enthusiasts of Enid Blyton" through its thrice-annual Enid Blyton Society Journal, its annual Enid Blyton Day and its website. On 16 December 1996, Channel 4 broadcast a documentary about Blyton, Secret Lives. 
To celebrate her centenary in 1997, exhibitions were put on at the London Toy & Model Museum (now closed), Hereford and Worcester County Museum and Bromley Library, and on 9 September the Royal Mail issued centenary stamps. The London-based entertainment and retail company Trocadero plc purchased Blyton's Darrell Waters Ltd in 1995 for £14.6 million and established a subsidiary, Enid Blyton Ltd, to handle all intellectual properties, character brands and media in Blyton's works. The group changed its name to Chorion in 1998 but, after financial difficulties in 2012, sold its assets. Hachette UK acquired world rights in the Blyton estate from Chorion in March 2013, including The Famous Five series but excluding the rights to Noddy, which had been sold to DreamWorks Classics (formerly Classic Media, now a subsidiary of DreamWorks Animation) in 2012. Blyton's granddaughter, Sophie Smallwood, wrote a new Noddy book to celebrate the character's 60th birthday, 46 years after the last book was published; Noddy and the Farmyard Muddle (2009) was illustrated by Robert Tyndall. In February 2011, the manuscript of a previously unknown Blyton novel, Mr Tumpy's Caravan, was discovered by the archivist at Seven Stories, National Centre for Children's Books, in a collection of papers belonging to Blyton's daughter Gillian, purchased by Seven Stories in 2010 following her death. It was initially thought to belong to a comic strip collection of the same name published in 1949, but it appears to be unrelated and is believed to be something written in the 1930s that had been rejected by a publisher. In a 1982 survey of 10,000 eleven-year-old children, Blyton was voted their most popular writer. She is the world's fourth most-translated author, behind Agatha Christie, Jules Verne and William Shakespeare, with her books translated into 90 languages. From 2000 to 2010, Blyton was listed as a Top Ten author, selling almost 8 million copies (worth £31.2 million) in the UK alone. In 2003, The Magic Faraway Tree was voted 66th in the BBC's Big Read. In the 2008 Costa Book Awards, Blyton was voted Britain's best-loved author. Her books continue to be very popular among children in Commonwealth nations such as India, Pakistan, Sri Lanka, Singapore, Malta, New Zealand and Australia, and around the world. They have also seen a surge of popularity in China, where they are "big with every generation". In March 2004, Chorion and the Chinese publisher Foreign Language Teaching and Research Press negotiated an agreement over the Noddy franchise, which included bringing the character to an animated television series, with a potential audience of a further 95 million children under the age of five. Chorion spent around £10 million digitising Noddy and, as of 2002, had made television agreements with at least 11 countries worldwide. Novelists influenced by Blyton include the crime writer Denise Danks, whose fictional detective Georgina Powers is based on George from the Famous Five. Peter Hunt's A Step off the Path (1985) is also influenced by the Famous Five, and the St. Clare's and Malory Towers series provided the inspiration for Jacqueline Wilson's Double Act (1996) and Adèle Geras's Egerton Hall trilogy (1990–92) respectively. Blyton was also important to Stieg Larsson: "The series Stieg Larsson most often mentioned were the Famous Five and the Adventure books." Critical backlash A.H.
Thompson, who compiled an extensive overview of censorship efforts in the United Kingdom's public libraries, dedicated an entire chapter of his 1975 study to "The Enid Blyton Affair". Blyton's range of plots and settings has been described as limited, repetitive and continually recycled. Many of her books were critically assessed by teachers and librarians, deemed unfit for children to read, and removed from syllabuses and public libraries. Responding to claims that her moral views were "dependably predictable", Blyton commented that "most of you could write down perfectly correctly all the things that I believe in and stand for – you have found them in my books, and a writer's books are always a faithful reflection of himself". From the 1930s to the 1950s the BBC operated a de facto ban on dramatising Blyton's books for radio, considering her to be a "second-rater" whose work was without literary merit. The children's literary critic Margery Fisher likened Blyton's books to "slow poison", and Jean E. Sutcliffe of the BBC's schools broadcast department wrote of Blyton's ability to churn out "mediocre material", noting that "her capacity to do so amounts to genius ... anyone else would have died of boredom long ago". Michael Rosen, Children's Laureate from 2007 until 2009, wrote that "I find myself flinching at occasional bursts of snobbery and the assumed level of privilege of the children and families in the books." The children's author Anne Fine presented an overview of the concerns about Blyton's work, and of responses to them, on BBC Radio 4 in November 2008, in which she noted the "drip, drip, drip of disapproval" associated with the books. Blyton's response to her critics was that she was uninterested in the views of anyone over the age of 12, stating that half the attacks on her work were motivated by jealousy and the rest came from "stupid people who don't know what they're talking about because they've never read any of my books". Despite criticism by contemporaries that the quality of her work began to suffer in the 1950s as its volume increased, Blyton nevertheless capitalised on being generally regarded at the time as "a more 'savoury', English alternative" to what some considered an "invasion" of Britain by American culture, in the form of "rock music, horror comics, television, teenage culture, delinquency, and Disney". According to the British academic Nicholas Tucker, the works of Enid Blyton have been "banned from more public libraries over the years than is the case with any other adult or children's author", though such attempts to quell the popularity of her books seem to have been largely unsuccessful, and "she still remains very widely read". Simplicity Some librarians felt that Blyton's restricted use of language, a conscious product of her teaching background, was prejudicial to an appreciation of more literary qualities. In a scathing article published in Encounter in 1958, the journalist Colin Welch remarked that it was "hard to see how a diet of Miss Blyton could help with the 11-plus or even with the Cambridge English Tripos", but reserved his harshest criticism for Blyton's Noddy, describing him as an "unnaturally priggish ... sanctimonious ... witless, spiritless, snivelling, sneaking doll."
The author and educational psychologist Nicholas Tucker notes that it was common to see Blyton cited as people's favourite or least favourite author according to their age, and argues that her books create an "encapsulated world for young readers that simply dissolves with age, leaving behind only memories of excitement and strong identification". Fred Inglis considers Blyton's books to be technically easy to read but also "emotionally and cognitively easy". He mentions that the psychologist Michael Woods believed that Blyton was different from many other older authors writing for children in that she seemed untroubled by presenting them with a world that differed from reality. Woods surmised that Blyton "was a child, she thought as a child, and wrote as a child ... the basic feeling is essentially pre-adolescent ... Enid Blyton has no moral dilemmas ... Inevitably Enid Blyton was labelled by rumour a child-hater. If true, such a fact should come as no surprise to us, for as a child herself all other children can be nothing but rivals for her." Inglis argues, though, that Blyton was clearly devoted to children and put an enormous amount of energy into her work, with a powerful belief in "representing the crude moral diagrams and garish fantasies of a readership". Blyton's daughter Imogen has stated that her mother "loved a relationship with children through her books", but that real children were an intrusion, and there was no room for intruders in the world that Blyton occupied through her writing. Accusations of racism, xenophobia and sexism Accusations of racism in Blyton's books were first made by Lena Jeger in a Guardian article published in 1966. In the context of discussing possible moves to restrict publications inciting racial hatred, Jeger was critical of Blyton's The Little Black Doll, originally published in 1937. Sambo, the black doll of the title, is hated by his owner and the other toys owing to his "ugly black face", and runs away. A shower of "magic rain" washes his face clean, after which he is welcomed back home with his now pink face. Jamaica Kincaid also considers the Noddy books to be "deeply racist" because of the blonde children and the black golliwogs. In Blyton's 1944 novel The Island of Adventure, a black servant named Jo-Jo is very intelligent, but is particularly cruel to the children. Accusations of xenophobia were also made. As George Greenfield observed, "Enid was very much part of that between the wars middle class which believed that foreigners were untrustworthy or funny or sometimes both". The publisher Macmillan conducted an internal assessment of Blyton's The Mystery That Never Was, submitted to them at the height of her fame in 1960. The review was carried out by the author and books editor Phyllis Hartnoll, in whose view "There is a faint but unattractive touch of old-fashioned xenophobia in the author's attitude to the thieves; they are 'foreign' ... and this seems to be regarded as sufficient to explain their criminality." Macmillan rejected the manuscript, but it was published by William Collins in 1961, and then again in 1965 and 1983. Blyton's depictions of boys and girls are considered by many critics to be sexist. In a Guardian article published in 2005, Lucy Mangan proposed that The Famous Five series depicts a power struggle between Julian, Dick and George (Georgina), in which the female characters either act like boys or are talked down to, as when Dick lectures George: "it's really time you gave up thinking you're as good as a boy".
Revisions to later editions To address criticisms levelled at Blyton's work, some later editions have been altered to reflect more politically progressive attitudes towards issues such as race, gender, violence between young persons and the treatment of children by adults, and to reflect legal changes in Britain as to what young children are allowed to do (e.g. purchasing fireworks) in the years since the stories were originally written; modern reprints of the Noddy series substitute teddy bears or goblins for golliwogs, for instance. The golliwogs who steal Noddy's car and dump him naked in the Dark Wood in Here Comes Noddy Again are replaced in the 1986 revision by goblins, who strip Noddy only of his shoes and hat and return at the end of the story to apologise. The Faraway Tree's Dame Slap, who made regular use of corporal punishment, was changed to Dame Snap, who no longer did so, and the names of Dick and Fanny in the same series were changed to Rick and Frannie. Characters in the Malory Towers and St. Clare's series are no longer spanked or threatened with a spanking, but are instead scolded. References to George's short hair making her look like a boy were removed in revisions to Five on a Hike Together, reflecting the idea that girls need not have long hair to be considered feminine or normal. Anne of The Famous Five stating that boys cannot wear pretty dresses or like girls' dolls was removed. In The Adventurous Four, the names of the young twin girls were changed from Jill and Mary to Pippa and Zoe. In 2010 Hodder, the publisher of the Famous Five series, announced its intention to update the language used in the books, of which it sold more than half a million copies a year. The changes, which Hodder described as "subtle", mainly affect the dialogue rather than the narrative. For instance, "school tunic" becomes "uniform", "mother and father" and "mother and daddy" (the latter used by young female characters and deemed sexist) become "mum and dad", "bathing" is replaced by "swimming", and "jersey" by "jumper". Some commentators see the changes as necessary to encourage modern readers, whereas others regard them as unnecessary and patronising. In 2016 Hodder's parent company Hachette announced that it would abandon the revisions because, based on feedback, they had not been a success. Stage, film and television adaptations In 1954 Blyton adapted Noddy for the stage, producing the Noddy in Toyland pantomime in just two or three weeks. The production was staged at the 2,660-seat Stoll Theatre in Kingsway, London, at Christmas. Its popularity resulted in the show running during the Christmas season for five or six years. Blyton was delighted with its reception by children in the audience, and attended the theatre three or four times a week. TV adaptations of Noddy since 1954 include one in the 1970s narrated by Richard Briers. In 1955 a stage play based on the Famous Five was produced, and in January 1997 the King's Head Theatre embarked on a six-month tour of the UK with The Famous Five Musical to commemorate Blyton's centenary. On 21 November 1998 The Secret Seven Save the World was first performed at the Sherman Theatre in Cardiff. There have also been several film and television adaptations of the Famous Five: by the Children's Film
recognise that these trends began in the Epipalaeolithic. The period may be subdivided into Early, Middle and Late Epipalaeolithic: the Early Epipalaeolithic corresponds to the Kebaran culture, c. 20,000 to 14,500 years ago, the Middle Epipalaeolithic to the Geometric Kebaran, or late phase of the Kebaran, and the Late Epipalaeolithic to the Natufian, 14,500–11,500 BP. The Natufian overlaps with the incipient Neolithic Revolution, the Pre-Pottery Neolithic A. Levant Early Epipalaeolithic The Early Epipalaeolithic, also known as the Kebaran, lasted from 20,000 to 12,150 BP. It followed the Upper Paleolithic Levantine Aurignacian (formerly called Antelian) period throughout the Levant. By the end of the Levantine Aurignacian, gradual changes took place in stone industries. Small stone tools called microliths and retouched bladelets can be found for the first time. The microliths of this period differ markedly from the Aurignacian artifacts. By 18,000 BP the climate and environment had changed, starting a period of transition. The Levant became more arid and the forest vegetation retreated, to be replaced by steppe. The cool and dry period ended at the beginning of Mesolithic 1. The hunter-gatherers of the Aurignacian would have had to modify their way of living and their pattern of settlement to adapt to the changing conditions. The crystallisation of these new patterns resulted in Mesolithic 1. The people developed new types of settlement and new stone industries. The inhabitants of a small Mesolithic 1 site in the Levant left little more than their chipped stone tools behind. The industry consisted of small tools made of bladelets struck off single-platform cores. Besides bladelets, burins and end-scrapers have been found. A few bone tools and some ground stones have also been found. These so-called Mesolithic sites of Asia are far less numerous than those of the Neolithic, and the archaeological remains are very poor. The type site is Kebara Cave, south of Haifa. The Kebaran was characterised by small, geometric microliths. The people were thought to lack the specialised grinders and pounders found in later Near Eastern cultures. The Kebaran is preceded by the Athlitian phase of the Levantine Aurignacian (formerly called Antelian) and followed by the proto-agrarian Natufian culture of the Epipalaeolithic. The appearance of the Kebaran culture, of microlithic type, implies a significant rupture in the cultural continuity of the Levantine Upper Paleolithic. The Kebaran culture, with its use of microliths, is also associated with the use of the bow and arrow and the domestication of the dog. The Kebaran is also characterised by the earliest collecting and processing of wild cereals, known from the excavation of grain-grinding tools. This was the first step towards the Neolithic Revolution. The Kebaran people are believed to have migrated seasonally, dispersing to upland environments in the summer and gathering in caves and rock shelters near lowland lakes in the winter. This diversity of environments may be the reason for the variety of tools found in their toolkits. The Kebaran is generally thought to have been ancestral to the later Natufian culture that occupied much of the same range. Harvesting of cereals The earliest evidence for the use of composite cereal-harvesting tools comes from the glossed flint blades found at the site of Ohalo II, a 23,000-year-old fisher-hunter-gatherers' camp on the shore of the Sea of Galilee, Northern Israel.
The Ohalo site is dated to the junction of the Upper Paleolithic and the Early Epipalaeolithic, and has been attributed to both periods. The wear traces on the tools indicate that they were used for harvesting near-ripe, semi-green wild cereals, shortly before the grain ripens enough to disperse naturally. The study shows that the tools were not used intensively, and that they reflect two harvesting modes: flint knives held by hand and inserts hafted into a handle. The finds reveal the existence of cereal-harvesting techniques and tools some 8,000 years before the Natufian and 12,000 years before the establishment of sedentary farming communities in the Near East during the Neolithic Revolution. Furthermore, the new finds accord well with evidence for the earliest cereal cultivation at the site and for the use of stone grinding implements. Artistic expression in the Kebaran culture Evidence for
being independent. In systems where the legislature is sovereign, the powers and the organization of the executive are completely dependent on what powers the legislature grants it, and the actions of the executive may or may not be subject to judicial review, something which is also controlled by the legislature. The executive may also have legislative or judicial powers in systems where the legislature is sovereign, which is often why the executive is instead referred to as the government, since it often possesses non-executive powers. Ministers In parliamentary systems, the executive is responsible to the elected legislature, i.e. it must maintain the confidence of the legislature (or one part of it, if bicameral). In certain circumstances (varying by state), the legislature can express its lack of confidence in the executive, which causes either a change in the governing party or group of parties or a general election. Parliamentary systems have a head of government (who leads the executive, often called ministers) normally distinct from the head of state (who continues through governmental and electoral changes). In the Westminster type of parliamentary system, the principle of separation of powers is not as entrenched as in some others. Members of the executive (ministers) are also members of the legislature, and hence play an important part in
Fermi had won a new chair of theoretical physics at the University of Rome, one of the first three in theoretical physics in Italy, created by the Minister of Education at the urging of Professor Orso Mario Corbino, who was the university's professor of experimental physics, the Director of the Institute of Physics, and a member of Benito Mussolini's cabinet. Corbino, who also chaired the selection committee, hoped that the new chair would raise the standard and reputation of physics in Italy. The committee chose Fermi ahead of Enrico Persico and Aldo Pontremoli, and Corbino helped Fermi recruit his team, which was soon joined by notable students such as Edoardo Amaldi, Bruno Pontecorvo, Ettore Majorana and Emilio Segrè, and by Franco Rasetti, whom Fermi had appointed as his assistant. They were soon nicknamed the "Via Panisperna boys" after the street where the Institute of Physics was located. Fermi married Laura Capon, a science student at the university, on 19 July 1928. They had two children: Nella, born in January 1931, and Giulio, born in February 1936. On 18 March 1929, Fermi was appointed a member of the Royal Academy of Italy by Mussolini, and on 27 April he joined the Fascist Party. He later opposed Fascism when the 1938 racial laws were promulgated by Mussolini in order to bring Italian Fascism ideologically closer to German National Socialism. These laws threatened Laura, who was Jewish, and put many of Fermi's research assistants out of work. During their time in Rome, Fermi and his group made important contributions to many practical and theoretical aspects of physics. In 1928, he published his Introduction to Atomic Physics, which provided Italian university students with an up-to-date and accessible text. Fermi also gave public lectures and wrote popular articles for scientists and teachers in order to spread knowledge of the new physics as widely as possible. Part of his teaching method was to gather his colleagues and graduate students together at the end of the day and go over a problem, often from his own research. A sign of success was that foreign students now began to come to Italy. The most notable of these was the German physicist Hans Bethe, who came to Rome as a Rockefeller Foundation fellow and collaborated with Fermi on a 1932 paper, "On the Interaction between Two Electrons". At this time, physicists were puzzled by beta decay, in which an electron was emitted from the atomic nucleus. To satisfy the law of conservation of energy, Pauli postulated the existence of an invisible particle with no charge and little or no mass that was emitted at the same time. Fermi took up this idea, which he developed in a tentative paper in 1933 and then in a longer paper the next year that incorporated the postulated particle, which Fermi called a "neutrino". His theory, later referred to as Fermi's interaction, and still later as the theory of the weak interaction, described one of the four fundamental forces of nature. The neutrino was not detected until after his death, and his interaction theory showed why it was so difficult to detect. When he submitted his paper to the British journal Nature, the journal's editor turned it down because it contained speculations which were "too remote from physical reality to be of interest to readers". Thus Fermi saw the theory published in Italian and in German before it was published in English; in the introduction to the 1968 English translation, the physicist Fred L. Wilson noted the theory's significance. In January 1934, Irène Joliot-Curie and Frédéric Joliot announced that they had bombarded elements with alpha particles and induced radioactivity in them. By March, Fermi's assistant Gian-Carlo Wick had provided a theoretical explanation using Fermi's theory of beta decay. Fermi decided to switch to experimental physics, using the neutron, which James Chadwick had discovered in 1932. In March 1934, Fermi wanted to see if he could induce radioactivity with Rasetti's polonium-beryllium neutron source. Neutrons had no electric charge, and so would not be deflected by the positively charged nucleus. This meant that they needed much less energy to penetrate the nucleus than charged particles, and so would not require a particle accelerator, which the Via Panisperna boys did not have.
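To make Pauli's energy-conservation argument above concrete, here is the bookkeeping for the beta decay of a free neutron; the reaction and the Q-value are standard figures supplied for illustration, not taken from this text:

\[
  n \;\to\; p + e^- + \bar{\nu}_e,
  \qquad
  Q = \left(m_n - m_p - m_e\right)c^2 \approx 0.782\ \text{MeV}.
\]

If the decay were two-body (neutron to proton plus electron only), conservation of energy and momentum would force every electron to carry the same fixed energy; the continuous electron spectrum actually observed, with \(0 \le E_e \le Q\), is explained if an unseen third particle carries off the balance, \(E_e + E_{\bar{\nu}} \approx Q\).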
Fermi decided to replace the polonium-beryllium neutron source with a radon-beryllium one, which he created by filling a glass bulb with beryllium powder, evacuating the air, and then adding 50 mCi of radon gas, supplied by Giulio Cesare Trabacchi. This created a much stronger neutron source, whose effectiveness declined with the 3.8-day half-life of radon. He knew that this source would also emit gamma rays, but, on the basis of his theory, he believed that this would not affect the results of the experiment. He started by bombarding platinum, an element with a high atomic number that was readily available, without success. He turned to aluminium, which emitted an alpha particle and produced sodium, which then decayed into magnesium by beta particle emission. He tried lead, without success, and then fluorine in the form of calcium fluoride, which emitted an alpha particle and produced nitrogen, decaying into oxygen by beta particle emission. In all, he induced radioactivity in 22 different elements. Fermi rapidly reported the discovery of neutron-induced radioactivity in the Italian journal La Ricerca Scientifica on 25 March 1934. The natural radioactivity of thorium and uranium made it hard to determine what was happening when these elements were bombarded with neutrons, but, after correctly eliminating the presence of elements lighter than uranium but heavier than lead, Fermi concluded that they had created new elements, which he called hesperium and ausonium. The chemist Ida Noddack suggested that some of the experiments could have produced elements lighter than lead rather than new, heavier elements. Her suggestion was not taken seriously at the time because her team had not carried out any experiments with uranium or built the theoretical basis for this possibility. At that time, fission was thought to be improbable, if not impossible, on theoretical grounds. While physicists expected elements with higher atomic numbers to form from neutron bombardment of lighter elements, nobody expected neutrons to have enough energy to split a heavier atom into two light-element fragments in the manner that Noddack suggested. The Via Panisperna boys also noticed some unexplained effects. The experiment seemed to work better on a wooden table than on a marble tabletop. Fermi remembered that Joliot-Curie and Chadwick had noted that paraffin wax was effective at slowing neutrons, so he decided to try it. When neutrons were passed through paraffin wax, they induced a hundred times as much radioactivity in silver as when the silver was bombarded without the paraffin. Fermi guessed that this was due to the hydrogen atoms in the paraffin; hydrogen in the wood similarly explained the difference between the wooden and marble tabletops. This was confirmed by repeating the effect with water. He concluded that collisions with hydrogen atoms slowed the neutrons: the lower the atomic number of the nucleus a neutron collides with, the more energy it loses per collision, and therefore the fewer collisions are required to slow it down by a given amount. Fermi realised that this induced more radioactivity because slow neutrons were more easily captured than fast ones. He developed a diffusion equation to describe this, which became known as the Fermi age equation.
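The slowing-down argument can be made quantitative. The sketch below is illustrative only; the formula for the mean logarithmic energy loss per elastic collision is standard reactor physics rather than anything stated in this text, and it shows why hydrogen-rich paraffin and water moderate neutrons so much faster than heavier nuclei do:

import math

def xi(A: int) -> float:
    """Mean logarithmic energy decrement per elastic collision
    for a target nucleus of mass number A (xi = 1 for hydrogen)."""
    if A == 1:
        return 1.0
    a = (A - 1) ** 2 / (2 * A)
    return 1.0 + a * math.log((A - 1) / (A + 1))

def collisions_to_thermalize(A: int, e_start=2.0e6, e_end=0.025) -> float:
    """Average number of collisions to moderate a neutron from
    e_start (eV) to e_end (eV); only the energy ratio matters."""
    return math.log(e_start / e_end) / xi(A)

for name, A in [("hydrogen", 1), ("carbon", 12), ("uranium", 238)]:
    print(f"{name:8s} (A={A:3d}): xi = {xi(A):.3f}, "
          f"~{collisions_to_thermalize(A):.0f} collisions")
# hydrogen (A=  1): xi = 1.000, ~18 collisions
# carbon   (A= 12): xi = 0.158, ~115 collisions
# uranium  (A=238): xi = 0.008, ~2172 collisions

A fission-energy neutron needs only about 18 collisions with hydrogen to reach thermal energy, against over a hundred with carbon, which is the quantitative content of Fermi's observation about the paraffin and the wooden table.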
In 1938, Fermi received the Nobel Prize in Physics at the age of 37 for his "demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons". After Fermi received the prize in Stockholm, he did not return home to Italy but continued on to New York City with his family in December 1938, where they applied for permanent residency. The decision to move to America and become U.S. citizens was due primarily to the racial laws in Italy. Manhattan Project Fermi arrived in New York City on 2 January 1939. He was immediately offered positions at five universities, and accepted one at Columbia University, where he had already given summer lectures in 1936. He then received the news that in December 1938 the German chemists Otto Hahn and Fritz Strassmann had detected the element barium after bombarding uranium with neutrons, which Lise Meitner and her nephew Otto Frisch correctly interpreted as the result of nuclear fission. Frisch confirmed this experimentally on 13 January 1939. The news of Meitner and Frisch's interpretation of Hahn and Strassmann's discovery crossed the Atlantic with Niels Bohr, who was to lecture at Princeton University. Isidor Isaac Rabi and Willis Lamb, two Columbia University physicists working at Princeton, found out about it and carried it back to Columbia. Rabi said he told Fermi, but Fermi later gave the credit to Lamb. Noddack had been proven right after all: Fermi had dismissed the possibility of fission on the basis of his calculations, but he had not taken into account the binding energy that would appear when a nuclide with an odd number of neutrons absorbed an extra neutron. For Fermi, the news came as a profound embarrassment, as the transuranic elements that he had partly been awarded the Nobel Prize for discovering had not been transuranic elements at all, but fission products. He added a footnote to this effect to his Nobel Prize acceptance speech. The scientists at Columbia decided that they should try to detect the energy released in the nuclear fission of uranium when bombarded by neutrons. On 25 January 1939, in the basement of Pupin Hall at Columbia, an experimental team including Fermi conducted the first nuclear fission experiment in the United States. The other members of the team were Herbert L. Anderson, Eugene T. Booth, John R. Dunning, G. Norris Glasoe, and Francis G. Slack. The next day, the Fifth Washington Conference on Theoretical Physics began in Washington, D.C., under the joint auspices of George Washington University and the Carnegie Institution of Washington. There, the news of nuclear fission spread even further, fostering many more experimental demonstrations. The French scientists Hans von Halban, Lew Kowarski, and Frédéric Joliot-Curie had demonstrated that uranium bombarded by neutrons emitted more neutrons than it absorbed, suggesting the possibility of a chain reaction; Fermi and Anderson demonstrated the same a few weeks later. Leó Szilárd obtained a quantity of uranium oxide from the Canadian radium producer Eldorado Gold Mines Limited, allowing Fermi and Anderson to conduct fission experiments on a much larger scale. Fermi and Szilárd collaborated on the design of a device to achieve a self-sustaining nuclear reaction: a nuclear reactor. Owing to the rate of absorption of neutrons by the hydrogen in water, it was unlikely that a self-sustaining reaction could be achieved with natural uranium and water as a neutron moderator.
Fermi suggested, based on his work with neutrons, that the reaction could be achieved with uranium oxide blocks and graphite as a moderator instead of water. This would reduce the neutron capture rate, and in theory make a self-sustaining chain reaction possible. Szilárd came up with a workable design: a pile of uranium oxide blocks interspersed with graphite bricks. Szilárd, Anderson, and Fermi published a paper on "Neutron Production in Uranium", but their work habits and personalities were different, and Fermi had trouble working with Szilárd. Fermi was among the first to warn military leaders about the potential impact of nuclear energy, giving a lecture on the subject at the Navy Department on 18 March 1939. The response fell short of what he had hoped for, although the Navy agreed to provide $1,500 towards further research at Columbia. Later that year, Szilárd, Eugene Wigner, and Edward Teller sent the letter signed by Einstein to U.S. President Franklin D. Roosevelt, warning that Nazi Germany was likely to build an atomic bomb. In response, Roosevelt formed the Advisory Committee on Uranium to investigate the matter. The committee provided money for Fermi to buy graphite, and he built a pile of graphite bricks on the seventh floor of the Pupin Hall laboratory. By August 1941, he had six tons of uranium oxide and thirty tons of graphite, which he used to build a still larger pile in Schermerhorn Hall at Columbia. The S-1 Section of the Office of Scientific Research and Development, as the Advisory Committee on Uranium was now known, met on 18 December 1941; with the U.S. now engaged in World War II, its work had become urgent. Most of the effort sponsored by the committee had been directed at producing enriched uranium, but committee member Arthur Compton determined that a feasible alternative was plutonium, which could be mass-produced in nuclear reactors by the end of 1944. He decided to concentrate the plutonium work at the University of Chicago. Fermi reluctantly moved, and his team became part of the new Metallurgical Laboratory there. The possible results of a self-sustaining nuclear reaction were unknown, so it seemed inadvisable to build the first nuclear reactor on the University of Chicago campus in the middle of the city. Compton found a location in the Argonne Woods Forest Preserve, outside Chicago. Stone & Webster was contracted to develop the site, but the work was halted by an industrial dispute. Fermi then persuaded Compton that he could build the reactor in the squash court under the stands of the University of Chicago's Stagg Field. Construction of the pile began on 6 November 1942, and Chicago Pile-1 went critical on 2 December. The shape of the pile was intended to be roughly spherical, but as work proceeded Fermi calculated that criticality could be achieved without finishing the entire pile as planned. The experiment was a landmark in the quest for energy, and it was typical of Fermi's approach: every step was carefully planned, every calculation meticulously done. When the first self-sustained nuclear chain reaction was achieved, Compton made a coded phone call to James B. Conant, the chairman of the National Defense Research Committee. To continue the research where it would not pose a public health hazard, the reactor was disassembled and moved to the Argonne Woods site, where Fermi directed experiments on nuclear reactions, reveling in the opportunities provided by the reactor's abundant production of free neutrons.
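The criticality that CP-1 had to reach can be captured in a toy generation-by-generation model. The sketch below is purely illustrative; the multiplication factors and generation counts are invented for the example and are not measurements from the pile:

def neutron_population(k: float, n0: float = 1000.0, generations: int = 50) -> float:
    """Toy model: each generation, the free-neutron population is
    multiplied by the effective multiplication factor k.
    k < 1: subcritical (dies out); k = 1: critical (self-sustaining);
    k > 1: supercritical (grows)."""
    n = n0
    for _ in range(generations):
        n *= k
    return n

# Illustrative values only: a graphite pile hovers near k = 1, and
# control rods are inserted or withdrawn to nudge k across that line.
for k in (0.99, 1.00, 1.01):
    print(f"k = {k:.2f}: {neutron_population(k):8.1f} neutrons after 50 generations")
# k = 0.99:    605.0 neutrons after 50 generations
# k = 1.00:   1000.0 neutrons after 50 generations
# k = 1.01:   1644.6 neutrons after 50 generations

Moderating with graphite rather than water, as Fermi proposed, raises k by wasting fewer neutrons to capture; the design question is simply whether k can be pushed to 1 with natural uranium.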
The Argonne laboratory soon branched out from physics and engineering into using the reactor for biological and medical research. Initially, Argonne was run by Fermi as part of the University of Chicago, but it became a separate entity with Fermi as its director in May 1944. When the air-cooled X-10 Graphite Reactor at Oak Ridge went critical on 4 November 1943, Fermi was on hand just in case something went wrong. The technicians woke him early so that he could see it happen. Getting X-10 operational was another milestone in the plutonium project: it provided data on reactor design, trained DuPont staff in reactor operation, and produced the first small quantities of reactor-bred plutonium. Fermi became an American citizen in July 1944, the earliest date the law allowed. In September 1944, Fermi inserted the first uranium fuel slug into the B Reactor at the Hanford Site, the production reactor designed to breed plutonium in large quantities. Like X-10, it had been designed by Fermi's team at the Metallurgical Laboratory and built by DuPont, but it was much larger and was water-cooled. Over the next few days, 838 tubes were loaded, and the reactor went critical. Shortly after midnight on 27 September, the operators began to withdraw the control rods to initiate production. At first all appeared to be well, but around 03:00 the power level started to drop, and by 06:30 the reactor had shut down completely. The Army and DuPont turned to Fermi's team for answers. The cooling water was investigated to see if there was a leak or contamination. The next day the reactor suddenly started up again, only to shut down once more a few hours later. The problem was traced to neutron poisoning from xenon-135 (Xe-135), a fission product with a half-life of 9.1 to 9.4 hours. Fermi and John Wheeler both deduced that Xe-135 was responsible for absorbing neutrons in the reactor, thereby sabotaging the fission process. On the recommendation of his colleague Emilio Segrè, Fermi consulted Chien-Shiung Wu, who had prepared a draft on the topic for publication in the Physical Review. Upon reading the draft, Fermi and the other scientists confirmed their suspicions: Xe-135 did indeed absorb neutrons; in fact, it had a huge neutron-capture cross-section. DuPont had deviated from the Metallurgical Laboratory's original design, in which the reactor had 1,500 tubes arranged in a circle, and had added 504 tubes to fill in the corners. The scientists had originally considered this over-engineering a waste of time and money, but Fermi realised that if all 2,004 tubes were loaded, the reactor could reach the required power level and efficiently produce plutonium. In April 1943, Fermi had raised with Robert Oppenheimer the possibility of using the radioactive byproducts from enrichment to contaminate the German food supply. The background was fear that the German atomic bomb project was already at an advanced stage, and Fermi was at the time also skeptical that an atomic bomb could be developed quickly enough. Oppenheimer discussed the "promising" proposal with Edward Teller, who suggested the use of strontium-90. James B. Conant and Leslie Groves were also briefed, but Oppenheimer wanted to proceed with the plan only if enough food could be contaminated with the weapon to kill half a million people. In mid-1944, Oppenheimer persuaded Fermi to join his Project Y at Los Alamos, New Mexico.
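As a footnote to the xenon episode above, the shutdown-and-restart behaviour falls out of two coupled decay equations for iodine-135 and xenon-135. The following sketch is illustrative only: the production and burn-up rates are invented round numbers in arbitrary units, the half-lives are standard values, and nothing here is calibrated to the B Reactor:

import math

H = 3600.0                          # seconds per hour
LAM_I = math.log(2) / (6.6 * H)     # I-135 decay constant (t1/2 ~ 6.6 h)
LAM_XE = math.log(2) / (9.2 * H)    # Xe-135 decay constant (t1/2 ~ 9.2 h)

def simulate(hours, flux_on, i0=0.0, xe0=0.0, production=1.0, burnup=3.0, dt=60.0):
    """Euler-integrate the I-135 -> Xe-135 chain in arbitrary units.
    production: I-135 creation rate from fission while at power.
    burnup: Xe-135 destruction by neutron capture while at power,
    as a multiple of its decay constant (Xe-135 has an enormous
    capture cross-section)."""
    i, xe = i0, xe0
    p = production if flux_on else 0.0
    b = burnup * LAM_XE if flux_on else 0.0
    for _ in range(int(hours * H / dt)):
        di = p - LAM_I * i
        dxe = LAM_I * i - (LAM_XE + b) * xe
        i += di * dt
        xe += dxe * dt
    return i, xe

# At power, the xenon poison load climbs over several hours, eating
# up reactivity -- the reactor "dies", as at Hanford.
i, xe = simulate(12, flux_on=True)
print(f"after 12 h at power: Xe = {xe:.0f}")
# After shutdown the stored iodine keeps decaying into xenon, so the
# poison first rises further, then decays away, which is why the
# reactor could come back to life many hours later.
for step in range(1, 5):
    i, xe = simulate(6, flux_on=False, i0=i, xe0=xe)
    print(f"{6 * step} h after shutdown: Xe = {xe:.0f}")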
Arriving at Los Alamos in September 1944, Fermi was appointed an associate director of the laboratory, with broad responsibility for nuclear and theoretical physics, and was placed in charge of F Division, which was named after him. F Division had four branches: F-1 Super and General Theory under Teller, which investigated the "Super" (thermonuclear) bomb; F-2 Water Boiler under L. D. P. King, which looked after the "water boiler" aqueous homogeneous research reactor; F-3 Super Experimentation under Egon Bretscher; and F-4 Fission Studies under Anderson. Fermi observed the Trinity test on 16 July 1945 and conducted an experiment to estimate the bomb's yield by dropping strips of paper into the blast wave. He paced off the distance they were blown by the explosion and calculated the yield as ten kilotons of TNT; the actual yield was about 18.6 kilotons. Along with Oppenheimer, Compton, and Ernest Lawrence, Fermi was part of the scientific panel that advised the Interim Committee on target selection. The panel agreed with the committee that atomic bombs would be used without warning against an industrial target. Like others at the Los Alamos Laboratory, Fermi found out about the atomic bombings of Hiroshima and Nagasaki from the public address system in the technical area. Fermi did not believe that atomic bombs would deter nations from starting wars, nor did he think that the time was ripe for world government. He therefore did not join the Association of Los Alamos Scientists. Postwar work Fermi became the Charles H. Swift Distinguished Professor of Physics at the University of Chicago on 1 July 1945, although he did not depart the Los Alamos Laboratory with his family until 31 December 1945. He was elected a member of the U.S. National Academy of Sciences in 1945. The Metallurgical Laboratory became the Argonne National Laboratory on 1 July 1946, the first of the national laboratories established by the Manhattan Project. The short distance between Chicago and Argonne allowed Fermi to work at both places. At Argonne he continued experimental physics, investigating neutron scattering with Leona Marshall. He also discussed theoretical physics with Maria Mayer, helping her develop the insights into spin-orbit coupling that led to her nuclear shell model.
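As an aside on the Trinity yield estimate above: Fermi's paper-strip method relied on a conversion he had worked out in advance, but a closely related back-of-envelope route is G. I. Taylor's dimensional blast-wave argument. The sketch below is illustrative; the radius and time are the commonly quoted values read off the declassified Trinity photographs, not figures from this text, and the dimensionless constant is taken as 1:

# Taylor-Sedov blast-wave scaling: for a strong point explosion the
# fireball radius grows as R(t) ~ (E * t**2 / rho)**(1/5), so a single
# (R, t) pair gives an order-of-magnitude yield E ~ rho * R**5 / t**2.
RHO_AIR = 1.2       # kg/m^3, sea-level air density
R = 140.0           # m, fireball radius at t = 0.025 s (published photo)
T = 0.025           # s
KT_TNT = 4.184e12   # joules per kiloton of TNT

energy = RHO_AIR * R**5 / T**2
print(f"E ~ {energy:.2e} J ~ {energy / KT_TNT:.0f} kt of TNT")
# Prints roughly 1e14 J, i.e. ~25 kt: the same order of magnitude as
# the yield figures quoted above.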
These laws threatened Laura, who was Jewish, and put many of Fermi's research assistants out of work. During their time in Rome, Fermi and his group made important contributions to many practical and theoretical aspects of physics. In 1928, he published his Introduction to Atomic Physics (), which provided Italian university students with an up-to-date and accessible text. Fermi also conducted public lectures and wrote popular articles for scientists and teachers in order to spread knowledge of the new physics as widely as possible. Part of his teaching method was to gather his colleagues and graduate students together at the end of the day and go over a problem, often from his own research. A sign of success was that foreign students now began to come to Italy. The most notable of these was the German physicist Hans Bethe, who came to Rome as a Rockefeller Foundation fellow, and collaborated with Fermi on a 1932 paper "On the Interaction between Two Electrons" (). At this time, physicists were puzzled by beta decay, in which an electron was emitted from the atomic nucleus. To satisfy the law of conservation of energy, Pauli postulated the existence of an invisible particle with no charge and little or no mass that was also emitted at the same time. Fermi took up this idea, which he developed in a tentative paper in 1933, and then a longer paper the next year that incorporated the postulated particle, which Fermi called a "neutrino". His theory, later referred to as Fermi's interaction, and still later as the theory of the weak interaction, described one of the four fundamental forces of nature. The neutrino was detected after his death, and his interaction theory showed why it was so difficult to detect. When he submitted his paper to the British journal Nature, that journal's editor turned it down because it contained speculations which were "too remote from physical reality to be of interest to readers". Thus Fermi saw the theory published in Italian and German before it was published in English. In the introduction to the 1968 English translation, physicist Fred L. Wilson noted that: In January 1934, Irène Joliot-Curie and Frédéric Joliot announced that they had bombarded elements with alpha particles and induced radioactivity in them. By March, Fermi's assistant Gian-Carlo Wick had provided a theoretical explanation using Fermi's theory of beta decay. Fermi decided to switch to experimental physics, using the neutron, which James Chadwick had discovered in 1932. In March 1934, Fermi wanted to see if he could induce radioactivity with Rasetti's polonium-beryllium neutron source. Neutrons had no electric charge, and so would not be deflected by the positively charged nucleus. This meant that they needed much less energy to penetrate the nucleus than charged particles, and so would not require a particle accelerator, which the Via Panisperna boys did not have. Fermi had the idea to resort to replacing the polonium-beryllium neutron source with a radon-beryllium one, which he created by filling a glass bulb with beryllium powder, evacuating the air, and then adding 50 mCi of radon gas, supplied by Giulio Cesare Trabacchi. This created a much stronger neutron source, the effectiveness of which declined with the 3.8-day half-life of radon. He knew that this source would also emit gamma rays, but, on the basis of his theory, he believed that this would not affect the results of the experiment. 
He started by bombarding platinum, an element with a high atomic number that was readily available, without success. He turned to aluminium, which emitted an alpha particle and produced sodium, which then decayed into magnesium by beta particle emission. He tried lead, without success, and then fluorine in the form of calcium fluoride, which emitted an alpha particle and produced nitrogen, decaying into oxygen by beta particle emission. In all, he induced radioactivity in 22 different elements. Fermi rapidly reported the discovery of neutron-induced radioactivity in the Italian journal La Ricerca Scientifica on 25 March 1934. The natural radioactivity of thorium and uranium made it hard to determine what was happening when these elements were bombarded with neutrons but, after correctly eliminating the presence of elements lighter than uranium but heavier than lead, Fermi concluded that they had created new elements, which he called hesperium and ausonium. The chemist Ida Noddack suggested that some of the experiments could have produced lighter elements than lead rather than new, heavier elements. Her suggestion was not taken seriously at the time because her team had not carried out any experiments with uranium or built the theoretical basis for this possibility. At that time, fission was thought to be improbable if not impossible on theoretical grounds. While physicists expected elements with higher atomic numbers to form from neutron bombardment of lighter elements, nobody expected neutrons to have enough energy to split a heavier atom into two light element fragments in the manner that Noddack suggested. The Via Panisperna boys also noticed some unexplained effects. The experiment seemed to work better on a wooden table than a marble tabletop. Fermi remembered that Joliot-Curie and Chadwick had noted that paraffin wax was effective at slowing neutrons, so he decided to try that. When neutrons were passed through paraffin wax, they induced a hundred times as much radioactivity in silver compared with when it was bombarded without the paraffin. Fermi guessed that this was due to the hydrogen atoms in the paraffin. Those in wood similarly explained the difference between the wooden and the marble tabletops. This was confirmed by repeating the effect with water. He concluded that collisions with hydrogen atoms slowed the neutrons. The lower the atomic number of the nucleus it collides with, the more energy a neutron loses per collision, and therefore the fewer collisions that are required to slow a neutron down by a given amount. Fermi realised that this induced more radioactivity because slow neutrons were more easily captured than fast ones. He developed a diffusion equation to describe this, which became known as the Fermi age equation. In 1938, Fermi received the Nobel Prize in Physics at the age of 37 for his "demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons". After Fermi received the prize in Stockholm, he did not return home to Italy but rather continued to New York City with his family in December 1938, where they applied for permanent residency. The decision to move to America and become U.S. citizens was due primarily to the racial laws in Italy. Manhattan Project Fermi arrived in New York City on 2 January 1939. 
He was immediately offered positions at five universities, and accepted one at Columbia University, where he had already given summer lectures in 1936. He received the news that in December 1938, the German chemists Otto Hahn and Fritz Strassmann had detected the element barium after bombarding uranium with neutrons, which Lise Meitner and her nephew Otto Frisch correctly interpreted as the result of nuclear fission. Frisch confirmed this experimentally on 13 January 1939. The news of Meitner and Frisch's interpretation of Hahn and Strassmann's discovery crossed the Atlantic with Niels Bohr, who was to lecture at Princeton University. Isidor Isaac Rabi and Willis Lamb, two Columbia University physicists working at Princeton, found out about it and carried it back to Columbia. Rabi said he told Enrico Fermi, but Fermi later gave the credit to Lamb: Noddack was proven right after all. Fermi had dismissed the possibility of fission on the basis of his calculations, but he had not taken into account the binding energy that would appear when a nuclide with an odd number of neutrons absorbed an extra neutron. For Fermi, the news came as a profound embarrassment, as the transuranic elements that he had partly been awarded the Nobel Prize for discovering had not been transuranic elements at all, but fission products. He added a footnote to this effect to his Nobel Prize acceptance speech. The scientists at Columbia decided that they should try to detect the energy released in the nuclear fission of uranium when bombarded by neutrons. On 25 January 1939, in the basement of Pupin Hall at Columbia, an experimental team including Fermi conducted the first nuclear fission experiment in the United States. The other members of the team were Herbert L. Anderson, Eugene T. Booth, John R. Dunning, G. Norris Glasoe, and Francis G. Slack. The next day, the Fifth Washington Conference on Theoretical Physics began in Washington, D.C. under the joint auspices of George Washington University and the Carnegie Institution of Washington. There, the news on nuclear fission was spread even further, fostering many more experimental demonstrations. French scientists Hans von Halban, Lew Kowarski, and Frédéric Joliot-Curie had demonstrated that uranium bombarded by neutrons emitted more neutrons than it absorbed, suggesting the possibility of a chain reaction. Fermi and Anderson did so too a few weeks later. Leó Szilárd obtained of uranium oxide from Canadian radium producer Eldorado Gold Mines Limited, allowing Fermi and Anderson to conduct experiments with fission on a much larger scale. Fermi and Szilárd collaborated on a design of a device to achieve a self-sustaining nuclear reaction—a nuclear reactor. Owing to the rate of absorption of neutrons by the hydrogen in water, it was unlikely that a self-sustaining reaction could be achieved with natural uranium and water as a neutron moderator. Fermi suggested, based on his work with neutrons, that the reaction could be achieved with uranium oxide blocks and graphite as a moderator instead of water. This would reduce the neutron capture rate, and in theory make a self-sustaining chain reaction possible. Szilárd came up with a workable design: a pile of uranium oxide blocks interspersed with graphite bricks. Szilárd, Anderson, and Fermi published a paper on "Neutron Production in Uranium". But their work habits and personalities were different, and Fermi had trouble working with Szilárd. 
Fermi was among the first to warn military leaders about the potential impact of nuclear energy, giving a lecture on the subject at the Navy Department on 18 March 1939. The response fell short of what he had hoped for, although the Navy agreed to provide $1,500 towards further research at Columbia. Later that year, Szilárd, Eugene Wigner, and Edward Teller sent the letter signed by Einstein to U.S. President Franklin D. Roosevelt, warning that Nazi Germany was likely to build an atomic bomb. In response, Roosevelt formed the Advisory Committee on Uranium to investigate the matter. The Advisory Committee on Uranium provided money for Fermi to buy graphite, and he built a pile of graphite bricks on the seventh floor of the Pupin Hall laboratory. By August 1941, he had six tons of uranium oxide and thirty tons of graphite, which he used to build a still larger pile in Schermerhorn Hall at Columbia. The S-1 Section of the Office of Scientific Research and Development, as the Advisory Committee on Uranium was now known, met on 18 December 1941, with the U.S. now engaged in World War II, making its work urgent. Most of the effort sponsored by the committee had been directed at producing enriched uranium, but Committee member Arthur Compton determined that a feasible alternative was plutonium, which could be mass-produced in nuclear reactors by the end of 1944. He decided to concentrate the plutonium work at the University of Chicago. Fermi reluctantly moved, and his team became part of the new Metallurgical Laboratory there. The possible results of a self-sustaining nuclear reaction were unknown, so it seemed inadvisable to build the first nuclear reactor on the University of Chicago campus in the middle of the city. Compton found a location in the Argonne Woods Forest Preserve, about from Chicago. Stone & Webster was contracted to develop the site, but the work was halted by an industrial dispute. Fermi then persuaded Compton that he could build the reactor in the squash court under the stands of the University of Chicago's Stagg Field. Construction of the pile began on 6 November 1942, and Chicago Pile-1 went critical on 2 December. The shape of the pile was intended to be roughly spherical, but as work proceeded Fermi calculated that criticality could be achieved without finishing the entire pile as planned. This experiment was a landmark in the quest for energy, and it was typical of Fermi's approach. Every step was carefully planned, every calculation was meticulously done. When the first self-sustained nuclear chain reaction was achieved, Compton made a coded phone call to James B. Conant, the chairman of the National Defense Research Committee. To continue the research where it would not pose a public health hazard, the reactor was disassembled and moved to the Argonne Woods site. There Fermi directed experiments on nuclear reactions, reveling in the opportunities provided by the reactor's abundant production of free neutrons. The laboratory soon branched out from physics and engineering into using the reactor for biological and medical research. Initially, Argonne was run by Fermi as part of the University of Chicago, but it became a separate entity with Fermi as its director in May 1944. When the air-cooled X-10 Graphite Reactor at Oak Ridge went critical on 4 November 1943, Fermi was on hand just in case something went wrong. The technicians woke him early so that he could see it happen. Getting X-10 operational was another milestone in the plutonium project. 
It provided data on reactor design, training for DuPont staff in reactor operation, and produced the first small quantities of reactor-bred plutonium. Fermi became an American citizen in July 1944, the earliest date the law allowed.

In September 1944, Fermi inserted the first uranium fuel slug into the B Reactor at the Hanford Site, the production reactor designed to breed plutonium in large quantities. Like X-10, it had been designed by Fermi's team at the Metallurgical Laboratory and built by DuPont, but it was much larger and was water-cooled. Over the next few days, 838 tubes were loaded, and the reactor went critical. Shortly after midnight on 27 September, the operators began to withdraw the control rods to initiate production. At first all appeared to be well, but around 03:00 the power level started to drop, and by 06:30 the reactor had shut down completely. The Army and DuPont turned to Fermi's team for answers. The cooling water was investigated to see if there was a leak or contamination. The next day the reactor suddenly started up again, only to shut down once more a few hours later. The problem was traced to neutron poisoning by xenon-135 (Xe-135), a fission product with a half-life of 9.1 to 9.4 hours. Fermi and John Wheeler both deduced that Xe-135 was absorbing neutrons in the reactor, thereby sabotaging the fission process. On the recommendation of his colleague Emilio Segrè, Fermi consulted Chien-Shiung Wu, who had prepared a draft paper on the topic for publication in the Physical Review. Reading the draft confirmed the scientists' suspicions: Xe-135 did indeed absorb neutrons, and with an enormous neutron-capture cross-section. DuPont had deviated from the Metallurgical Laboratory's original design, in which the reactor had 1,500 tubes arranged in a circle, by adding 504 tubes to fill in the corners. The scientists had originally considered this over-engineering a waste of time and money, but Fermi realized that if all 2,004 tubes were loaded, the reactor could reach the required power level and efficiently produce plutonium.

In April 1943, Fermi had raised with Robert Oppenheimer the possibility of using the radioactive byproducts from enrichment to contaminate the German food supply. The background was fear that the German atomic bomb project was already at an advanced stage, and Fermi was also skeptical at the time that an atomic bomb could be developed quickly enough. Oppenheimer discussed the "promising" proposal with Edward Teller, who suggested the use of strontium-90. James B. Conant and Leslie Groves were also briefed, but Oppenheimer wanted to proceed with the plan only if enough food could be contaminated with the weapon to kill half a million people.

In mid-1944, Oppenheimer persuaded Fermi to join his Project Y at Los Alamos, New Mexico. Arriving in September, Fermi was appointed an associate director of the laboratory, with broad responsibility for nuclear and theoretical physics, and was placed in charge of F Division, which was named after him. F Division had four branches: F-1 Super and General Theory under Teller, which investigated the "Super" (thermonuclear) bomb; F-2 Water Boiler under L. D. P. King, which looked after the "water boiler" aqueous homogeneous research reactor; F-3 Super Experimentation under Egon Bretscher; and F-4 Fission Studies under Anderson. Fermi observed the Trinity test on 16 July 1945 and conducted an impromptu experiment to estimate the bomb's yield by dropping strips of paper into the blast wave; from the distance they were blown, he estimated a yield of about ten kilotons of TNT.
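The hours-long shutdown-and-restart cycle at Hanford described above follows directly from the decay chain feeding Xe-135: iodine-135 (half-life about 6.6 hours) keeps producing xenon after shutdown, and the xenon then decays away with the roughly 9-hour half-life quoted in the text. A minimal sketch of that transient, using illustrative relative inventories and ignoring neutron burnup after shutdown (both assumptions, not Hanford data):

```python
import math

# Decay constants (per hour) from the half-lives discussed above.
LAMBDA_I = math.log(2) / 6.6    # iodine-135, which feeds Xe-135 as it decays
LAMBDA_XE = math.log(2) / 9.1   # xenon-135 half-life quoted in the text

def xe135(t: float, i0: float = 1.0, xe0: float = 0.3) -> float:
    """Relative Xe-135 inventory t hours after shutdown (Bateman solution).

    i0 and xe0 are illustrative relative inventories at shutdown,
    not measurements from the B Reactor.
    """
    from_iodine = (LAMBDA_I * i0 / (LAMBDA_XE - LAMBDA_I)) * (
        math.exp(-LAMBDA_I * t) - math.exp(-LAMBDA_XE * t)
    )
    return xe0 * math.exp(-LAMBDA_XE * t) + from_iodine

for t in (0, 5, 10, 20, 40):
    print(f"{t:2d} h after shutdown: relative Xe-135 = {xe135(t):.2f}")
# The poison peaks hours after shutdown and then decays away, which is
# why the reactor could come back to life on its own the next day.
```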
between the Russian Empire, the French Third Republic and Great Britain, built upon the Franco-Russian Alliance (1894), the Entente Cordiale (1904), and the Anglo-Russian Entente (1907)
Allies of World War I, sometimes referred to as "The Entente", "The Entente Powers", or "The Entente Forces"
Little Entente (1920–1938), between Czechoslovakia, Romania, and the Kingdom of Yugoslavia
Balkan Entente (1934–1938), between Greece, Turkey, Romania and Yugoslavia
Baltic Entente (1934–1939), between Lithuania, Latvia, and Estonia
Conseil de l'Entente (1959), between Côte d'Ivoire, Burkina Faso, Benin, Niger, and (in 1966) Togo
Entente frugale (beginning 2010), cooperation between the British and French governments

Other
was written on much more capable machines with faster displays, so they could have "funny commands with the screen shimmering and all that, and meanwhile, I'm sitting at home in sort of World War II surplus housing at Berkeley with a modem and a terminal that can just barely get the cursor off the bottom line". In addition to Emacs and vi workalikes, pico and its free and open-source clone nano and other text editors such as ne often have their own third-party advocates in the editor wars, though not to the extent of Emacs and vi. Both Emacs and vi can lay claim to being among the longest-lived application programs of all time, as well as being the two most commonly used text editors on Linux and Unix. Many operating systems, especially Linux and BSD derivatives, bundle multiple text editors with the operating system to cater to user demand. For example, a default installation of macOS contains ed, nano, TextEdit, and Vim. Frequently, at some point in the discussion, someone will point out that ed is the standard text editor.

Humor

The Church of Emacs, formed by Emacs and the GNU Project's creator Richard Stallman, is a parody religion. While it refers to vi as the "editor of the beast" (vi-vi-vi being 6-6-6 in Roman numerals), it does not oppose the use of vi; rather, it calls proprietary software anathema. ("Using a free version of vi is not a sin but a penance.") The Church of Emacs has its own newsgroup, alt.religion.emacs, that has posts purporting to support this belief system. Stallman has referred to himself as St IGNU−cius, a saint in the Church of Emacs. Supporters of vi have created an opposing Cult of vi, argued by the more hard-line Emacs users to be an attempt to "ape their betters".

Regarding vi's modal nature (a common point of frustration for new users), some Emacs users joke that vi has two modes: "beep repeatedly" and "break everything". vi users enjoy joking that Emacs's key sequences induce carpal tunnel syndrome, or mentioning one of many satirical expansions of the acronym EMACS, such as "Escape Meta Alt Control Shift" (a jab at Emacs's reliance on modifier keys), "Eight Megabytes And Constantly Swapping" (from a time when that was a great amount of memory), "EMACS Makes Any Computer Slow" (a recursive acronym like those Stallman uses), or "Eventually Munches All Computer Storage", in reference to Emacs's high system resource requirements. GNU EMACS has been expanded to "Generally Not Used, Except by Middle-Aged Computer Scientists", referencing its most ardent fans and its declining usage among younger programmers compared to more graphically oriented editors such as Atom, BBEdit, Sublime Text, TextMate, and Visual Studio Code. As a poke at Emacs's creeping featurism, vi advocates have been known to describe Emacs as "a great operating system, lacking only a decent editor". Emacs advocates have been known to respond that the editor is actually very good, but the operating system could use improvement (referring to Emacs's famous lack of concurrency, which has since been added).

A game among UNIX users, either to test the depth of an Emacs user's understanding of the editor or to poke fun at the complexity of Emacs, involved predicting what would happen if a user held down a modifier key (such as Control or Meta) and typed their own name. This game humor originated with users of the older TECO editor, which was the implementation basis, via macros, of the original Emacs. Due to how one exits vi (":q", among others), hackers joke about a proposed method of creating a pseudorandom character sequence: seat a user unfamiliar with vi in front of an open editor and ask them to exit the program. The Google search engine also joined in on the joke, with searches for vi prompting the question "Did you mean: emacs" at the top of the page, and searches for emacs prompting "Did you mean: vi".

See also

Browser wars
Comparison of text editors
In the 5th century, Oriental Orthodoxy separated from Chalcedonian Christianity (and is therefore separate from both the Eastern Orthodox and Catholic Church), well before the 11th-century Great Schism. It should not be confused with Eastern Orthodoxy.

Jurisdictions

Autocephalous Eastern Orthodox churches

Ranked in order of seniority, with the year of independence (autocephaly) given in parentheses, where applicable. There are a total of 16 autocephalous Eastern Orthodox churches, recognised at varying levels among the communion of the Eastern Orthodox Church.

Four ancient patriarchates

Ecumenical Patriarchate of Constantinople (independence in 330 AD, elevated to the rank of autocephalous Patriarchate in 381)
Greek Orthodox Church of Alexandria
Greek Orthodox Church of Antioch
Greek Orthodox Church of Jerusalem (independence in 451 AD, elevated to the rank of autocephalous Patriarchate in 451)

Those four ancient Eastern Orthodox Patriarchates are of the five episcopal sees forming the historical Pentarchy, the fifth one being the See of Rome, and they remained in communion with each other after the 1054 schism with Rome. Of note, the title of "Patriarch" was created in 531 by Justinian.

Junior patriarchates

Bulgarian Orthodox Church (870, Patriarchate since 918/919, recognised by the Patriarchate of Constantinople in 927)
Georgian Orthodox Church
Serbian Orthodox Church
Russian Orthodox Church (1448, recognised in 1589)
Romanian Orthodox Church (1872, recognised in 1885, Patriarchate since 1925)

Autocephalous archbishoprics

Church of Cyprus (recognised in 431)
Church of Greece (1833, recognised in 1850)
Albanian Orthodox Church (1922, recognised in 1937)

Autocephalous metropolises

Polish Orthodox Church (1924)
Orthodox Church of the Czech Lands and Slovakia (1951)
Orthodox Church in America (1970, not recognised by the Ecumenical Patriarchate, but recognised by the Russian Orthodox Church and 5 other churches)
Orthodox Church of Ukraine (autocephaly from 15 December 2018, recognised by the Ecumenical Patriarchate on 5 January 2019, by the Church of Greece on 12 October 2019, by the Patriarchate of Alexandria on 8 November 2019, and by the Church of Cyprus on 24 October 2020)

The four ancient patriarchates are the most senior, followed by the five junior patriarchates. Autocephalous archbishoprics follow the patriarchates in seniority, with the Church of Cyprus being the only ancient one (AD 431). In the diptychs of the Russian Orthodox Church and some of its daughter churches (e.g., the Orthodox Church in America), the ranking of the five junior patriarchal churches is different: following the Russian Church in rank is the Georgian, followed by the Serbian, the Romanian, and then the Bulgarian Church. The ranking of the archbishoprics is the same.

Autonomous Eastern Orthodox churches

Under the Ecumenical Patriarchate of Constantinople:
Monastic community of Mount Athos
Estonian Apostolic Orthodox Church (autonomy recognised by the Ecumenical Patriarchate but not by the Russian Orthodox Church)
Orthodox Church of Finland

Under the Greek Orthodox Church of Antioch:
Antiochian Orthodox Christian Archdiocese of North America

Under the Greek Orthodox Church of Jerusalem:
Orthodox Church of Mount Sinai

Under the Russian Orthodox Church:
Belarusian Orthodox Church
Latvian Orthodox Church
Ukrainian Orthodox Church (Moscow Patriarchate) (autonomy recognised by the Russian Orthodox Church but no longer recognised as such by the Ecumenical Patriarchate since October 2018, by the Patriarchate of Alexandria since November 2019, by the Church of Greece since August 2019, and by the Church of Cyprus since October 2020)
Metropolis of Chișinău and All Moldova
Orthodox Church in Japan (autonomy recognised by the Russian Orthodox Church but not by the Ecumenical Patriarchate)
Chinese Orthodox Church (autonomy recognised by the Russian Orthodox Church but not by the Ecumenical Patriarchate)

Under the Serbian Orthodox Church:
Orthodox Ohrid Archbishopric

Under the Romanian Orthodox Church:
Metropolis of Bessarabia
Romanian Orthodox Metropolis of the Americas

Semi-autonomous churches

Under the Ecumenical Patriarchate of Constantinople:
Church of Crete

Under the Russian Orthodox Church:
Estonian Orthodox Church of the Moscow Patriarchate
Russian Orthodox Church Outside Russia

Limited self-government (not autonomy)

Under the Ecumenical Patriarchate of Constantinople:
Greek Orthodox Archdiocese of Italy and Malta
Korean Orthodox Church
Exarchate of the Philippines
American Carpatho-Russian Orthodox Diocese
Ukrainian Orthodox Church of Canada
Ukrainian Orthodox Church of the USA

Under the Russian Orthodox Church:
Archdiocese of Russian Orthodox churches in Western Europe

Under the Romanian Orthodox Church:
Ukrainian Orthodox Vicariate Sighetu Marmației

Unrecognised churches

True Orthodox

True Orthodox Christians are groups of traditionalist Eastern Orthodox churches which have severed communion since the 1920s with the mainstream Eastern Orthodox churches for various reasons, such as calendar reform, the involvement of mainstream Eastern Orthodoxy in ecumenism, or refusal to submit to the authority of the mainstream Eastern Orthodox Church. The True Orthodox Church in the Soviet Union was also called the Catacomb Church; the True Orthodox in Romania, Bulgaria, Greece and Cyprus are also called Old Calendarists. These groups refrain from concelebration of the Divine Liturgy with the mainstream Eastern Orthodox, while maintaining that they remain fully within the canonical boundaries of the Church: i.e., professing Eastern Orthodox belief, retaining legitimate apostolic succession, and existing in communities with historical continuity. The churches which follow True Orthodoxy are:

Old Calendarists (numerous groups)
Serbian True Orthodox Church
Russian True Orthodox Church (Lazar Zhurbenko)
Russian Orthodox Autonomous Church
True Orthodox Metropolis of Germany and Europe

Old Believers

Old Believers are divided into various churches which recognize neither each other nor the mainstream Eastern Orthodox Church.

Churches that are not recognised despite wanting to be

The following churches recognise all other mainstream Eastern Orthodox churches, but are not recognised by any of them due to various disputes:

Abkhazian Orthodox Church
American Orthodox Catholic Church
Belarusian Autocephalous Orthodox Church
Macedonian Orthodox Church – Ohrid Archbishopric
Montenegrin Orthodox Church
Ukrainian Orthodox Church – Kyiv Patriarchate
Turkish Orthodox Church

Churches that are neither recognised nor fully Eastern Orthodox

The following churches use the term "Orthodox" in their names and carry the beliefs or traditions of the Eastern Orthodox church, but blend in beliefs and traditions from other denominations outside of Eastern Orthodoxy:

Evangelical Orthodox Church (blends with Protestant, i.e. Evangelical and Charismatic, elements)
Orthodox-Catholic Church of America (blends with Catholic and Oriental Orthodox elements)
Nordic Catholic Church in Italy (originally called the Orthodox Church in Italy, it had ties with the UOC-KP; now associates with the Nordic Catholic Church and the Union of Scranton)
Lusitanian Catholic Orthodox Church (blends with Catholic elements)
Communion of Western Orthodox Churches (blends with Oriental Orthodox elements):
Celtic Orthodox Church
French Orthodox Church
Orthodox Church of the Gauls

See also

Hierarchy of the Catholic Church
Catholic Church by country
List of Lutheran dioceses and archdioceses
1,2-Ethanedithiol, compound commonly used for cleavage during peptide synthesis
EDT (Digital), text editor for PDP-11 and VAX/VMS computer systems
EDT (Univac), text editor for UNIVAC Series 90 and Fujitsu BS2000 computer systems
EDT Hub, electronic document transmission software
Electrodynamic tether, a spacecraft component
Event dispatching thread, in Java

Time zones

Australian Eastern Daylight Time (GMT+11)
Eastern Daylight Time (GMT−4), in North America

Other uses

Chicago Engineering Design Team, a robotics team
Eau de toilette
Electronic Disturbance Theater, an electronic civil disobedience group
melody lines, melodic instrumental fill passages, and solos. In a small group, such as a power trio, one guitarist switches between both roles. In large rock and metal bands, there is often a rhythm guitarist and a lead guitarist.

History

Many experiments with electrically amplifying the vibrations of a string instrument were made dating back to the early part of the 20th century. Patents from the 1910s show telephone transmitters were adapted and placed inside violins and banjos to amplify the sound. Hobbyists in the 1920s used carbon button microphones attached to the bridge; however, these detected vibrations from the bridge on top of the instrument, resulting in a weak signal. Electric guitars were originally designed by acoustic guitar makers and instrument manufacturers. The demand for amplified guitars began during the big band era: as orchestras increased in size, guitar players soon realized the necessity of guitar amplification and electrification. The first electric guitars used in jazz were hollow archtop acoustic guitar bodies with electromagnetic transducers. The first electrically amplified stringed instrument to be marketed commercially was a cast aluminium lap steel guitar nicknamed the "Frying Pan", designed in 1931 by George Beauchamp, the general manager of the National Guitar Corporation, with Paul Barth, who was vice president. Beauchamp, along with Adolph Rickenbacker, invented the electromagnetic pickup: coils wrapped around a magnet create a magnetic field, and a vibrating steel string disturbs that field, inducing in the coils an electrical signal that can then be amplified. Commercial production began in late summer of 1932 by the Ro-Pat-In Corporation (Electro-Patent-Instrument Company) in Los Angeles, a partnership of Beauchamp, Adolph Rickenbacker (originally Rickenbacher), and Paul Barth. In 1934, the company was renamed the Rickenbacker Electro Stringed Instrument Company. In that year Beauchamp applied for a United States patent for an Electrical Stringed Musical Instrument, and the patent was issued in 1937. By the time it was patented, other manufacturers were already making their own electric guitar designs. Early electric guitar manufacturers include Rickenbacker in 1932; Dobro in 1933; National, AudioVox and Volu-tone in 1934; Vega, Epiphone (Electrophone and Electar), and Gibson in 1935; and many others by 1936.

By early to mid-1935, Electro String Instrument Corporation had achieved success with the "Frying Pan" and set out to capture a new audience through the release of the Electro-Spanish Model B and the Electro-Spanish Ken Roberts, the first full 25-inch-scale electric guitar ever produced. The Electro-Spanish Ken Roberts was revolutionary for its time, providing players a full 25-inch scale with easy access to 17 frets free of the body. Unlike the lap-steel electrified instruments produced during the period, the Electro-Spanish Ken Roberts was designed to be played standing upright, with the guitar on a strap, as with acoustic guitars. It was also the first instrument to feature a hand-operated vibrato as a standard appointment, a device called the "Vibrola", invented by Doc Kauffman. It is estimated that fewer than 50 Electro-Spanish Ken Roberts guitars were constructed between 1933 and 1937; fewer than 10 are known to survive today.

The solid-body electric guitar is made of solid wood, without functionally resonating air spaces. The first solid-body Spanish standard guitar was offered by Vivi-Tone no later than 1934. This model featured a guitar-shaped body of a single sheet of plywood affixed to a wood frame. Another early, substantially solid Spanish electric guitar, called the Electro Spanish, was marketed by the Rickenbacker guitar company in 1935 and made of Bakelite. By 1936, the Slingerland company introduced a wooden solid-body electric model, the Slingerland Songster 401 (and a lap steel counterpart, the Songster 400).

Gibson's first production electric guitar, marketed in 1936, was the ES-150 model ("ES" for "Electric Spanish", and "150" reflecting the $150 price of the instrument, along with matching amplifier). The ES-150 featured a single-coil, hexagonally shaped "bar" pickup designed by Walt Fuller. It became known as the "Charlie Christian" pickup, named for the great jazz guitarist who was among the first to perform with the ES-150. The ES-150 achieved some popularity but suffered from unequal loudness across the six strings.

A functioning solid-body electric guitar was designed and built in 1940 by Les Paul from an Epiphone acoustic archtop as an experiment. His "log guitar", a wood post with a neck attached and two hollow-body halves attached to the sides for appearance only, shares nothing in design or hardware with the solid-body Gibson Les Paul, designed by Ted McCarty and introduced in 1952. The feedback associated with amplified hollow-bodied electric guitars was understood long before Paul's "log" was created in 1940; Gage Brewer's Ro-Pat-In of 1932 had a top so heavily reinforced that it essentially functioned as a solid-body instrument.

Types

Solid-body

Unlike acoustic guitars, solid-body electric guitars have no vibrating soundboard to amplify string vibration. Instead, solid-body instruments depend on electric pickups and an amplifier (or amp) and speaker.
The solid body ensures that the amplified sound reproduces the string vibration alone, thus avoiding the wolf tones and unwanted feedback associated with amplified acoustic guitars. These guitars are generally made of hardwood covered with a hard polymer finish, often polyester or lacquer. In large production facilities, the wood is stored for three to six months in a wood-drying kiln before being cut to shape. Premium custom-built guitars are frequently made with much older, hand-selected wood.

One of the first solid-body guitars was invented by Les Paul, though Gibson did not present its Les Paul guitar prototypes to the public, as it did not believe the solid-body style would catch on. Another early solid-body Spanish-style guitar, resembling what would become Gibson's Les Paul guitar a decade later, was developed in 1941 by O. W. Appleton of Nogales, Arizona. Appleton made contact with both Gibson and Fender but was unable to sell the idea behind his "App" guitar to either company. In 1946, Merle Travis commissioned steel guitar builder Paul Bigsby to build him a solid-body Spanish-style electric, which Bigsby delivered in 1948. The first mass-produced solid-body guitars were the Fender Esquire and Fender Broadcaster (later to become the Fender Telecaster), first made in 1948, five years after Les Paul made his prototype. The Gibson Les Paul appeared soon after to compete with the Broadcaster. Another notable solid-body design is the Fender Stratocaster, introduced in 1954, which became extremely popular among musicians in the 1960s and 1970s for its wide tonal capabilities and more comfortable ergonomics than other models. Different models differ in pickup configuration, the main layouts being two or three single-coil pickups or a pair of humbuckers; the Stratocaster is a triple single-coil guitar.

The history of electric guitars is summarized by Guitar World magazine: the earliest electric guitar on its top-10 list is the Ro-Pat-In Electro A-25 "Frying Pan" (1932), described as "the first fully functioning solid-body electric guitar to be manufactured and sold", while the most recent is the Ibanez Jem (1987), which featured "24 frets" and "an impossibly thin neck" and was "designed to be the ultimate shredder machine". Numerous other important electric guitars are on the list, including the Gibson ES-150 (1936), Fender Telecaster (1951), Gibson Les Paul (1952), Gretsch 6128 Duo Jet (1953), Fender Stratocaster (1954), Rickenbacker 360/12 (1964), Van Halen Frankenstrat (1975), and Paul Reed Smith Custom (1985); many of these guitars were "successors" to earlier designs. Electric guitar designs eventually became culturally important and visually iconic, with various model companies selling miniature versions of particularly famous electric guitars, for example the Gibson SG used by Angus Young of AC/DC.

Chambered-body

Some solid-bodied guitars, such as the Gibson Les Paul Supreme, the PRS Singlecut, and the Fender Telecaster Thinline, are built with hollow chambers in the body. These chambers are designed not to interfere with the critical bridge and string anchor point on the solid body. In the case of Gibson and PRS, these are called chambered bodies. The motivation may be to reduce weight, to achieve a semi-acoustic tone (see below), or both.

Semi-acoustic

Semi-acoustic guitars have a hollow body (similar in depth to a solid-body guitar) and electronic pickups mounted on the body.
They work in a similar way to solid-body electric guitars, except that, because the hollow body also vibrates, the pickups convert a combination of string and body vibration into an electrical signal. Whereas chambered guitars are made, like solid-body guitars, from a single block of wood, the bodies of semi-acoustic and full-hollow-body guitars are made from thin sheets of wood. They do not provide enough acoustic volume for live performance, but they can be used unplugged for quiet practice. Semi-acoustics are noted for being able to provide a sweet, plaintive, or funky tone. They are used in many genres, including blues, funk, sixties pop, and indie rock. They generally have cello-style f-shaped sound holes, which can be blocked off to prevent feedback. Feedback can also be reduced by building them with a solid block in the middle of the soundbox.

Full hollow-body

Full hollow-body guitars have large, deep bodies made of glued-together sheets, or "plates", of wood. They can often be played at the same volume as an acoustic guitar and therefore can be used unplugged at intimate gigs. They qualify as electric guitars inasmuch as they have fitted pickups. Historically, archtop guitars with retrofitted pickups were among the very earliest electric guitars. The instrument originated during the Jazz Age, in the 1920s and 1930s, and such guitars are still considered the classic jazz guitar (nicknamed "jazzbox"). Like semi-acoustic guitars, they often have f-shaped sound holes. Having humbucker pickups (sometimes just a neck pickup) and usually strung heavily, jazz boxes are noted for their warm, rich tone. A variation with single-coil pickups, and sometimes with a Bigsby tremolo, has long been popular in country and rockabilly; it has a distinctly more twangy, biting tone than the classic jazzbox. The term archtop refers to a method of construction subtly different from that of the typical acoustic (or "folk", "western", or "steel-string") guitar: the top is formed from a moderately thick piece of wood, which is then carved into a thin, domed shape, whereas conventional acoustic guitars have a thin, flat top.

Electric acoustic

Some steel-string acoustic guitars are fitted with pickups purely as an alternative to using a separate microphone. They may also be fitted with a piezoelectric pickup under the bridge, attached to the bridge mounting plate, or with a low-mass microphone (usually a condenser mic) inside the body of the guitar that converts the vibrations in the body into electronic signals. Combinations of these types of pickups may be used, with an integral mixer/preamp/graphic equalizer. Such instruments are called electric acoustic guitars. They are regarded as acoustic guitars rather than electric guitars because the pickups do not produce a signal directly from the vibration of the strings, but rather from the vibration of the guitar top or body. Electric acoustic guitars should not be confused with semi-acoustic guitars, which have pickups of the type found on solid-body electric guitars, or with solid-body hybrid guitars with piezoelectric pickups.

Construction

Electric guitar design and construction vary greatly in the shape of the body and the configuration of the neck, bridge, and pickups. However, some features are present on most guitars. The photo below shows the different parts of an electric guitar. The headstock (1) contains the metal machine heads (1.1), which use a worm gear for tuning.
The nut (1.4), a thin fret-like strip of metal, plastic, graphite, or bone, supports the strings at the headstock end of the instrument. The frets (2.3) are thin metal strips that stop the string at the correct pitch when the player pushes a string against the fingerboard. The truss rod (1.2) is a metal rod (usually adjustable) that counters the tension of the strings to keep the neck straight. Position markers (2.2) provide the player with a reference to the playing position on the fingerboard. The neck and fretboard (2.1) extend from the body. At the neck joint (2.4), the neck is either glued or bolted to the body. The body (3) is typically made of wood with a hard, polymerized finish. Strings vibrating in the magnetic field of the pickups (3.1, 3.2) produce an electric current in the pickup winding that passes through the tone and volume controls (3.8) to the output jack. Some guitars have piezo pickups, in addition to or instead of magnetic pickups. Some guitars have a fixed bridge (3.4). Others have a spring-loaded hinged bridge called a vibrato bar, tremolo bar, or whammy bar, which lets players bend notes or chords up or down in pitch or perform a vibrato embellishment. A plastic pickguard on some guitars protects the body from scratches or covers the control cavity, which holds most of the wiring. The degree to which the choice of woods and other materials in the solid-guitar body (3) affects the sonic character of the amplified signal is disputed.
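Since a magnetic pickup responds to the string's motion through the coil's field, the induced voltage is roughly proportional to string velocity, i.e. to the time derivative of the string's displacement. A minimal sketch of that relationship, with made-up amplitude, decay, and gain values rather than measurements of any real pickup:

```python
import math

# Model the string displacement as a decaying sinusoid and the pickup
# output as proportional to its velocity (the time derivative of
# displacement). All constants here are illustrative placeholders.

FREQ_HZ = 110.0      # open A string fundamental
DECAY_PER_S = 1.5    # how quickly the pluck dies away
GAIN = 1e-3          # lumps together flux linkage and number of coil turns

def displacement(t: float) -> float:
    return math.exp(-DECAY_PER_S * t) * math.sin(2 * math.pi * FREQ_HZ * t)

def pickup_voltage(t: float, dt: float = 1e-6) -> float:
    """Induced EMF ~ -d(flux)/dt, approximated by a central finite difference."""
    return -GAIN * (displacement(t + dt) - displacement(t - dt)) / (2 * dt)

for t in (0.0, 0.005, 0.01, 0.5):
    print(f"t = {t:5.3f} s: v = {pickup_voltage(t) * 1000:8.3f} mV")
# The output oscillates at the string's frequency and fades with the pluck,
# which is the raw signal the tone and volume controls then shape.
```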
appear more similar than in reality. Wilhelm His also accuses Haeckel of presenting early human embryos that he conjured in his imagination rather than obtained through empirical observation, and completes his denunciation by pronouncing that Haeckel had "relinquished the right to count as an equal in the company of serious researchers."

Opposition to Haeckel

Haeckel encountered numerous objections to his artistic depictions of embryonic development during the late nineteenth and early twentieth centuries. Haeckel's opponents believed that he de-emphasized the differences between early embryonic stages in order to make the similarities between embryos of different species more pronounced.

Early opponents: Ludwig Rütimeyer, Theodor Bischoff and Rudolf Virchow

The first suggestion of fakery against Haeckel was made in late 1868 by Ludwig Rütimeyer in the Archiv für Anthropologie. Rütimeyer was a professor of zoology and comparative anatomy at the University of Basel, who rejected natural selection as simply mechanistic and proposed an anti-materialist view of nature. Rütimeyer claimed that Haeckel "had taken to kinds of liberty with established truth", and that Haeckel had presented the same image three consecutive times as the embryo of the dog, the chicken, and the turtle.

Theodor Bischoff (1807–1882) was a strong opponent of Darwinism. As a pioneer in mammalian embryology, he was one of Haeckel's strongest critics. Although Bischoff's 1840 surveys depict how similar the early embryos of man are to those of other vertebrates, he later argued that such hasty generalization was inconsistent with his more recent findings on the dissimilarity between hamster embryos and those of rabbits and dogs. Bischoff's main argument, nevertheless, concerned Haeckel's drawings of human embryos, for Haeckel was later accused of miscopying the dog embryo from him. Throughout Haeckel's time, criticism of his embryo drawings was often due in part to his critics' belief that his representations of embryological development were "crude schemata".

Contemporary criticism of Haeckel: Michael Richardson and Stephen Jay Gould

Michael Richardson and his colleagues, in a July 1997 issue of Anatomy and Embryology, demonstrated that Haeckel falsified his drawings in order to exaggerate the similarity of the phylotypic stage. In a March 2000 issue of Natural History, Stephen Jay Gould argued that Haeckel "exaggerated the similarities by idealizations and omissions", and that Haeckel's drawings are simply inaccurate and falsified. On the other hand, Richardson, one of those who criticized Haeckel's drawings, has argued that "Haeckel's much-criticized drawings are important as phylogenetic hypotheses, teaching aids, and evidence for evolution", though even Richardson admitted in Science in 1997 that his team's investigation of Haeckel's drawings was showing them to be "one of the most famous fakes in biology." Some version of Haeckel's drawings can be found in many modern biology textbooks in discussions of the history of embryology, with clarification that they are no longer considered valid.

Haeckel's proponents (past and present)

Although Charles Darwin accepted Haeckel's support for natural selection, he was tentative in using Haeckel's ideas in his writings; with regard to embryology, Darwin relied far more on von Baer's work. Haeckel's work was published in 1866 and 1874, years after Darwin's "The Origin of Species" (1859).
Despite the numerous oppositions, Haeckel has influenced many disciplines in science in his drive to integrate such disciplines of taxonomy and embryology into the Darwinian framework and to investigate phylogenetic reconstruction through his Biogenetic Law. As well, Haeckel served as a mentor to many important scientists, including Anton Dohrn, Richard and Oscar Hertwig, Wilhelm Roux, and Hans Driesch. One of Haeckel's earliest proponents was Carl Gegenbaur at the University of Jena (1865–1873), during which both men were absorbing the impact of Darwin's theory. The two quickly sought to integrate their knowledge into an evolutionary program. In determining the relationships between "phylogenetic linkages" and "evolutionary laws of form," both Gegenbaur and Haeckel relied on a method of comparison. As Gegenbaur argued, the task of comparative anatomy lies in explaining the form and organization of the animal body in order to provide evidence for the continuity and evolution of a series of organs in the body. Haeckel then provided a means of pursuing this aim with his biogenetic law, in which he proposed to compare an individual's various stages of development with its ancestral line. Although Haeckel stressed comparative embryology and Gegenbaur promoted the comparison of adult structures, both believed that the two methods could work in conjunction to produce the goal of evolutionary morphology. The philologist and anthropologist, Friedrich Müller, used Haeckel's concepts as a source for his ethnological research, involving the systematic comparison of the folklore, beliefs and practices of different societies. Müller's work relies specifically on theoretical assumptions that are very similar to Haeckel's and reflects the German practice to maintain strong connections between empirical research and the philosophical framework of science. Language is particularly important, for it establishes a bridge between natural science and philosophy. For Haeckel, language specifically represented the concept that all phenomena of human development relate to the laws of biology. Although Müller did not specifically have an influence in advocating Haeckel's embryo drawings, both shared a common understanding of development from lower to higher forms, for Müller specifically saw humans as the last link in an endless chain of evolutionary development. Modern acceptance of Haeckel's Biogenetic Law, despite current rejection of Haeckelian views, finds support in the certain degree of parallelism between ontogeny and phylogeny. A. M. Khazen, on the one hand, states that "ontogeny is obliged to repeat the main stages of phylogeny." A. S. Rautian, on the other hand, argues that the reproduction of ancestral patterns of development is a key aspect of certain biological systems. Dr. Rolf Siewing acknowledges the similarity of embryos in different species, along with the laws of von Baer, but does not believe that one should compare embryos with adult stages of development. According to M. S. Fischer, reconsideration of the Biogenetic Law is possible as a result | dog, the chicken, and the turtle. Theodor Bischoff (1807–1882), was a strong opponent of Darwinism. As a pioneer in mammalian embryology, he was one of Haeckel's strongest critics. 
In defense of Haeckel's embryo drawings, the principal argument is that of "schematisation": Haeckel's drawings were not intended to be technical and scientific depictions, but rather schematic drawings and reconstructions for a lay audience. Therefore, as R. Gursch argues, Haeckel's embryo drawings should be regarded as "reconstructions"; although they are open to criticism, they should not be considered falsifications of any sort. Modern defenses of Haeckel's embryo drawings still concede their inaccuracy, but consider charges of fraud unreasonable. As Erland Nordenskiöld argues, charges of fraud against Haeckel are unnecessary. R. Bender goes so far as to reject His's claims regarding the fabrication of certain stages of development in Haeckel's drawings, arguing that the drawings are faithful representations of real stages of embryonic development when compared to published embryos.

The survival and reproduction of Haeckel's embryo drawings

Haeckel's embryo drawings, as comparative plates, were at first copied only into biology textbooks, rather than into texts on the study of embryology.
altitude, the pressure surrounding it changes, and the process is often so rapid that there is too little time for heat transfer. This is the basis of the so-called adiabatic approximation that is used in meteorology.

Conjugate with the enthalpy, with these arguments, the other characteristic function of state of a thermodynamic system is its entropy, as a function S(H, p, {N_i}) of the same list of variables of state, except that the entropy S is replaced in the list by the enthalpy H. It expresses the entropy representation. The state variables H, p, and {N_i} are said to be the natural state variables in this representation. They are suitable for describing processes in which they are experimentally controlled. For example, H and p can be controlled by allowing heat transfer and by varying only the external pressure on the piston that sets the volume of the system.

Physical interpretation

The term U is the energy of the system, and the term pV can be interpreted as the work that would be required to "make room" for the system if the pressure of the environment remained constant. When a system, for example, n moles of a gas of volume V at pressure p and temperature T, is created or brought to its present state from absolute zero, energy must be supplied equal to its internal energy U plus pV, where pV is the work done in pushing against the ambient (atmospheric) pressure. In physics and statistical mechanics it may be more interesting to study the internal properties of a constant-volume system, and therefore the internal energy is used. In chemistry, experiments are often conducted at constant atmospheric pressure, and the pressure–volume work represents a small, well-defined energy exchange with the atmosphere, so that ΔH is the appropriate expression for the heat of reaction. For a heat engine, the change in its enthalpy after a full cycle is equal to zero, since the final and initial state are equal.

Relationship to heat

In order to discuss the relation between the enthalpy increase and heat supply, we return to the first law for closed systems, with the physics sign convention: dU = δQ − δW, where the heat δQ is supplied by conduction, radiation, Joule heating, or friction from stirring by a shaft with paddles or by an externally driven magnetic field acting on an internal rotor (which is surroundings-based work, but contributes to system-based heat). We apply it to the special case with a constant pressure at the surface. In this case the work is given by p dV (where p is the pressure at the surface and dV is the increase of the volume of the system). Cases of long-range electromagnetic interaction require further state variables in their formulation and are not considered here. In this case the first law reads:

dU = δQ − p dV.

Now,

dH = dU + d(pV).

So

dH = δQ − p dV + p dV + V dp = δQ + V dp.

If the system is under constant pressure, dp = 0, and consequently the increase in enthalpy of the system is equal to the heat added or given off:

dH = δQ.

This is why the now-obsolete term heat content was used in the 19th century.

Applications

In thermodynamics, one can calculate enthalpy by determining the requirements for creating a system from "nothingness"; the mechanical work required, pV, differs based upon the conditions that obtain during the creation of the thermodynamic system. Energy must be supplied to remove particles from the surroundings to make space for the creation of the system, assuming that the pressure p remains constant; this is the pV term. The supplied energy must also provide the change in internal energy U, which includes activation energies, ionization energies, mixing energies, vaporization energies, chemical bond energies, and so forth. Together, these constitute the change in the enthalpy U + pV.
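As a numeric check on dH = δQ at constant pressure, the sketch below heats an ideal gas at fixed pressure and compares the enthalpy change with the internal-energy change plus the pΔV work. The choice of gas, its heat capacities, and the temperature step are illustrative, not values from the text:

```python
R = 8.314  # gas constant, J/(mol K)

# One mole of a monatomic ideal gas heated at constant pressure.
# Cv and Cp are the standard monatomic values; the temperature rise
# is an arbitrary illustrative number.
n = 1.0
Cv = 1.5 * R          # constant-volume molar heat capacity
Cp = Cv + R           # constant-pressure molar heat capacity
dT = 50.0             # temperature increase, K

dU = n * Cv * dT      # change in internal energy
pdV = n * R * dT      # p*dV work done pushing back the surroundings
dH = dU + pdV         # enthalpy change, from H = U + pV
q_p = n * Cp * dT     # heat supplied at constant pressure

print(f"dU   = {dU:7.1f} J")
print(f"p dV = {pdV:7.1f} J")
print(f"dH   = {dH:7.1f} J")
print(f"q_p  = {q_p:7.1f} J  (equals dH, as the text derives)")
```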
For systems at constant pressure, with no external work done other than the pV work, the change in enthalpy is the heat received by the system. For a simple system with a constant number of particles at constant pressure, the difference in enthalpy is the maximum amount of thermal energy derivable from an isobaric thermodynamic process. Heat of reaction The total enthalpy of a system cannot be measured directly; the enthalpy change of a system is measured instead. Enthalpy change is defined by the following equation: ΔH = Hf − Hi, where ΔH is the enthalpy change, Hf is the final enthalpy of the system (in a chemical reaction, the enthalpy of the products or the system at equilibrium), and Hi is the initial enthalpy of the system (in a chemical reaction, the enthalpy of the reactants). For an exothermic reaction at constant pressure, the system's change in enthalpy, ΔH, is negative because the products of the reaction have a smaller enthalpy than the reactants, and it equals the heat released in the reaction if no electrical or shaft work is done. In other words, the overall decrease in enthalpy is achieved by the generation of heat. Conversely, for a constant-pressure endothermic reaction, ΔH is positive and equal to the heat absorbed in the reaction. From the definition of enthalpy as H = U + pV, the enthalpy change at constant pressure is ΔH = ΔU + p ΔV. However, for most chemical reactions the work term p ΔV is much smaller than the internal energy change ΔU, which is approximately equal to ΔH. As an example, for the combustion of carbon monoxide 2 CO(g) + O2(g) → 2 CO2(g), ΔH = −566.0 kJ and ΔU = −563.5 kJ. Since the differences are so small, reaction enthalpies are often described as reaction energies and analyzed in terms of bond energies. Specific enthalpy The specific enthalpy of a uniform system is defined as h = H/m, where m is the mass of the system. The SI unit for specific enthalpy is the joule per kilogram. It can be expressed in other specific quantities by h = u + pv, where u is the specific internal energy, p is the pressure, and v is the specific volume, which is equal to 1/ρ, where ρ is the density. Enthalpy changes An enthalpy change describes the change in enthalpy observed in the constituents of a thermodynamic system when undergoing a transformation or chemical reaction. It is the difference between the enthalpy after the process has completed, i.e. the enthalpy of the products assuming that the reaction goes to completion, and the initial enthalpy of the system, namely the reactants. These processes are specified solely by their initial and final states, so that the enthalpy change for the reverse process is the negative of that for the forward process. A common standard enthalpy change is the enthalpy of formation, which has been determined for a large number of substances. Enthalpy changes are routinely measured and compiled in chemical and physical reference works, such as the CRC Handbook of Chemistry and Physics. The following is a selection of enthalpy changes commonly recognized in thermodynamics. When used in these recognized terms, the qualifier "change" is usually dropped and the property is simply termed "enthalpy of process". Since these properties are often used as reference values, it is very common to quote them for a standardized set of environmental parameters, or standard conditions, including: a pressure of one atmosphere (1 atm or 101.325 kPa) or 1 bar; a temperature of 25 °C or 298.15 K; a concentration of 1.0 M when the element or compound is present in solution; and elements or compounds in their normal physical states, i.e. the
standard state. For such standardized values the name of the enthalpy is commonly prefixed with the term standard, e.g. standard enthalpy of formation. Chemical properties: Enthalpy of reaction, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of substance reacts completely. Enthalpy of formation, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a compound is formed from its elementary antecedents. Enthalpy of combustion, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a substance burns completely with oxygen. Enthalpy of hydrogenation, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of an unsaturated compound reacts completely with an excess of hydrogen to form a saturated compound. Enthalpy of atomization, defined as the enthalpy change required to separate one mole of a substance completely into its constituent atoms. Enthalpy of neutralization, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of water is formed by the reaction of an acid and a base. Standard enthalpy of solution, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a solute is dissolved completely in an excess of solvent, so that the solution is at infinite dilution. Standard enthalpy of denaturation (biochemistry), defined as the enthalpy change required to denature one mole of compound. Enthalpy of hydration, defined as the enthalpy change observed when one mole of gaseous ions is completely dissolved in water, forming one mole of aqueous ions.
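As a concrete illustration of these definitions, the sketch below computes a reaction enthalpy from tabulated formation enthalpies (Hess's law) and converts it to the internal-energy change using the ideal-gas approximation Δ(pV) ≈ Δn_gas·RT, reproducing the carbon monoxide figures quoted earlier. The formation values are common textbook numbers and the function names are ours; this is a minimal sketch, not part of the article.

```python
# Sketch: reaction enthalpy via Hess's law, then Delta U at constant
# pressure using the ideal-gas approximation Delta(pV) ~ Delta(n_gas)*R*T.
# Formation enthalpies are common textbook values (kJ/mol), for illustration.
R = 8.314  # gas constant, J/(mol K)

dHf = {"CO(g)": -110.5, "O2(g)": 0.0, "CO2(g)": -393.5}  # standard enthalpies of formation

def reaction_enthalpy(products: dict, reactants: dict) -> float:
    """Sum of n * dHf over products minus reactants (kJ)."""
    total = lambda side: sum(n * dHf[s] for s, n in side.items())
    return total(products) - total(reactants)

def delta_U(delta_H_kJ: float, delta_n_gas: int, T: float = 298.15) -> float:
    """Delta U = Delta H - Delta(n_gas) * R * T, with R*T converted to kJ."""
    return delta_H_kJ - delta_n_gas * R * T / 1000.0

# 2 CO(g) + O2(g) -> 2 CO2(g): Delta(n_gas) = 2 - 3 = -1
dH = reaction_enthalpy({"CO2(g)": 2}, {"CO(g)": 2, "O2(g)": 1})
print(round(dH, 1))               # -566.0 kJ
print(round(delta_U(dH, -1), 1))  # -563.5 kJ
```

The small gap between the two printed values is exactly the p ΔV work term discussed above, which is why reaction enthalpies are so often treated as reaction energies.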
the National Theatre of Hanover before studying acting at the Hochschule für Musik und Theater Hamburg. Afterwards, he took on guest roles in several German television series such as Music Groschenweise, Einsatz für Lohbeck, Doppelter Einsatz and Die Wache. In March 1996, Action Concept cast Atalay in what would become his breakthrough role, starring as Semir Gerkhan, a police detective of Turkish origin. Atalay co-wrote the screenplay for one of the series' episodes, titled "Checkmate," and is a consulting producer for the series as of 2016. In 2005, Atalay published a short story, "Die Türkei ist da oben" ("Turkey is Up There"), in the German-Turkish anthology Was lebbt du?. In September 2012, he and Ilka Bessin (Cindy from Marzahn) shot the short film "Alarm for Cindy 11", a parody of Alarm for Cobra 11; it was broadcast on September 15, 2012. Personal life Atalay's first marriage was to film and theatre actress Astrid Pollmann in 2004; the couple separated in late 2009. They have one daughter, Pauletta, who has also starred alongside her father in Alarm für Cobra 11 as Ayda Gerkhan, the middle child of Semir Gerkhan. His second child was born in mid-2012 to makeup artist and manager Katja Ohneck, to whom he is married as of 2017. Filmography
American folksinger Peter Tevis, with the two collaborating on a version of Woody Guthrie's "Pastures of Plenty". Tevis is credited with singing the lyrics of Morricone's songs such as "A Gringo Like Me" (from Gunfight at Red Sands) and "Lonesome Billy" (from Bullets Don't Argue). Tevis later recorded a vocal version of the A Fistful of Dollars theme that was not used in the film. Association with Sergio Leone The turning point in Morricone's career came in 1964, the year in which his third child, Andrea Morricone, who would also become a film composer, was born. Film director Sergio Leone hired Morricone, and together they created a distinctive score to accompany Leone's different version of the Western, A Fistful of Dollars (1964). The Dollars Trilogy Because budget strictures limited Morricone's access to a full orchestra, he used gunshots, cracking whips, whistles, voices, Jew's harp, trumpets, and the new Fender electric guitar instead of orchestral arrangements of Western standards à la John Ford. Morricone used these special effects to punctuate and comically tweak the action, cluing in the audience to the taciturn man's ironic stance. As memorable as Leone's close-ups, harsh violence, and black comedy, Morricone's work helped to expand the musical possibilities of film scoring. Initially, Morricone was billed on the film as Dan Savio. A Fistful of Dollars came out in Italy in 1964 and was released in America three years later, greatly popularising the so-called Spaghetti Western genre. For the American release, Sergio Leone and Ennio Morricone adopted American-sounding names, billing themselves respectively as Bob Robertson and Dan Savio. Over its theatrical release, the film grossed more than any other Italian film up to that point. It debuted in the United States in January 1967 and eventually grossed $14.5 million in its American release, against a budget of about $200,000. With the score of A Fistful of Dollars, Morricone began his 20-year collaboration with his childhood friend Alessandro Alessandroni and his Cantori Moderni. Alessandroni provided the whistling and the twanging guitar on the film scores, while his Cantori Moderni were a flexible troupe of modern singers. Morricone particularly exploited the solo soprano of the group, Edda Dell'Orso, then at the height of her powers: "an extraordinary voice at my disposal". The composer subsequently scored Leone's other two Dollars Trilogy (or Man with No Name Trilogy) spaghetti westerns: For a Few Dollars More (1965) and The Good, the Bad and the Ugly (1966). All three films starred the American actor Clint Eastwood as the Man with No Name and depicted Leone's own intense vision of the mythical West. Morricone commented in 2007: "Some of the music was written before the film, which was unusual. Leone's films were made like that because he wanted the music to be an important part of it; he kept the scenes longer because he did not want the music to end." According to Morricone, this explains "why the films are so slow". Despite the small film budgets, the Dollars Trilogy was a box-office success. The available budget for The Good, the Bad and the Ugly was about $1.2 million, but it became the most successful film of the trilogy, grossing $25.1 million in the United States and more than 2.3 billion lire (1.2 million euros) in Italy alone. Morricone's score became a major success and sold more than three million copies worldwide.
On 14 August 1968 the original score was certified gold by the RIAA for the sale of 500,000 copies in the United States alone. The main theme to The Good, the Bad and the Ugly, also titled "The Good, the Bad and the Ugly", was a hit in 1968 for Hugo Montenegro, whose rendition was a No. 2 Billboard pop single in the U.S. and a U.K. No. 1 single (for four weeks from mid-November that year). "The Ecstasy of Gold" became one of Morricone's best-known compositions. The opening scene of Jeff Tremaine's Jackass Number Two (2006), in which the cast is chased through a suburban neighbourhood by bulls, is accompanied by this piece. The punk rock band the Ramones used "The Ecstasy of Gold" as a closing theme during their live performances, and Metallica has used it as the introductory music for its concerts since 1983. The composition is also included on Metallica's live symphonic album S&M as well as the live album Live Shit: Binge & Purge. An instrumental metal cover by Metallica (with minimal vocals by lead singer James Hetfield) appeared on the 2007 Morricone tribute album We All Love Ennio Morricone; this version was nominated for a Grammy Award in the category of Best Rock Instrumental Performance. In 2009, the Grammy Award-winning hip-hop artist Coolio extensively sampled the theme for his song "Change". Once Upon a Time in the West and others Following the success of the Dollars Trilogy, Morricone also composed the scores for Once Upon a Time in the West (1968) and Leone's last credited western film, A Fistful of Dynamite (1971), as well as the score for My Name Is Nobody (1973). Morricone's score for Once Upon a Time in the West is one of the best-selling original instrumental scores in the world today, with as many as 10 million copies sold, including one million copies in France and more than 800,000 copies in the Netherlands. One of the main themes from the score, "Man with a Harmonica" (L'uomo dell'armonica), became known worldwide and sold more than 1,260,000 copies in France. The collaboration with Leone is considered one of the exemplary collaborations between a director and a composer. Morricone's last score for Leone was for his final film, the gangster drama Once Upon a Time in America (1984). Leone died on 30 April 1989 of a heart attack at the age of 60. Before his death, Leone had been part-way through planning a film on the Siege of Leningrad, set during World War II; by 1989 he had acquired $100 million in financing from independent backers for the war epic and had convinced Morricone to compose the film score. The project was cancelled when Leone died two days before he was to officially sign on for the film. In early 2003, Italian filmmaker Giuseppe Tornatore announced he would direct a film called Leningrad. The film has yet to go into production, and Morricone was cagey as to details on account of Tornatore's superstitious nature. Association with Sergio Corbucci and Sergio Sollima Two years after the start of his collaboration with Sergio Leone, Morricone also started to score music for another Spaghetti Western director, Sergio Corbucci. The composer wrote music for Corbucci's Navajo Joe (1966), The Hellbenders (1967), The Mercenary/The Professional Gun (1968), The Great Silence (1968), Compañeros (1970), Sonny and Jed (1972), and What Am I Doing in the Middle of the Revolution? (1972).
In addition, Morricone composed music for the western films of Sergio Sollima: The Big Gundown (with Lee Van Cleef, 1966), Face to Face (1967), and Run, Man, Run (1968), as well as the 1970 crime thriller Violent City (with Charles Bronson) and the poliziottesco film Revolver (1973). Other westerns Other notable scores for less popular Spaghetti Westerns include Duello nel Texas (1963), Bullets Don't Argue (1964), A Pistol for Ringo (1965), The Return of Ringo (1965), Seven Guns for the MacGregors (1966), The Hills Run Red (1966), Giulio Petroni's Death Rides a Horse (1967) and Tepepa (1968), A Bullet for the General (1967), Guns for San Sebastian (with Charles Bronson and Anthony Quinn, 1968), A Sky Full of Stars for a Roof (1968), The Five Man Army (1969), Don Siegel's Two Mules for Sister Sara (1970), Life Is Tough, Eh Providence? (1972), and Buddy Goes West (1981). Dramas and political movies With Leone's films, Ennio Morricone's name had been put firmly on the map. Most of Morricone's film scores of the 1960s were composed outside the Spaghetti Western genre, while still using Alessandroni's team. This music included the themes for Il Malamondo (1964), Slalom (1965), and Listen, Let's Make Love (1967). In 1968, Morricone reduced his work outside the movie business and wrote scores for 20 films in that year alone. The scores included psychedelic accompaniment for Mario Bava's superhero romp Danger: Diabolik (1968). Morricone collaborated with Marco Bellocchio (Fists in the Pocket, 1965), Gillo Pontecorvo (The Battle of Algiers, 1966, and Queimada!, 1969, with Marlon Brando), Roberto Faenza (H2S, 1968), Giuliano Montaldo (Sacco e Vanzetti, 1971), Giuseppe Patroni Griffi ('Tis Pity She's a Whore, 1971), Mauro Bolognini (Drama of the Rich, 1974), Umberto Lenzi (Almost Human, 1974), Pier Paolo Pasolini (Salò, or the 120 Days of Sodom, 1975), Bernardo Bertolucci (Novecento, 1976), and Tinto Brass (The Key, 1983). In 1970, Morricone wrote the score for Violent City. That same year, he received his first Nastro d'Argento for the music of Metti, una sera a cena (Giuseppe Patroni Griffi, 1969), and his second only a year later for Sacco e Vanzetti (Giuliano Montaldo, 1971), in which he collaborated with the legendary American folk singer and activist Joan Baez. His soundtrack for Sacco e Vanzetti contains another well-known Morricone composition, the folk song "Here's to You", sung by Joan Baez. For the lyrics, Baez was inspired by a letter from Bartolomeo Vanzetti: "Father, yes, I am a prisoner / Fear not to relay my crime". The song became a hit in several countries, selling more than 790,000 copies in France alone. It was later included in movies such as The Life Aquatic with Steve Zissou. In the early 1970s, Morricone had success with other singles, including "A Fistful of Dynamite" (1971) and "God With Us" (1974), which sold respectively 477,000 and 378,000 copies in France alone. Giallo and Horror Morricone's eclecticism found its way to films in the horror genre, such as the giallo thrillers of Dario Argento, from The Bird with the Crystal Plumage (1970), The Cat o' Nine Tails (1971), and Four Flies on Grey Velvet (1971) to The Stendhal Syndrome (1996) and The Phantom of the Opera (1998). His other horror scores include Nightmare Castle (1965), A Quiet Place in the Country (1968), The Antichrist (1974), Autopsy (1975), and Night Train Murders (1975).
In addition, Morricone composed music for many popular and cult Italian giallo films, such as Senza sapere niente di lei (1969), Forbidden Photos of a Lady Above Suspicion (1970), A Lizard in a Woman's Skin (1971), Cold Eyes of Fear (1971), The Fifth Cord (1971), Short Night of Glass Dolls (1971), Black Belly of the Tarantula (1971), My Dear Killer (1972), What Have You Done to Solange? (1972), Who Saw Her Die? (1972), and Spasmo (1974). In 1977 Morricone scored John Boorman's Exorcist II: The Heretic and Alberto De Martino's apocalyptic horror film Holocaust 2000, starring Kirk Douglas. In 1982 he composed the score for John Carpenter's science fiction horror movie The Thing; his main theme was later echoed in Marco Beltrami's score for the 2011 prequel to the 1982 film. Hollywood career The Dollars Trilogy was not released in the United States until 1967, when United Artists, which had already enjoyed success distributing the British-produced James Bond films in the United States, decided to release Sergio Leone's Spaghetti Westerns. The American release gave Morricone exposure in America, and his film music became quite popular in the United States. One of Morricone's first contributions for an American director was his music for the religious epic The Bible: In the Beginning... by John Huston. According to Sergio Miceli's book Morricone, la musica, il cinema, Morricone wrote about 15 or 16 minutes of music, which were recorded for a screen test and conducted by Franco Ferrara. At first Morricone's teacher Goffredo Petrassi had been engaged to write the score for the big-budget epic, but Huston preferred another composer. RCA Records then proposed Morricone, who was under contract with them, but a conflict arose between the film's producer, Dino De Laurentiis, and RCA: the producer wanted exclusive rights to the soundtrack, while RCA still had a monopoly on Morricone at the time and did not want to release the composer. Morricone's work was consequently rejected, because he did not get permission from RCA to work for Dino De Laurentiis alone. The composer reused parts of his unused score for The Bible: In the Beginning... in such films as The Return of Ringo (1965) by Duccio Tessari and Alberto Negrin's The Secret of the Sahara (1987). Morricone never left Rome to compose his music and never learned to speak English. But given that the composer always worked in a wide field of composition genres, from "absolute music", which he always produced, to "applied music", working as orchestrator and conductor in the recording field, and then as a composer for theatre, radio, and cinema, the impression arises that he never really cared that much about his standing in the eyes of Hollywood. 1970–1985: From Two Mules to Red Sonja In 1970, Morricone composed the music for Don Siegel's Two Mules for Sister Sara, an American-Mexican western film starring Shirley MacLaine and Clint Eastwood. The same year the composer also delivered the title theme for The Men from Shiloh, the renamed final season of the American Western television series The Virginian. In 1974–1975 Morricone wrote music for Spazio 1999, an Italian-produced compilation movie made to launch the Italian-British television series Space: 1999, whose original episodes featured music by Barry Gray. A soundtrack album was not released until 2016 (on CD) and 2017 (on LP).
In 1975 he scored the George Kennedy revenge thriller The "Human" Factor, the final film of director Edward Dmytryk. Two years later he composed the score for Exorcist II: The Heretic, John Boorman's sequel to William Friedkin's 1973 film The Exorcist. The horror film was a major disappointment at the box office, grossing $30,749,142 in the United States. In 1978, the composer worked with Terrence Malick on Days of Heaven, starring Richard Gere. Although Morricone had produced some of the most popular and widely imitated film music of the 1960s and 1970s, Days of Heaven earned him his first Oscar nomination for Best Original Score, with his score up against Jerry Goldsmith's The Boys from Brazil, Dave Grusin's Heaven Can Wait, Giorgio Moroder's Midnight Express (the eventual winner), and John Williams's Superman: The Movie at the Oscar ceremony in 1979. 1986–2020: From The Mission to The Hateful Eight Association with Roland Joffé The Mission, directed by Joffé, dealt with a piece of history considerably more distant, as Spanish Jesuit missionaries see their work undone when a tribe of Paraguayan natives falls within a territorial dispute between the Spanish and Portuguese. At one point the score was one of the world's best-selling film scores, with over 3 million copies sold worldwide. Morricone finally received a second Oscar nomination for The Mission, but his original score lost out to Herbie Hancock's coolly arranged jazz for Bertrand Tavernier's Round Midnight. It was considered a surprising and controversial win, given that much of the music in that film was pre-existing. Morricone stated the following during a 2001 interview with The Guardian: "I definitely felt that I should have won for The Mission. Especially when you consider that the Oscar winner that year was Round Midnight, which was not an original score. It had a very good arrangement by Herbie Hancock, but it used existing pieces. So there could be no comparison with The Mission. There was a theft!" His score for The Mission was ranked at number 1 in a poll of the all-time greatest film scores; the top 10 list was compiled from the votes of 40 film composers, including Michael Giacchino and Carter Burwell. The score is ranked 23rd on the AFI's list of the 25 greatest film scores of all time. Association with De Palma and Levinson Brian De Palma worked with Morricone on three occasions: The Untouchables (1987), the 1989 war drama Casualties of War, and the science fiction film Mission to Mars (2000). Morricone's score for The Untouchables brought him his third nomination for the Academy Award for Best Original Score. In a 2001 interview with The Guardian, Morricone said that he had good experiences with De Palma: "De Palma is delicious! He respects music, he respects composers. For The Untouchables, everything I proposed to him was fine, but then he wanted a piece that I didn't like at all, and of course, we didn't have an agreement on that. It was something I didn't want to write – a triumphal piece for the police. I think I wrote nine different pieces for this in total and I said, 'Please don't choose the seventh!' because it was the worst. And guess what he chose? The seventh one. But it really suits the movie." Another American director, Barry Levinson, commissioned the composer on two occasions.
The first was the crime drama Bugsy, starring Warren Beatty, which received ten Oscar nominations, winning two, for Best Art Direction-Set Decoration (Dennis Gassner, Nancy Haigh) and Best Costume Design. "He doesn't have a piano in his studio. I always thought that with composers, you sit at the piano and you try to find the melody. There's no such thing with Morricone. He hears a melody, and he writes it down. He hears the orchestration completely done," said Levinson in an interview. Other notable Hollywood scores During his career in Hollywood, Morricone was approached for numerous other projects, including the Gregory Nava drama A Time of Destiny (1988), Frantic by Polish-French director Roman Polanski (1988, starring Harrison Ford), Franco Zeffirelli's 1990 drama film Hamlet (starring Mel Gibson and Glenn Close), the neo-noir crime film State of Grace by Phil Joanou (1990, starring Sean Penn and Ed Harris), Rampage (1992) by William Friedkin, and the romantic drama Love Affair (1994) by Warren Beatty. Association with Quentin Tarantino In 2009, Tarantino originally wanted Morricone to compose the film score for Inglourious Basterds. Morricone was unable to, because the film's sped-up production schedule conflicted with his scoring of Giuseppe Tornatore's Baarìa. However, Tarantino did use eight tracks composed by Morricone in the film, four of them included on the soundtrack. The tracks came originally from Morricone's scores for The Big Gundown (1966), Revolver (1973) and Allonsanfàn (1974). In 2012, Morricone composed the song "Ancora Qui", with lyrics by Italian singer Elisa, for Tarantino's Django Unchained; the track appeared together with three existing Morricone compositions on the soundtrack. "Ancora Qui" was one of the contenders for an Academy Award nomination in the Best Original Song category, but the song was ultimately not nominated. On 4 January 2013 Morricone presented Tarantino with a Life Achievement Award at a special ceremony held as a continuation of the International Rome Film Festival. In 2014, Morricone was misquoted as claiming that he would "never work" with Tarantino again; he later agreed to write an original film score for Tarantino's The Hateful Eight, which won him the Academy Award for Best Original Score in 2016. The nomination made him, at that time, the second-oldest nominee in Academy history, behind Gloria Stuart, and the win was his first competitive Oscar: at the age of 87, he became the oldest person at the time to win a competitive Oscar. Composer for Giuseppe Tornatore In 1988, Morricone started an ongoing and very successful collaboration with Italian director Giuseppe Tornatore. His first score for Tornatore was for the drama film Cinema Paradiso. The international version of the film won the Special Jury Prize at the 1989 Cannes Film Festival and the 1989 Best Foreign Language Film Oscar. Morricone received a BAFTA award with his son Andrea, and a David di Donatello, for his score. In 2002, the 173-minute director's cut was released (known in the US as Cinema Paradiso: The New Version).
After the success of Cinema Paradiso, the composer wrote the music for all subsequent films by Tornatore: the drama film Everybody's Fine (Stanno Tutti Bene, 1990), A Pure Formality (1994) starring Gérard Depardieu and Roman Polanski, The Star Maker (1995), The Legend of 1900 (1998) starring Tim Roth, the 2000 romantic drama Malèna (which featured Monica Bellucci), and the psychological mystery thriller La sconosciuta (2006). Morricone also composed the scores for Baarìa (2009), The Best Offer (2013), starring Geoffrey Rush, Jim Sturgess and Donald Sutherland, and the romantic drama The Correspondence (2016). The composer won several music awards for his scores for Tornatore's movies: he received a fifth Academy Award nomination and a Golden Globe nomination for Malèna, and for The Legend of 1900 he won a Golden Globe Award for Best Original Score. In September 2021 Tornatore presented, out of competition at the 78th Venice International Film Festival, Ennio, a documentary film about Morricone. Television series and last works Morricone wrote the score for the Mafia television series La piovra, seasons 2 to 10, from 1985 to 2001, including the themes "Droga e sangue" ("Drugs and Blood"), "La Morale", and "L'Immorale". Morricone worked as the conductor of seasons 3 to 5 of the series. He also worked as the music supervisor for the television project La bibbia ("The Bible"). In the late 1990s, he collaborated with his son Andrea on the Ultimo crime dramas, resulting in Ultimo (1998), Ultimo 2 – La sfida (1999), Ultimo 3 – L'infiltrato (2004) and Ultimo 4 – L'occhio del falco (2013). For Canone inverso (2000), based on the music-themed novel of the same name by Paolo Maurensig, directed by Ricky Tognazzi and starring Hans Matheson, Morricone won Best Score awards at the David di Donatello Awards and the Silver Ribbons. In the 2000s, Morricone continued to compose music for successful television series such as Il Cuore nel Pozzo (2005),
Karol: A Man Who Became Pope (2005), La provinciale (2006), Giovanni Falcone (2007), Pane e libertà (2009) and Come Un Delfino 1–2 (2011–2013). Morricone provided the string arrangements for Morrissey's "Dear God Please Help Me", from the 2006 album Ringleader of the Tormentors. In 2008, the composer recorded music for a Lancia commercial featuring Richard Gere and directed by Harald Zwart (known for directing The Pink Panther 2). In spring and summer 2010, Morricone worked with Hayley Westenra on her album Paradiso, which features new songs written by Morricone as well as some of his best-known film compositions of the previous 50 years. Westenra recorded the album with Morricone's orchestra in Rome during the summer of 2010. From 1995, he composed the music for several advertising campaigns for Dolce & Gabbana; the commercials were directed by Giuseppe Tornatore. In 2013, Morricone collaborated with Italian singer-songwriter Laura Pausini on a new version of her hit single "La solitudine" for her 20th-anniversary greatest-hits album 20 – The Greatest Hits. Morricone composed the music for The Best Offer (2013) by Giuseppe Tornatore, and he wrote the scores for Christian Carion's En mai, fais ce qu'il te plaît (2015) and Tornatore's next film, The Correspondence (2016), featuring Jeremy Irons and Olga Kurylenko. In July 2015, Quentin Tarantino announced, after the screening of footage from his movie The Hateful Eight at San Diego Comic-Con International, that Morricone would score the film, the first Western Morricone had scored since 1981. The score was critically acclaimed and won several awards, including the Golden Globe Award for Best Original Score and the Academy Award for Best Original Score. Live performances Before receiving his diplomas in trumpet, composition and instrumentation from the conservatory, Morricone was already active as a trumpet player, often performing in an orchestra that specialised in music written for films. After completing his education at Saint Cecilia, the composer honed his orchestration skills as an arranger for Italian radio and television. In order to support himself, he moved to RCA in the early sixties and entered the front ranks of the Italian recording industry. From 1964, Morricone was also a founding member of the Rome-based avant-garde ensemble Gruppo di Improvvisazione Nuova Consonanza, and during the group's existence (until 1978) he performed with it several times as a trumpet player. To ready his music for live performance, he joined smaller pieces of music together into longer suites. Rather than single pieces, which would require the audience to applaud every few minutes, Morricone thought it best to create a series of suites lasting from 15 to 20 minutes, forming a sort of symphony in various movements and alternating successful pieces with personal favourites.
In concert, Morricone normally had 180 to 200 musicians and vocalists under his baton, performing multiple genre-crossing collections of music in which rock, symphonic and ethnic instruments shared the stage. On 20 September 1984 Morricone conducted the Orchestre national des Pays de la Loire at Cinésymphonie '84 ("Première nuit de la musique de film"/"First night of film music") in the Salle Pleyel concert hall in Paris, performing some of his best-known compositions, such as Metti, una sera a cena, Novecento and The Good, the Bad and the Ugly. Michel Legrand and Georges Delerue performed on the same evening. On 15 October 1987 Morricone gave a concert in front of 12,000 people at the Sportpaleis in Antwerp, Belgium, with the Dutch Metropole Orchestra and the Italian operatic soprano Alide Maria Salvetta; a live album recorded at this concert was released the same year. On 9 June 2000 Morricone went to the Flanders International Film Festival Ghent to conduct his music together with the National Orchestra of Belgium. During the concert's first part, a screening of The Life and Death of King Richard III (1912) was accompanied by live music by Morricone, the very first time that the score had been performed live in Europe. The second part of the evening consisted of an anthology of the composer's work. The event took place on the eve of Euro 2000, the European Football Championship in Belgium and the Netherlands. Morricone had performed over 250 concerts as of 2001. The composer started a world tour in 2001, the latter part sponsored by Giorgio Armani, with the Orchestra Roma Sinfonietta, touring London (Barbican, 2001; 75th-birthday concert, Royal Albert Hall, 2003), Paris, Verona, and Tokyo. Morricone performed his classic film scores at the Gasteig in Munich in 2004. He made his North American concert debut on 3 February 2007 at Radio City Music Hall in New York City. The previous evening, Morricone had presented at the United Nations a concert comprising some of his film themes, as well as the cantata Voci dal silenzio, to welcome the incoming UN Secretary-General, Ban Ki-moon.
of the Grand Rapids–Kalamazoo–Battle Creek television market. The station is owned by McLean, Virginia-based Tegna Inc. WZZM's studios are located on 3 Mile Road NW in Walker (with a Grand Rapids mailing address), and its transmitter is located in Grant, Michigan. The station's transmitter is about 40 miles north of the other stations in the Grand Rapids market, and its over-the-air signal is unavailable in the market's two other major cities as a result. Since April 2009, however, WZZM has been available on most cable providers in Southwest Michigan, even though Battle Creek-based WOTV (channel 41, owned by Nexstar Media Group) serves as the ABC affiliate for that part of the Grand Rapids market. Until then, viewers outside the reach of WZZM's signal (which remains the case in Coldwater and Sturgis as of January 2021) relied on out-of-market stations from South Bend, Indiana, or Lansing to view syndicated programs carried by WZZM. History A local group known as West Michigan Telecasters received a construction permit to operate a television station on VHF channel 9 in 1961. Later that year, however, the Federal Communications Commission (FCC) revised the channel allocations in the area, moving the VHF channel 13 allocation from Cadillac to Grand Rapids; WWTV in Cadillac, then on channel 13, was required to move to channel 9 as a result. WZZM-TV officially signed on the air at 6:30 p.m. on November 1, 1962. The station went off the air just 20 minutes later, after a tube in its transmitter failed; it returned to the air 10 minutes after that. The celebratory opening show was anchored by news director Jack Hogan. WZZM had humble beginnings; its first broadcasts came from a banquet room turned studio at the Pantlind Hotel (now the Amway Grand Plaza Hotel). Live broadcasts included This Morning with Bud Lindeman, Shirley's Show and an evening news program, though the station's most notable show was The Bozo Show, which was broadcast on the station for more than 30 years. Bill Merchant was the original Bozo, with Dick Richards as "The Ringmaster"; Richards took over the role of Bozo shortly thereafter. As a result of the swap with WWTV, WZZM was now short-spaced to WSPD-TV (now WTVG) in Toledo, Ohio. It had to build its transmitter about 40 miles farther north than the other stations in West Michigan and redirect its signal in order to protect WSPD-TV from interference. As a result, WZZM's signal barely reached Kalamazoo and just missed Battle Creek. Southwestern Michigan viewers had to rely on WSJV in Elkhart, Indiana, WXYZ-TV in Detroit, or WLS-TV in Chicago for ABC programming until WUHQ-TV (channel 41, now WOTV) signed on from Battle Creek in 1971. In late 1964, WZZM-TV signed on a satellite station in Kalamazoo, operating on VHF channel 12. In August 1971, WZZM opened a multimillion-dollar, state-of-the-art studio in Walker, with Congressman Gerald Ford presiding over the ceremony. In the following years, WZZM became a formidable force in the Grand Rapids market, gathering high ratings and a reputation for having one of the top newscasts in the West Michigan area. In 1978, West Michigan Telecasters sold WZZM to Miami-based Wometco Enterprises; WZZM (95.7 FM) was sold at that time, becoming WZZR. Wometco's stations were sold to Kohlberg Kravis Roberts (KKR) in 1985; KKR subsequently sold the station to Price Communications in 1986. A local investor, Richard Appleton, formed Northstar Television in 1989 and purchased WZZM.
Appleton tried to acquire WUHQ in 1991 and turn it into a satellite of WZZM, which would have created a strong combined signal with about 40% overlap, but the proposed deal fell through at the last minute. Ironically, WUHQ had served as a de facto satellite of WZZM for most of its history; before WUHQ was able to acquire a network feed from ABC, its engineers had to switch to and from WZZM's signal on most occasions. In 1992, the Northstar Television group (WZZM, along with WNAC-TV in Providence, Rhode Island, and WAPT in Jackson, Mississippi) was sold to Argyle Television Holdings II. The Gannett Company bought WZZM and sister station WGRZ in Buffalo, New York, in January 1997 in a trade deal with Argyle involving WLWT in Cincinnati and KOCO-TV in Oklahoma City. The deal was done to resolve cross-ownership issues stemming from Gannett's ownership at the time of cable provider Multimedia Cablevision (acquired with its purchase of Multimedia, Inc. in 1995) in the Oklahoma City market, as well as cross-ownership issues with the newspapers The Cincinnati Enquirer and The Niagara Gazette; the FCC at the time barred a television station and a cable provider or newspaper from being owned by the same company in a single market. In the 1990s, with the new millennium looming, WZZM made an array of changes: the station purchased new news vehicles, introduced a new tape format (Beta SP) to digitize all media, added a new radar receiver and new weather cameras across the state, and built a new set, coinciding with the introduction of a new logo. In early spring 2006, WZZM finalized a major station overhaul, complete with a new logo, graphics, and promotional campaign. In late September 2006, WZZM announced on air, through a series of commercials, that the morning newscast (with Derek Francis, Lauren Stanton and Hally Vogel) had moved into first place in viewership, according to Nielsen. On September 14, 2006, WZZM broadcast its first local program in high definition, the special Great Lakes Adventure. Lee Van Ameyde and Juliet Dragos hosted the special about Sleeping Bear Dunes, Mackinac Island, the Mackinac Bridge, Michigan's wine country, and charter boats. In 2007, WZZM launched three websites co-developed with Gannett's Michigan newspapers: MichiganMoms.com (now MomsLikeMe.com), MichiganSmartShopper.com, and MyMitten.com. Around the first week of October 2012, Gannett entered a dispute with Dish Network regarding compensation fees and Dish's AutoHop commercial-skipping feature on its Hopper digital video recorders. Gannett demanded that Dish discontinue AutoHop on the grounds that it was hurting advertising revenue for WZZM. Gannett threatened to suspend its contract with the satellite provider should the two sides fail to reach an agreement by October 7; the parties eventually reached an agreement after extending the deadline for a few hours. On June 29, 2015, the Gannett Company split in two, with one side specializing in print media and the other in broadcast and digital media; WZZM was retained by the latter company, named Tegna. Programming Syndicated programs airing on WZZM include Access Daily, Daily Blast Live, Hot Bench, Judge Judy and Entertainment Tonight. WZZM was one of the first stations in Michigan to produce and broadcast local commercials and station promotions in high definition.
It was also the first station to air segments, such as its popular high school sports franchise 13 On Your Sidelines, in high definition.

My West Michigan

Take Five & Company (originally named Take Five Grand Rapids) is a live talk and entertainment show that airs weekdays at 9:00 a.m. (it originally aired at 5:00 p.m. from its debut in early 2004 until September 2005, when it moved to 4:30 p.m.). The program is hosted by Catherine Behrendt. On August 25, 2008, the program's title was revised to Take Five and Company, and it was expanded from a half-hour to one hour and moved to 9:00 a.m. (displacing Live with Regis and Kelly, which had held that timeslot on WZZM for more than 20 years; Live was then moved to WWMT). The program utilizes some of the same […] It has been hosted by longtime WZZM anchors Lauren Stanton and Jennifer Pascua, though the hosts have changed over the years. It was announced on July 24, 2020, that My West Michigan would take a hiatus due to COVID-19-related restrictions that made the live interaction the show is known for impossible. The station plans to bring the show back as soon as it is safe to do so.

News operation

WZZM presently broadcasts 34½ hours of locally produced newscasts each week (5½ hours each weekday, three hours on Saturdays and four hours on Sundays). The station maintains partnerships with two formerly co-owned newspapers in the market, the Grand Haven Tribune and The Daily News in Greenville, to provide weather forecasts. Chief meteorologist George Lessens writes a weekly column for Advance Newspapers; as of February 2006 it includes the forecast for the upcoming week, where it originally featured a review of the previous week's weather conditions. From 1969 until the late 1990s, the station's newscasts were branded as Eyewitness News; the branding was then changed to WZZM 13 News. In 1971, WZZM became the first station in West Michigan to use a weather radar, which was upgraded in 1974 to a computerized color version. In 1993, WZZM debuted a half-hour news program called 5:30 Edition, which included soft news features in addition to news headlines. Many of its feature segments were phased out, and the program became a standard newscast by 1997. WZZM started expanded coverage of high school football in 1995 with the debut of the weekly seasonal highlight program Friday Night Football; a few years later, the program was renamed 13 On Your Sidelines. On June 7, 2009, the station suspended its weekend morning newscasts (which had debuted in 2006) due to economic conditions; on March 5, 2011, WZZM resumed the Saturday and Sunday morning newscasts after a two-year absence, airing for two hours from 6 to 8 a.m. on both days. The station now also offers a one-hour 9 a.m. newscast on Sunday mornings after Good Morning America. In late 2009, WZZM became the third television station in West Michigan to begin broadcasting its local newscasts in widescreen standard definition (WWMT was the last major station in West Michigan with 4:3 standard-definition newscasts until April 16, 2011, when it became the second station in the market to upgrade to full high-definition newscasts). In June 2010, WZZM rehired Brent Ashcroft, who had left the station twelve years prior to become sports director at Fox affiliate WXMI (channel 17). On December 3, 2011, WZZM became the fourth and final television station in West Michigan to begin broadcasting its local newscasts in high definition.
WZZM-TV focuses its newscasts on the northern half of the market (Grand Rapids and Muskegon), with a secondary emphasis on Kalamazoo and Battle Creek. On September 8, 2014, WZZM began airing news at 5 p.m.; the station now has local news from 5 to 6:30 p.m. In March 2018, the station rebranded from "WZZM 13" to "13 On Your Side." At the same time, it switched from the older Gannett-styled graphics to the new Tegna 2018 graphics package. With the exception of My West Michigan, this branding was carried over to all newscasts.

Weather

On Target Forecast

For its "On Target Forecast", WZZM's team of meteorologists compares the previous day's forecast against the actual weather of the current day. If the day's weather matches the previous day's forecast, a graphic showing a bullseye is displayed on a calendar for that date, as the forecast was "on target". The calendar shows results going back through the last 31 days. If the temperature was within three degrees (the "Three Degree Guarantee"), the date gets a check mark on the calendar; if the forecast temperature was more than three degrees off, it is counted as a miss and gets an "X".

Weatherball

The original weatherball was constructed in 1967 and perched on top of the Michigan National Bank building in downtown Grand Rapids. The colors it displayed were representative of the forthcoming weather. A poem was written about the weatherball's colors:

Weatherball red, warmer weather ahead.
Weatherball blue, cooler weather in view.
Weatherball green, no change foreseen.
Colors blinking bright, rain or snow in sight.

However, due to questions about its stability, it was removed in 1987, after twenty years of existence. WZZM located the weatherball, which had resided in a Kalamazoo junkyard since its removal, and purchased it in 1999. In 2002, plans were announced to refurbish the stainless-steel ball and to add new neon lights. The weatherball was perched on a monopole at WZZM's studios and was lit on May 7, 2003. It is visible from both Interstate 96 and US 131, the two major freeways in the area. Shortly after the reintroduction of the WZZM 13 Weatherball, a contest was held in which viewers submitted video recordings of songs to coincide with its meanings. The winning song appeared in a new commercial that aired to inform viewers of the significance of the colors. The winners were Dale Ray Schumaker and Allison Rae Schumaker of Holland, with their jingle "Know Before You Go". On June 5, 2008, the Weatherball was struck by lightning for the first time in its history. The lightning scrambled the Weatherball's electronic components, causing it to glow in a rainbow of colors, and it had to be turned off temporarily for repairs. The station also has a costumed character mascot of the Weatherball named "Blinkie". A similar weatherball is located on the Citizens Bank building in Flint, and Tegna sister station and fellow ABC affiliate KXTV in Sacramento, California, also owns a weather beacon. In February 2015, the Weatherball was given its own Twitter account, which takes an overall sarcastic tone about area weather, light news, and sporting events.

Weather Chaser

The Weather Chaser was introduced in 2001; it was a mobile version of the in-studio weather office, capable of live broadcasts from anywhere in the broadcast area.
During severe weather, the meteorologist using the Chaser was able to track and report storm conditions on location. After not being seen for a number of months, it was spotted being used in August 2006 as a live-shot vehicle at the Unity Christian Music Festival in Muskegon, Michigan. The Weather Chaser has since been transformed back into a live truck. A new, SUV-based Weather Chaser was introduced in 2011; it was used on May 12, 2011, at Mary A. White Elementary School in Grand Haven, Michigan, during a visit by chief meteorologist George Lessens to help the students launch a weather balloon.

Weather Deck

The "Weather Deck" is a deck set up outside the station's Walker studios for use during weather forecasts on WZZM's newscasts, similar to the outdoor weather setups seen on other Tegna-owned stations. Most weather reports are done outside, except during conditions, such as severe weather, that make it unsafe for the meteorologist to be outdoors. The Weather Deck was introduced in 1999; from 1995 to 1999, the evening meteorologist reported from the station's parking lot. In the spring of 2009, WZZM stopped using the Weather Deck during its newscasts, as lighting for the deck had become too expensive for the station to maintain, and weather reports were instead done from inside the main news studio. Regular weather segments resumed from the Weather Deck at some point during 2010. The noon newscast occasionally features a "Weather Deck Guest" live interview segment.

Awards

Over the years, WZZM has received numerous awards for journalistic excellence. Some of these include:

Emmy Award, "Sickle Cell Anemia: Paradox of Neglect" Station Award, 1971.
Michigan Associated Press, "Sickle Cell Anemia: Paradox of Neglect" Best Documentary, 1971.
United Press International "Michigan News Station of the Year", 1980 to 1985.
Michigan Association of Broadcasters "Best Newscast" and "Best Coverage of Spot News" awards, 1998.
Michigan Association of Broadcasters "Station of the Year" award, 2002.
Michigan Television News Photographers Association "Station of the Year" award, 2002.
NWS Weather-Ready National Ambassador of Excellence, designated in […]
TNT (Trinitrotoluene)
PETN (Pentaerythritol tetranitrate)
RDX
Powdered aluminium.

This is only a partial list; there were many others. Many of these compositions are now obsolete and only encountered in legacy munitions and unexploded ordnance. Two nuclear explosives, containing mixtures of uranium and plutonium, respectively, were also used at the bombings of Hiroshima and Nagasaki.
and blocked calls), and the average call-holding time (for successful calls), h, and then estimate Eo using the formula Eo = λh. For a situation where the traffic to be handled is completely new traffic, the only choice is to try to model expected user behavior. For example, one could estimate the active user population, N, the expected level of use, U (number of calls/transactions per user per day), the busy-hour concentration factor, C (the proportion of daily activity that will fall in the busy hour), and the average holding time/service time, h (expressed in minutes). A projection of busy-hour offered traffic would then be Eo = (N × U × C / 60) × h erlangs. (The division by 60 translates the busy-hour call/transaction arrival rate into a per-minute value, to match the units in which h is expressed.)

Erlang B formula

The Erlang B formula (or Erlang-B with a hyphen), also known as the Erlang loss formula, describes the probability of call losses for a group of identical parallel resources (telephone lines, circuits, traffic channels, or equivalent), sometimes referred to as an M/M/c/c queue.
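As a worked example of this projection (the figures are assumptions chosen for illustration, not taken from the source): with N = 10,000 users, U = 3 calls per user per day, C = 0.15 and h = 2.5 minutes, the busy hour contains 10,000 × 3 × 0.15 = 4,500 call attempts, or 75 per minute, so Eo = (4,500 / 60) × 2.5 = 187.5 erlangs of offered traffic.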
It is, for example, used to dimension a telephone network's links. The formula was derived by Agner Krarup Erlang and is not limited to telephone networks, since it describes a probability in a queuing system (albeit a special case with a number of servers but no queueing space for incoming calls to wait for a free server). Hence, the formula is also used in certain inventory systems with lost sales. The formula applies under the condition that an unsuccessful call, made while the line is busy, is neither queued nor retried, but instead vanishes forever. It is assumed that call attempts arrive following a Poisson process, so call arrival instants are independent. Further, it is assumed that the message lengths (holding times) are exponentially distributed (a Markovian system), although the formula turns out to apply under general holding-time distributions. The Erlang B formula assumes an infinite population of sources (such as telephone subscribers), which jointly offer traffic to N servers (such as telephone lines). The rate λ, expressing the frequency at which new calls arrive (birth rate, traffic intensity, etc.), is constant and does not depend on the number of active sources, because the total number of sources is assumed to be infinite. The Erlang B formula calculates the blocking probability of a buffer-less loss system, in which a request that is not served immediately is aborted, so that no requests become queued. Blocking occurs when a new request arrives at a time when all available servers are busy. The formula also assumes that blocked traffic is cleared and does not return. The formula provides the GoS (grade of service), which is the probability Pb that a new call arriving at the resource group is rejected because all resources (servers, lines, circuits) are busy: B(E, m), where E is the total offered traffic in erlang, offered to m identical parallel resources (servers, communication channels, traffic lanes):

Pb = B(E, m) = (E^m / m!) / (Σ_{i=0}^{m} E^i / i!)

where:
Pb = B(E, m) is the probability of blocking,
m is the number of identical parallel resources (servers, telephone lines, etc.),
E = λh is the normalised ingress load (offered traffic stated in erlang).

Note: The erlang is a dimensionless load unit calculated as the mean arrival rate, λ, multiplied by the mean call holding time, h. See Little's law: the erlang unit has to be dimensionless for Little's law to be dimensionally sound. This may be expressed recursively as follows, in a form that is used to simplify the calculation of tables of the Erlang B formula:

B(E, 0) = 1
B(E, j) = (E · B(E, j − 1)) / (E · B(E, j − 1) + j), for j = 1, 2, …, m.

Typically, instead of B(E, m), the inverse 1/B(E, m) is calculated in numerical computation in order to ensure numerical stability:

1/B(E, 0) = 1
1/B(E, j) = 1 + (j / E) · (1/B(E, j − 1)), for j = 1, 2, …, m.

Function ErlangB (E As Double, m As Integer) As Double
    ' Computes the Erlang B blocking probability B(E, m) using the
    ' numerically stable inverse recursion shown above.
    Dim InvB As Double
    Dim j As Integer
    InvB = 1.0
    For j = 1 To m
        InvB = 1.0 + InvB * j / E   ' 1/B(E, j) = 1 + (j/E) * 1/B(E, j-1)
    Next j
    ErlangB = 1.0 / InvB
End Function

The Erlang B formula is decreasing and convex in m. It requires that call arrivals can be modeled by a Poisson process, which is not always a good match, but it is valid for any statistical distribution of call holding times with finite mean. It applies to traffic transmission systems that do not buffer traffic. More modern examples than POTS in which Erlang B is still applicable are optical burst switching (OBS) and several current approaches to optical packet switching (OPS). Erlang B was developed as a trunk-sizing tool for telephone networks with holding times in the minutes range, but, being a mathematical equation, it applies on any time scale.
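A brief usage sketch follows; the enclosing Sub and the chosen figures are illustrative assumptions, not from the source. Standard Erlang B tables put roughly 20.3 erlangs at 1% blocking on 30 trunks, so 20 erlangs offered to 30 lines should yield a blocking probability just under 0.01.

Sub ErlangBExample()
    Dim Pb As Double
    Pb = ErlangB(20.0, 30)   ' 20 erlangs offered to 30 parallel lines
    Debug.Print "Blocking probability: "; Pb
End Sub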
Extended Erlang B

Extended Erlang B differs from the classic Erlang B assumptions by allowing for a proportion of blocked callers to try again, causing an increase in offered traffic from the initial baseline level. It is an iterative calculation rather than a formula, and it adds an extra parameter, the recall factor Rf, which defines the proportion of blocked calls that are retried. The steps in the process are as follows. It starts at iteration k = 0 with a known initial baseline level of traffic E0, which is successively adjusted to calculate a sequence of new offered traffic values Ek+1, each of which accounts for the recalls arising from the previously calculated offered traffic Ek.

1. Calculate the probability of a caller being blocked on their first attempt, Pb = B(Ek, m), as above for Erlang B.
2. Calculate the probable number of blocked calls, Be = Ek · Pb.
3. Calculate the number of recalls, R, assuming a fixed recall factor, Rf: R = Be · Rf.
4. Calculate the new offered traffic Ek+1 = E0 + R, where E0 is the initial (baseline) level of traffic.
5. Return to step 1, substituting Ek+1 for Ek, and iterate until a stable value of E is obtained.

Once a satisfactory value of E has been found, the blocking probability and the recall factor can be used to calculate the probability that all of a caller's attempts are lost, not just their first call but also any subsequent retries.

Erlang C formula

The Erlang C formula expresses the probability that an arriving customer will need to queue (as opposed to being served immediately). Just as the Erlang B formula, Erlang C assumes an infinite population of sources, which jointly offer a traffic of E erlangs to m servers. However, if all the servers are busy when a request arrives from a source, the request is queued. An unlimited number of requests may be held in the queue in this way simultaneously. The formula calculates the probability of queuing offered traffic, assuming that blocked calls stay in the system until they can be handled. It is used to determine the number of agents or customer service representatives needed to staff a call centre for a specified desired probability of queuing. However, the Erlang C formula assumes that callers never hang up while in queue, which leads it to predict that more agents should be used than are really needed to maintain a desired service level:

PW = ((E^m / m!) · (m / (m − E))) / (Σ_{i=0}^{m−1} E^i / i! + (E^m / m!) · (m / (m − E)))

where:
E is the total traffic offered, in units of erlangs,
m is the number of servers,
PW is the probability that a customer has to wait for service.

It is assumed that call arrivals can be modeled by a Poisson process and that call holding times are described by an exponential distribution.

Limitations of the Erlang formula

When Erlang developed the Erlang B and Erlang C traffic equations, he based them on a set of assumptions. These assumptions are accurate under most conditions; however, in the event of extremely high traffic congestion, Erlang's equations fail to accurately predict the correct number of circuits required because of re-entrant traffic. This is termed a high-loss system, where congestion breeds further congestion at peak times. In such cases, it is first necessary for many additional circuits to be made available so that the high loss can be alleviated. Once this action has been taken, congestion will return to reasonable levels.
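The Extended Erlang B iteration described above lends itself to a short routine. The following is a minimal sketch, reusing the ErlangB function from the Erlang B section; the function name, the convergence tolerance and the iteration cap are illustrative assumptions, not part of the source.

Function ExtendedErlangB(E0 As Double, m As Integer, Rf As Double) As Double
    ' Iterates steps 1-5 above: offered traffic is grown by the recalls
    ' generated from blocked calls until it stabilises.
    Dim E As Double, ENext As Double, Pb As Double
    Dim iter As Integer
    E = E0
    For iter = 1 To 100                            ' iteration cap (assumption)
        Pb = ErlangB(E, m)                         ' step 1: blocking probability
        ENext = E0 + E * Pb * Rf                   ' steps 2-4: baseline plus recalls
        If Abs(ENext - E) < 0.000001 Then Exit For ' step 5: stop when stable
        E = ENext
    Next iter
    ExtendedErlangB = ErlangB(ENext, m)            ' blocking at the converged traffic
End Function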
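A hedged sketch of the Erlang C calculation is given below. Rather than evaluating the sum directly, it uses the standard identity relating Erlang C to Erlang B, PW = B / (1 − (E/m) · (1 − B)); the function name is an assumption for illustration.

Function ErlangC(E As Double, m As Integer) As Double
    ' Probability that an arriving customer must wait (Erlang C), derived
    ' from the Erlang B result via PW = B / (1 - (E/m) * (1 - B)).
    ' Only meaningful for E < m; otherwise the queue grows without bound.
    Dim B As Double
    Dim rho As Double
    B = ErlangB(E, m)
    rho = E / m
    ErlangC = B / (1.0 - rho * (1.0 - B))
End Function

For call-centre staffing, one would increase m until ErlangC(E, m) falls below the desired probability of queuing.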
when a flanker or slot receiver, who is supposed to line up behind the line of scrimmage, instead lines up on the line of scrimmage between the offensive line and a split end. In most cases where a pass is caught by an ineligible receiver, it is because the quarterback was under pressure and threw it to an offensive lineman out of desperation. Eligible receivers must wear certain uniform numbers so that the officials can more easily distinguish between eligible and ineligible receivers. In the NFL, running backs must wear numbers 20 to 49, tight ends must wear numbers 80 to 89 (or 40 to 49 if the numbers 80 to 89 have been exhausted), and wide receivers must wear numbers 10 to 19 or 80 to 89. In the CFL, ineligible receivers must wear numbers 50 to 69; all other numbers (including 0 and 00) may be worn by eligible receivers. A player who is not wearing a number that corresponds to an eligible receiver is ineligible even if he lines up in an eligible position. However, a player who reports to the referee that he intends to be eligible in the following play is allowed to line up and act as an eligible receiver. An example of this was a 1985 NFL game in which William Perry, wearing number 72 and normally a defensive lineman, was made an eligible receiver on an offensive play and successfully caught a touchdown pass. A more recent, and more commonly used, example was former New England Patriots linebacker Mike Vrabel lining up as a tight end in goal-line situations. In the 2018 season, George Fant also lined up at the tight end position for the Seattle Seahawks due to injuries to the starting tight ends Ed Dickson and Will Dissly. In the 2019 season, the Atlanta Falcons declared right tackle Ty Sambrailo eligible on many plays before throwing the ball to him for a 35-yard touchdown against the Tampa Bay Buccaneers. Before the snap of the ball, in the American game, backfield players may only move parallel to the line of scrimmage, only one back may be in motion at any given time, and if forward motion has occurred, the back must be still for a full second before the snap. The receiver may be in motion laterally or away from the line of scrimmage at the snap. A breach of this rule results in a penalty for illegal procedure (five yards). In the Canadian game, however, eligible receivers may move in any direction before the snap, any number may be in motion at any one time, and there is no need to be motionless before the snap. The rules on eligible receivers only apply to forward passes; any player may legally catch a backwards or lateral pass. In the American game, once the play has begun, if an ineligible receiver is downfield when a forward pass is thrown, a foul of "ineligible receiver downfield" (resulting in a penalty of five yards, but no loss of down) is called. Each league has slightly different rules regarding who is considered an eligible receiver.

College football

The NCAA rulebook defines eligible receivers for college football in Rule 7, Section 3, Article 3. The determining factors are the player's position on the field at the snap and their jersey number. Specifically, any players on offense wearing numbers between 50 and 79 are always ineligible.
All defensive players are eligible receivers, and offensive players who are not wearing an ineligible number are eligible receivers if they meet one of the following three criteria:

Player is at either end of the group of players on the line of scrimmage (wide receivers and/or tight ends).
Player is lined up at least one yard behind the line of scrimmage (running backs, fullbacks, slot receivers, etc.).
Player is positioned to receive a hand-to-hand snap from the center (almost always the quarterback).

Players may only wear eligible numbers at an ineligible position when it is obvious that a punt or field goal is to be attempted. If a player is to change between eligible and ineligible positions, they must physically change jersey numbers to reflect the position. A receiver loses his eligibility by leaving the field of play unless he was forced out by a defensive player and immediately attempts to get back inbounds (Rule 7-3-4). All players on the field become eligible as soon as the ball is touched by a defensive player or an official during play (Rule 7-3-5).

Professional football

In both American and Canadian professional football, every player on the defensive team is considered eligible. The offensive team must have at least seven players lined up on the line of scrimmage. Of the players on the line of scrimmage, only the two players on the ends of the line are eligible receivers. The remaining players are in the backfield (four in American football, five in Canadian football), including the quarterback. These backfield players are also eligible receivers. In the National Football League (NFL), a quarterback who takes his stance behind center as a T-formation quarterback is not eligible unless, before the ball is snapped, he legally moves to a position at least one yard behind the line of scrimmage or on the end of the line, and is stationary in that position for at least one second before the snap; he is nonetheless not counted toward the seven men required on the line of scrimmage. If, for example, eight men line up on the line of scrimmage, the team loses an eligible receiver. This can often happen when a flanker or slot receiver lines up on the line of scrimmage instead of behind it.
Illiteracy, which was 90–95% in rural areas in 1939, fell to 30% by 1950, and by 1985 it was equal to that of a Western country. By 1949, the US and British intelligence organisations were working with the former King Zog and the mountain men of his personal guard. They recruited Albanian refugees and émigrés from Egypt, Italy and Greece, trained them in Cyprus, Malta and the Federal Republic of Germany (West Germany), and infiltrated them into Albania. Guerrilla units entered Albania in 1950 and 1952, but they were killed or captured by Albanian security forces. Kim Philby, a Soviet double agent working as a liaison officer between MI6 and the CIA, had leaked details of the infiltration plan to Moscow, and the security breach claimed the lives of about 300 infiltrators. On 19 February 1951, a bombing occurred at the Soviet embassy in Tirana, after which 23 accused intellectuals were arrested and put in prison. One of them, Jonuz Kaceli, was killed by Prime Minister Mehmet Shehu during interrogation. Subsequently, the 22 others were executed without trial under Hoxha's orders. They were later found to be innocent. The State University of Tirana was established in 1957, the first of its kind in Albania. The medieval Gjakmarrja (blood feud) was banned. Malaria, the most widespread disease, was successfully fought through advances in health care, the use of DDT, and the draining of swampland. From 1965 to 1985, no cases of malaria were reported, whereas previously Albania had the greatest number of infected patients in Europe. No cases of syphilis had been recorded for 30 years. In 1938, there were 1.1 physicians per 10,000 people and only one hospital bed per 1,000 people. In 1950, while the number of physicians had not increased, there were four times as many hospital beds per capita, and health expenditures had risen to 5% of the budget, up from 1% before the war.

Relations with Yugoslavia

At this point, relations with Yugoslavia had begun to change. The roots of the change lay in the Second Plenary Session of the Communist Party of Albania on 20 October 1944. The Session considered the problems that the post-independence Albanian government would face. However, the Yugoslav delegation led by Velimir Stoinić accused the party of "sectarianism and opportunism" and blamed Hoxha for these errors. He also stressed the view that the Yugoslav Communist partisans had spearheaded the Albanian partisan movement. Anti-Yugoslav members of the Albanian Communist Party began to think that this was a plot by Tito, who intended to destabilize the Party. Koçi Xoxe, Sejfulla Malëshova and others who supported Yugoslavia were looked upon with deep suspicion. Tito's position on Albania was that it was too weak to stand on its own and that it would do better as a part of Yugoslavia. Hoxha alleged that Tito had made it his goal to bring Albania into Yugoslavia, firstly by creating the Treaty of Friendship, Co-operation and Mutual Aid in 1946. In time, Albania came to feel that the treaty was heavily slanted towards Yugoslav interests, much like the Italian agreements with Albania under Zog that had made the nation dependent upon Italy. The first issue was that the Albanian lek was revalued in terms of the Yugoslav dinar as a customs union was formed, and Albania's economic plan came to be decided more by Yugoslavia. Albanian economists H. Banja and V.
Toçi stated that the relationship between Albania and Yugoslavia during this period was exploitative and that it constituted an attempt by Yugoslavia to make the Albanian economy an "appendage" of the Yugoslav economy. Hoxha then began to accuse Yugoslavia of misconduct: […] Stalin advised Hoxha that Yugoslavia was attempting to annex Albania: "We did not know that the Yugoslavs, under the pretext of 'defending' your country against an attack from the Greek fascists, wanted to bring units of their army into the PRA [People's Republic of Albania]. They tried to do this in a very secretive manner. In reality, their aim in this direction was utterly hostile, for they intended to overturn the situation in Albania." By June 1947, the Central Committee of Yugoslavia had begun publicly condemning Hoxha, accusing him of taking an individualistic and anti-Marxist line. When Albania responded by making agreements with the Soviet Union to purchase a supply of agricultural machinery, Yugoslavia said that Albania could not enter into any agreements with other countries without Yugoslav approval. Koçi Xoxe tried to stop Hoxha from improving relations with Bulgaria, reasoning that Albania would be more stable with one trading partner rather than with many. Nako Spiru, an anti-Yugoslav member of the Party, condemned Xoxe, and vice versa. With no one coming to his defense, Spiru viewed the situation as hopeless and feared that Yugoslav domination of his nation was imminent; he committed suicide in November. At the Eighth Plenum of the Central Committee of the Party, which lasted from 26 February to 8 March 1948, Xoxe was implicated in a plot to isolate Hoxha and consolidate his own power. He accused Hoxha of being responsible for the decline in relations with Yugoslavia and stated that a Soviet military mission should be expelled in favor of a Yugoslav counterpart. Hoxha remained firm, and his support did not decline. When Yugoslavia publicly broke with the Soviet Union, Hoxha's support base grew stronger. Then, on 1 July 1948, Tirana called on all Yugoslav technical advisors to leave the country and unilaterally declared all treaties and agreements between the two countries null and void. Xoxe was expelled from the party, and on 13 June 1949 he was executed by hanging.

Relations with the Soviet Union

After the break with Yugoslavia, Hoxha aligned himself with the Soviet Union. From 1948 to 1960, $200 million in Soviet aid was given to Albania for technical and infrastructural expansion. Albania was admitted to Comecon on 22 February 1949 and served as a pro-Soviet force on the Adriatic. A Soviet submarine base was built on the Albanian island of Sazan near Vlorë, posing a hypothetical threat to the U.S. Sixth Fleet in the Mediterranean. Relations with the Soviet Union remained close until the death of Stalin in March 1953, which was followed by 14 days of national mourning in Albania – more than in the Soviet Union itself. Hoxha assembled the population of Tirana in the capital's largest square, which featured a Stalin statue, and requested that they kneel and take a 2,000-word oath of "eternal fidelity" and "gratitude" to their "beloved father" and "great liberator". Under Nikita Khrushchev, Stalin's eventual successor, aid was reduced and Albania was encouraged to adopt Khrushchev's specialisation policy.
Under it, Albania would develop its agricultural output in order to supply the Soviet Union and the other Warsaw Pact countries while they developed products of their own, which would, in theory, strengthen the Warsaw Pact. However, this also meant that Albanian industrial development, which Hoxha stressed heavily, would be hindered. In May–June 1955, Nikolai Bulganin and Anastas Mikoyan visited Yugoslavia, while Khrushchev renounced the expulsion of Yugoslavia from the Communist bloc. Khrushchev also began making references to Palmiro Togliatti's theory of polycentrism. Hoxha had not been consulted on this and was offended. Yugoslavia began asking Hoxha to rehabilitate the image of Koçi Xoxe, which Hoxha steadfastly rejected. In 1956, at the Twentieth Party Congress of the Communist Party of the Soviet Union, Khrushchev condemned the cult of personality that had been built up around Joseph Stalin and denounced his excesses. Khrushchev then announced the theory of peaceful coexistence, which greatly angered the Stalinist Hoxha. The Institute of Marxist–Leninist Studies, led by Hoxha's wife Nexhmije, quoted Vladimir Lenin: "The fundamental principle of the foreign policy of a socialist country and of a Communist party is proletarian internationalism; not peaceful coexistence." Hoxha now took a more active stand against perceived revisionism. Unity within the Albanian Party of Labour began to decline as well: a special delegate meeting held in Tirana in April 1956, attended by 450 delegates, produced unexpected results. The delegates "criticized the conditions in the party, the negative attitude toward the masses, the absence of party and socialist democracy, the economic policy of the leadership, etc." while also calling for discussions on the cult of personality and the Twentieth Party Congress.

Movement towards China and Maoism

In 1956, Hoxha called for a resolution confirming the existing leadership of the Party. The resolution was accepted, and all of the delegates who had spoken against it were expelled from the party and imprisoned. Hoxha claimed that Yugoslavia had attempted to overthrow the leadership of Albania. This incident increased Hoxha's power, effectively making Khrushchev-style reforms impossible there. In the same year, Hoxha travelled to the People's Republic of China, then embroiled in the Sino-Soviet split, and met Mao Zedong. Chinese aid to Albania rose sharply over the next two years. In an effort to keep Albania in the Soviet sphere, Soviet aid was increased, but relations with the Soviet Union remained at the same level until 1960, when Khrushchev met Sofoklis Venizelos, a liberal Greek politician. Khrushchev sympathised with the concept of an autonomous Greek North Epirus, and he hoped to use Greek claims to keep the Albanian leadership in line. Hoxha reacted by sending only Hysni Kapo, a member of the Albanian Political Bureau, to the Third Congress of the Romanian Workers' Party in Bucharest, an event Communist heads of state were normally expected to attend. As relations between the two countries continued to deteriorate in the course of the meeting, Khrushchev said: […]

Friction with the Soviet Union

Relations with the Soviet Union declined rapidly. A hardline policy was adopted, and the Soviets reduced grain shipments at a time when Albania needed them due to the possibility of a flood-induced famine. In July 1960, a plot to overthrow the Albanian government was discovered.
It was to be organised by the Soviet-trained Rear Admiral Teme Sejko. After this, two pro-Soviet members of the Party, Liri Belishova and Koço Tashko, were expelled. In August, the Party's Central Committee sent a protest to the Central Committee of the CPSU over the presence of an anti-Albanian Soviet ambassador in Tirana. The Fourth Congress of the Party, held from 13 to 20 February 1961, was the last meeting in Albania that the Soviet Union or the other Eastern European nations attended. During the congress, Mehmet Shehu stated that while many members of the Party were accused of tyranny, this was a baseless charge, and that, unlike the Soviet Union, Albania was led by genuine Marxists. The Soviet Union retaliated by threatening "dire consequences" if the condemnations were not retracted. Days later, Khrushchev and Antonín Novotný, President of Czechoslovakia, threatened to cut off economic aid. In March, Albania was not invited to attend the meeting of the Warsaw Pact nations, and in April all Soviet technicians were withdrawn from Albania. In May, nearly all Soviet troops at the Soviet submarine base were withdrawn as well. On 7 November 1961, Hoxha made a speech in which he called Khrushchev "a revisionist, an anti-Marxist and a defeatist". Hoxha portrayed Stalin as the last Communist leader of the Soviet Union and alluded to Albania's independence. By 11 November, the USSR and every other Warsaw Pact nation had broken diplomatic relations with Albania, and Albania was unofficially excluded from the Warsaw Pact and Comecon. The Soviet Union also attempted to claim control of the submarine base, whereupon the Albanian Party passed a law prohibiting any other nation from owning an Albanian port. The Soviet–Albanian split was now complete.

Later rule (1965–1985)

As Hoxha's leadership continued, he took an increasingly theoretical stance. He wrote criticisms based on theory and on the current events of the time; his most notable criticisms were his condemnations of Maoism after 1978. A major achievement under Hoxha was the advancement of women's rights. Albania had been one of the most, if not the most, patriarchal countries in Europe. The ancient Code of Lekë, which regulated the status of women, states: "A woman is known as a sack, made to endure as long as she lives in her husband's house." Women were not allowed to inherit anything from their parents, and discrimination was made even in the case of the murder of a pregnant woman: […] Women were forbidden from obtaining a divorce, and the wife's parents were obliged to return a runaway daughter to her husband or else suffer shame that could even result in a generations-long blood feud. During World War II, the Albanian Communists encouraged women to join the partisans, and following the war women were encouraged to take up menial jobs, as the education necessary for higher-level work was out of most women's reach. In 1938, 4% worked in the various sectors of the economy; by 1970 this number had risen to 38%, and by 1982 to 46%. During the Cultural and Ideological Revolution (discussed below), women were encouraged to take up all jobs, including government posts: by 1985, 40.7% of the People's Councils and 30.4% of the People's Assembly were made up of women, and two women sat on the Central Committee. In 1978, 15.1 times as many females attended eight-year schools as had done so in 1938, and 175.7 times as many attended secondary schools. By 1978, 101.9 times as many women attended higher schools as in 1957.
Hoxha said of women's rights in 1967: "The entire party and country should hurl into the fire and break the neck of anyone who dared trample underfoot the sacred edict of the party on the defense of women's rights." In 1969, direct taxation was abolished, and during this period the quality of schooling and health care continued to improve. An electrification campaign was begun in 1960, with the entire nation expected to have electricity by 1985. Instead, this was achieved on 25 October 1970, making Albania the first nation in the world with complete electrification. During the Cultural and Ideological Revolution of 1967–1968, the military moved away from traditional Communist army tactics and began to adhere to the Maoist strategy known as people's war, which included the abolition of military ranks; the ranks were not fully restored until 1991. Mehmet Shehu said of the country's health service in 1979: […] Hoxha's legacy also included a complex of 173,371 one-man concrete bunkers across a country of 3 million inhabitants, to act as lookouts and gun emplacements, along with chemical weapons. The bunkers were built to be strong and movable, with the intention that they could easily be placed in a hole by a crane or a helicopter. The types of bunkers range from machine-gun pillboxes and beach bunkers to underground naval facilities, and even to air force mountain and underground bunkers. Hoxha's internal policies were true to Stalin's paradigm, which he admired, and the personality cult developed in the 1970s and organised around him by the Party bore a striking resemblance to that of Stalin. At times it reached an intensity as extreme as the personality cult of Kim Il-sung (which Hoxha condemned), with Hoxha portrayed as a genius commenting on virtually all facets of life, from culture to economics to military matters. Each schoolbook required one or more quotations from him on the subjects being studied. The Party honored him with titles such as Supreme Comrade, Sole Force and Great Teacher. He adopted a different type of military salute for the People's Army, known as the Hoxhaist salute, in which soldiers curl the right fist and raise it to shoulder level; it replaced the Zogist salute, which had been used by the Royal Albanian Army for many years. Hoxha's governance was also distinguished by his encouragement of a high-birthrate policy. For instance, a woman who bore an above-average number of children would be given the government award of Heroine Mother (Albanian: Nënë Heroinë), along with cash rewards. Abortion was essentially restricted in order to encourage high birth rates, except where the birth posed a danger to the mother's life, though it was not completely banned; cases were decided by district medical commissions. As a result, the population of Albania tripled, from 1 million in 1944 to around 3 million in 1985.

Relations with China

In Albania's Third Five-Year Plan, China promised a loan of $125 million to build twenty-five chemical, electrical and metallurgical plants in accordance with the Plan. However, the nation found this difficult, as Albania's poor relations with its neighbours and the distance between the two countries complicated matters. Unlike Yugoslavia or the USSR, China had less economic influence on Albania during Hoxha's leadership; in the previous fifteen years (1946–1961), at least 50% of the economy had been tied to foreign commerce.
By the time the 1976 Constitution was promulgated, Albania had become mostly self-sufficient, though it lacked modern technology. Ideologically, Hoxha found Mao's initial views to be in line with Marxism–Leninism, owing to Mao's condemnation of Nikita Khrushchev's alleged revisionism and of Yugoslavia. Aid given by China was interest-free and did not have to be repaid until Albania could afford to do so. China never intervened in Albania's economic output, and Chinese technicians worked for the same wages as Albanian workers. Albanian newspapers were reprinted in Chinese newspapers and read on Chinese radio, and Albania led the movement to give the People's Republic of China a seat on the UN Security Council. During this period, Albania became the second-largest producer of chromium in the world, which was considered important to the country. Strategically, the Adriatic Sea was attractive to China, which hoped that more allies could be gained in Eastern Europe through Albania, a hope that failed. Zhou Enlai visited Albania in January 1964. On 9 January, "The 1964 Sino-Albanian Joint Statement" was signed in Tirana. The statement said of relations between socialist countries: […] Like Albania, China defended the "purity" of Marxism by attacking both US imperialism and "Soviet and Yugoslav revisionism" equally, as part of a "dual adversary" theory. Yugoslavia was viewed both as a "special detachment of U.S. imperialism" and as a "saboteur against world revolution." These views, however, began to change in China, which was one of the major issues Albania had with the alliance. Also, unlike Yugoslavia and the Soviet Union, the Sino-Albanian alliance lacked "... an organisational structure for regular consultations and policy coordination, and it was also characterized by an informal relationship which was conducted on an ad hoc basis." Mao made a speech on 3 November 1966 in which he claimed that Albania was the only Marxist–Leninist state in Europe, and in the same speech he stated that "an attack on Albania will have to reckon with great People's Republic of China. If the U.S. imperialists, the modern Soviet revisionists or any of their lackeys dare to touch Albania in the slightest, nothing lies ahead for them but a complete, shameful and memorable defeat." Hoxha likewise stated that "You may rest assured, comrades, that come what may in the world at large, our two parties and our two peoples will certainly remain together. They will fight together and they will win together."

Shift in Chinese foreign policy after the Cultural Revolution

China entered a four-year period of relative diplomatic isolation following the Cultural Revolution, during which relations between China and Albania remained generally positive. On 20 August 1968, Albania condemned the Soviet invasion of Czechoslovakia, as well as the Brezhnev doctrine. Albania refused to send troops in support of the invasion and officially withdrew from the Warsaw Pact on 5 September. Relations with China began to deteriorate on 15 July 1971, when United States President Richard Nixon agreed to visit China to meet with Zhou Enlai. Hoxha felt betrayed by this, and the Central Committee of the PLA sent a letter to the Central Committee of the CCP on 6 August calling Nixon a "frenzied anti-Communist".
The letter stated: […] The result of this criticism was a message from the Chinese leadership in 1971 stating that Albania could not depend on an indefinite flow of further Chinese aid, and in 1972 Albania was advised to "curb its expectations about further Chinese contributions to its economic development". By 1972, Hoxha was writing in his diary Reflections on China that China was no longer a socialist country: […] And in 1973, he wrote that the Chinese leaders: […] In response, trade with Comecon countries (although trade with the Soviet Union was still blocked) and with Yugoslavia grew. Trade with Third World nations rose from $0.5 million in 1973 to $8.3 million in 1974, from 0.1% to 1.6% of the total. Following Mao's death on 9 September 1976, Hoxha remained optimistic about Sino-Albanian relations, but in August 1977 Hua Guofeng, the new leader of China, stated that Mao's Three Worlds Theory would become official foreign policy. Hoxha viewed this as a way for China to justify treating the U.S. as the "secondary enemy" while viewing the Soviet Union as the main one, thus allowing China to trade with the U.S. He stated that: […] From 30 August to 7 September 1977, Tito visited Beijing and was welcomed by the Chinese leadership. Following this, the PLA (Party of Labour of Albania) declared that China was now a revisionist state akin to the Soviet Union and Yugoslavia, and that Albania was the only Marxist–Leninist state on Earth. Hoxha stated: […] On 13 July 1978, China announced that it was cutting off all aid to Albania. For the first time in modern history, Albania had neither an ally nor a major trading partner.

Political repressions and emigration

Certain clauses in the 1976 constitution circumscribed the exercise of political liberties that the government interpreted as contrary to the established order. The government denied the population access to information other than that disseminated by the government-controlled media. Internally, the Sigurimi followed the repressive methods of the NKVD, MGB, KGB and the East German Stasi. At one point, every third Albanian had either been interrogated by the Sigurimi or incarcerated in a labour camp. The government imprisoned thousands in forced-labour camps or executed them for crimes such as alleged treachery or disrupting the proletarian dictatorship. After 1968, travel abroad was forbidden to all but those on official business. Western European culture was looked upon with deep suspicion, resulting in bans on any unauthorised foreign material and in arrests. Art was required to reflect the styles of socialist realism. […] but lost an Albanian state scholarship for neglecting his studies. He later went to Paris, where he presented himself to anti-Zogist immigrants as the brother-in-law of Bahri Omari. From 1935 to 1936, he was employed as a secretary at the Albanian consulate in Brussels. After returning to Albania, he worked as a contract teacher in the Gymnasium of Tirana. Hoxha taught French and morals at the Korça Liceum from 1937 to 1939 and also served as the caretaker of the school library. On 7 April 1939, Albania was invaded by Fascist Italy. The Italians established a puppet government, the Albanian Kingdom (1939–43), under Shefqet Vërlaci. At the end of 1939, Hoxha was transferred to the Gjirokastra Gymnasium, but he soon returned to Tirana. He was helped by his best friend, Esat Dishnica, who introduced Hoxha to Dishnica's cousin Ibrahim Biçakçiu. Hoxha started to sleep in Biçakçiu's tobacco factory "Flora", and after a while Dishnica opened a shop of the same name, where Hoxha began working.
He was a sympathiser of Korça's Communist Group.

Partisan life

On 8 November 1941, the Communist Party of Albania (renamed the Party of Labour of Albania in 1948) was founded. Hoxha was chosen from the "Korça group" by the two Yugoslav envoys as a Muslim representative among the seven members of the provisional Central Committee. The First Consultative Meeting of Activists of the Communist Party of Albania was held in Tirana from 8 to 11 April 1942, with Hoxha himself delivering the main report on 8 April 1942. In July 1942, Hoxha wrote "Call to the Albanian Peasantry", issued in the name of the Communist Party of Albania. The call sought to enlist support in Albania for the war against the fascists; peasants were encouraged to hoard their grain and to refuse to pay the taxes or livestock levies imposed by the government. After the September 1942 conference at Pezë, the National Liberation Movement was founded with the purpose of uniting anti-fascist Albanians, regardless of ideology or class. By March 1943, the first National Conference of the Communist Party had formally elected Hoxha First Secretary. During World War II, the Soviet Union's role in Albania was negligible. On 10 July 1943, the Albanian partisans were organised into regular units of companies, battalions and brigades, named the Albanian National Liberation Army. The organisation received military support from the British Special Operations Executive (SOE). A General Headquarters was created, with Spiro Moisiu as commander and Hoxha as political commissar. The Yugoslav Partisans had a much more practical role, helping to plan attacks and exchanging supplies, but communication between them and the Albanians was limited, and letters often arrived late, sometimes well after the National Liberation Army had agreed upon a plan without consultation with the Yugoslav partisans. Within Albania, repeated attempts were made during the war to remedy the communications difficulties facing partisan groups. In August 1943, a secret meeting, the Mukje Conference, was held between the anti-communist Balli Kombëtar (National Front) and the Communist Party of Albania. The result was an agreement to:

Unite in a single struggle against the fascist invader.
Cease all attacks between the two parties signing the agreement.
Form a joint operational staff to coordinate military actions within Albania.
Recognise that the democratically elected national liberation councils are the state power in Albania.
Recognise that the goal for the post-war era is an independent, democratic Albania where the people themselves will decide the form of government.
Recognise and respect the Atlantic Charter and the London and Washington treaties between the USSR, Great Britain and the US in connection with the question of Kosovo and Çamëria.
Be it resolved that the populations of Kosovo and Çamëria will themselves decide their future in accordance with their wishes.
Unite with any political group, whatever its beliefs, in a common military effort against the fascist invaders. However, the Communist Party of Albania will not collaborate with any group of the National Front that continues to maintain contacts with the fascist invaders. The Communist Party of Albania will unite with any group that used to have contacts with the fascist invaders but has now terminated those contacts and is willing to fight against the fascist invaders, provided those groups have not committed any crimes against the people.
To encourage the Balli Kombëtar to sign, the Greater Albania sections that included Kosovo (part of Yugoslavia) and Çamëria were made part of the agreement.

Disagreement with Yugoslav communists

A problem developed when the Yugoslav Communists disagreed with the goal of a Greater Albania and asked the Communists in Albania to withdraw from the agreement. According to Hoxha, Josip Broz Tito had not agreed that "Kosovo was Albanian" and held that Serbian opposition made the transfer an unwise option. After the Albanian Communists repudiated the Greater Albania agreement, the Balli Kombëtar condemned them, and the Communists in turn accused the Balli Kombëtar of siding with the Italians. The Balli Kombëtar, however, lacked support from the people. After judging the Communists an immediate threat, the Balli Kombëtar sided with Nazi Germany, fatally damaging its image among those fighting the fascists. The Communists quickly added to their ranks many of those disillusioned with the Balli Kombëtar and took centre stage in the fight for liberation. The Përmet National Congress, held during that time, called for a "new democratic Albania for the people". Although the monarchy was not formally abolished, Zog I of Albania was barred from returning to the country, which further increased the Communists' control. The Anti-Fascist Committee for National Liberation was founded, chaired by Hoxha. On 22 October 1944, after a meeting in Berat, the Committee became the Democratic Government of Albania, and Hoxha was chosen as interim Prime Minister. Tribunals presided over by Koçi Xoxe were set up to try alleged war criminals, who were designated "enemies of the people". After liberation on 29 November 1944, several Albanian partisan divisions crossed the border into German-occupied Yugoslavia, where they fought alongside Tito's partisans and the Soviet Red Army in a joint campaign that succeeded in driving out the last pockets of German resistance. Marshal Tito, during a Yugoslav conference in later years, thanked Hoxha for the assistance the Albanian partisans had given during the War for National Liberation (Lufta Nacionalçlirimtare). The Democratic Front, dominated by the Albanian Communist Party, succeeded the National Liberation Front in August 1945, and the first post-war election was held on 2 December that year. The Front was the only legal political organisation allowed to stand in the election, and the government reported that 93% of Albanians voted for it. On 11 January 1946, Zog was officially deposed and Albania was proclaimed the People's Republic of Albania (renamed the People's Socialist Republic of Albania in 1976). As First Secretary of the party, Hoxha was de facto head of state and the most powerful man in the country. Albanians celebrate their independence day on 28 November (the date on which they declared independence from the Ottoman Empire in 1912), while in the former People's Socialist Republic of Albania the national day was 29 November, the day in 1944 on which the country was liberated. Both days are currently national holidays.

Early leadership (1946–1965)

Hoxha declared himself a Marxist–Leninist and strongly admired Soviet leader Joseph Stalin. During 1945–1950, the government adopted policies and actions intended to consolidate power, including extrajudicial killings and executions that targeted and eliminated anti-communists. The Agrarian Reform Law was passed in August 1945.
It confiscated land from beys and large landowners, giving it without compensation to peasants. Before the law was passed, 52% of all land was owned by large landowners; after its passage the figure fell to 16%. Illiteracy, which stood at 90–95% in rural areas in 1939, fell to 30% by 1950, and by 1985 the rate was comparable to that of Western countries. By 1949, the US and British intelligence organisations were working with the former King Zog and the mountain men of his personal guard. They recruited Albanian refugees and émigrés from Egypt, Italy and Greece, trained them in Cyprus, Malta and the Federal Republic of Germany (West Germany), and infiltrated them into Albania. Guerrilla units entered Albania in 1950 and 1952, but they were killed or captured by Albanian security forces. Kim Philby, a Soviet double agent working as a liaison officer between MI6 and the CIA, had leaked details of the infiltration plan to Moscow, and the security breach claimed the lives of about 300 infiltrators. On 19 February 1951, a bombing occurred at the Soviet embassy in Tirana, after which 23 intellectuals accused of involvement were arrested and imprisoned. One of them, Jonuz Kaceli, was killed by Prime Minister Mehmet Shehu during interrogation. Subsequently, the 22 others were executed without trial on Hoxha's orders. They were later found to be innocent. The State University of Tirana, the first university in Albania, was established in 1957. The medieval Gjakmarrja (blood feud) was banned. Malaria, the most widespread disease, was successfully fought through advances in health care, the use of DDT, and the draining of swampland. From 1965 to 1985, no cases of malaria were reported, whereas previously Albania had had the greatest number of infected patients in Europe. No cases of syphilis had been recorded for 30 years. In 1938 Albania had 1.1 physicians per 10,000 inhabitants and only one hospital bed per 1,000 people. In 1950, while the number of physicians had not increased, there were four times as many hospital beds per capita, and health expenditure had risen to 5% of the budget, up from 1% before the war. Relations with Yugoslavia At this point, relations with Yugoslavia had begun to change. The roots of the change lay in the Second Plenary Session of the Communist Party of Albania, held on 20 October 1944. The Session considered the problems that the post-liberation Albanian government would face. However, the Yugoslav delegation led by Velimir Stoinić accused the party of "sectarianism and opportunism" and blamed Hoxha for these errors. Stoinić also stressed the view that the Yugoslav Communist partisans had spearheaded the Albanian partisan movement. Anti-Yugoslav members of the Albanian Communist Party began to see this as a plot by Tito to destabilize the Party. Koçi Xoxe, Sejfulla Malëshova and others who supported Yugoslavia were looked upon with deep suspicion. Tito's position on Albania was that it was too weak to stand on its own and that it would do better as part of Yugoslavia. Hoxha alleged that Tito had made it his goal to absorb Albania into Yugoslavia, first through the Treaty of Friendship, Co-operation and Mutual Aid of 1946. In time, Albania began to feel that the treaty was heavily slanted towards Yugoslav interests, much like the Italian agreements with Albania under Zog that had made the nation dependent upon Italy.
The first issue was that the Albanian lek was revalued against the Yugoslav dinar as a customs union was formed, and Albania's economic plan came to be decided increasingly by Yugoslavia. Albanian economists H. Banja and V. Toçi stated that the relationship between Albania and Yugoslavia during this period was exploitative and constituted an attempt by Yugoslavia to make the Albanian economy an "appendage" of the Yugoslav economy. Hoxha then began to accuse Yugoslavia of misconduct. Stalin advised Hoxha that Yugoslavia was attempting to annex Albania: "We did not know that the Yugoslavs, under the pretext of 'defending' your country against an attack from the Greek fascists, wanted to bring units of their army into the PRA [People's Republic of Albania]. They tried to do this in a very secretive manner. In reality, their aim in this direction was utterly hostile, for they intended to overturn the situation in Albania." By June 1947, the Central Committee of Yugoslavia had begun publicly condemning Hoxha, accusing him of taking an individualistic and anti-Marxist line. When Albania responded by making agreements with the Soviet Union to purchase a supply of agricultural machinery, Yugoslavia said that Albania could not enter into any agreements with other countries without Yugoslav approval. Koçi Xoxe tried to stop Hoxha from improving relations with Bulgaria, reasoning that Albania would be more stable with one trading partner rather than with many. Nako Spiru, an anti-Yugoslav member of the Party, condemned Xoxe and vice versa. With no one coming to his defense, Spiru viewed the situation as hopeless and feared that Yugoslav domination of his nation was imminent; he committed suicide in November. At the Eighth Plenum of the Central Committee of the Party, which lasted from 26 February to 8 March 1948, Xoxe manoeuvred to isolate Hoxha and consolidate his own power. He accused Hoxha of being responsible for the decline in relations with Yugoslavia and proposed that the Soviet military mission be expelled in favor of a Yugoslav one. Hoxha stood firm, and his support did not decline. When Yugoslavia publicly broke with the Soviet Union, Hoxha's support base grew stronger. Then, on 1 July 1948, Tirana called on all Yugoslav technical advisors to leave the country and unilaterally declared all treaties and agreements between the two countries null and void. Xoxe was expelled from the party, and on 13 June 1949 he was executed by hanging. Relations with the Soviet Union After the break with Yugoslavia, Hoxha aligned himself with the Soviet Union. From 1948 to 1960, $200 million in Soviet aid was given to Albania for technical and infrastructural expansion. Albania was admitted to the Comecon on 22 February 1949 and served as a pro-Soviet force on the Adriatic. A Soviet submarine base was built on the Albanian island of Sazan near Vlorë, posing a hypothetical threat to the U.S. Sixth Fleet in the Mediterranean. Relations with the Soviet Union remained close until the death of Stalin in March 1953, which was followed by 14 days of national mourning in Albania – more than in the Soviet Union itself. Hoxha assembled the population of Tirana in the capital's largest square, which featured a Stalin statue, and requested that they kneel and take a 2,000-word oath of "eternal fidelity" and "gratitude" to their "beloved father" and "great liberator."
Under Nikita Khrushchev, Stalin's eventual successor, aid was reduced and Albania was encouraged to adopt Khrushchev's specialisation policy. Under it, Albania would develop its agricultural output in order to supply the Soviet Union and other Warsaw Pact countries, while they in turn developed products of their own, which would, in theory, strengthen the Warsaw Pact. However, this also meant that Albanian industrial development, which Hoxha stressed heavily, would be hindered. In May–June 1955, Nikolai Bulganin and Anastas Mikoyan visited Yugoslavia, and Khrushchev repudiated the expulsion of Yugoslavia from the Communist bloc. Khrushchev also began making references to Palmiro Togliatti's theory of polycentrism. Hoxha had not been consulted on this and was offended. Yugoslavia began asking Hoxha to rehabilitate the image of Koçi Xoxe, which Hoxha steadfastly refused to do. In 1956, at the Twentieth Party Congress of the Communist Party of the Soviet Union, Khrushchev condemned the cult of personality that had been built up around Joseph Stalin and denounced his excesses. Khrushchev then announced the theory of peaceful coexistence, which greatly angered the Stalinist Hoxha. The Institute of Marxist–Leninist Studies, led by Hoxha's wife Nexhmije, quoted Vladimir Lenin: "The fundamental principle of the foreign policy of a socialist country and of a Communist party is proletarian internationalism; not peaceful coexistence." Hoxha now took a more active stand against perceived revisionism. Unity within the Albanian Party of Labour began to decline as well: a special delegate meeting held in Tirana in April 1956, composed of 450 delegates, produced unexpected results. The delegates "criticized the conditions in the party, the negative attitude toward the masses, the absence of party and socialist democracy, the economic policy of the leadership, etc." while also calling for discussions on the cult of personality and the Twentieth Party Congress. Movement towards China and Maoism In 1956, Hoxha called for a resolution confirming the existing leadership of the Party. The resolution was accepted, and all of the delegates who had spoken against it were expelled from the party and imprisoned. Hoxha claimed that Yugoslavia had attempted to overthrow the leadership of Albania. This incident increased Hoxha's power, effectively making Khrushchev-style reforms impossible in Albania. In the same year, Hoxha travelled to the People's Republic of China, then embroiled in the Sino-Soviet split, and met Mao Zedong. Chinese aid to Albania rose sharply in the next two years. In an effort to keep Albania in the Soviet sphere, Soviet aid was increased, but relations with the Soviet Union remained at the same level until 1960, when Khrushchev met Sofoklis Venizelos, a Greek liberal politician.
In April 1945, the Soviet Union gave notice that it would not renew its neutrality agreement with Japan. Japan's ally Germany surrendered in early May 1945. In June, the cabinet reassessed the war strategy, only to decide more firmly than ever on a fight to the last man. This strategy was officially affirmed at a brief Imperial Council meeting, at which, as was normal, the Emperor did not speak. The following day, Lord Keeper of the Privy Seal Kōichi Kido prepared a draft document which summarized the hopeless military situation and proposed a negotiated settlement. Extremists in Japan were also calling for a death-before-dishonor mass suicide, modeled on the "47 Ronin" incident. By mid-June 1945, the cabinet had agreed to approach the Soviet Union to act as a mediator for a negotiated surrender, but not before Japan's bargaining position had been improved by the repulse of the anticipated Allied invasion of mainland Japan. On 22 June, the Emperor met with his ministers, saying, "I desire that concrete plans to end the war, unhampered by existing policy, be speedily studied and that efforts be made to implement them." The attempt to negotiate a peace via the Soviet Union came to nothing. There was always the threat that extremists would carry out a coup or foment other violence. On 26 July 1945, the Allies issued the Potsdam Declaration demanding unconditional surrender. The Japanese government council, the Big Six, considered that option and recommended to the Emperor that it be accepted only if one to four conditions were agreed upon, including a guarantee of the Emperor's continued position in Japanese society. The Emperor decided not to surrender. That changed after the atomic bombings of Hiroshima and Nagasaki and the Soviet declaration of war. On 9 August, Emperor Hirohito told Kōichi Kido: "The Soviet Union has declared war and today began hostilities against us." On 10 August, the cabinet drafted an "Imperial Rescript ending the War" following the Emperor's indication that the declaration did not comprise any demand which prejudiced his prerogatives as a sovereign ruler. On 12 August 1945, the Emperor informed the imperial family of his decision to surrender. One of his uncles, Prince Yasuhiko Asaka, asked whether the war would be continued if the kokutai (national polity) could not be preserved. The Emperor simply replied "Of course." On 14 August the Suzuki government notified the Allies that it had accepted the Potsdam Declaration. On 15 August, a recording of the Emperor's surrender speech ("Gyokuon-hōsō", literally "Jewel Voice Broadcast") was broadcast over the radio – the first time the Emperor was heard on the radio by the Japanese people – announcing Japan's acceptance of the Potsdam Declaration. During the historic broadcast the Emperor stated: "Moreover, the enemy has begun to employ a new and most cruel bomb, the power of which to do damage is, indeed, incalculable, taking the toll of many innocent lives. Should we continue to fight, not only would it result in an ultimate collapse and obliteration of the Japanese nation, but also it would lead to the total extinction of human civilization." The speech also noted that "the war situation has developed not necessarily to Japan's advantage" and ordered the Japanese to "endure the unendurable." The speech, delivered in formal, archaic Japanese, was not readily understood by many commoners. According to historian Richard Storry in A History of Modern Japan, the Emperor typically used "a form of language familiar only to the well-educated" and to the more traditional samurai families.
A faction of the army opposed to the surrender attempted a coup d'état on the evening of 14 August, prior to the broadcast. They seized the Imperial Palace (the Kyūjō incident), but the physical recording of the Emperor's speech was hidden and preserved overnight. The coup failed, and the speech was broadcast the next morning. In his first ever press conference, given in Tokyo in 1975, when he was asked what he thought of the bombing of Hiroshima, the Emperor answered: "It's very regrettable that nuclear bombs were dropped and I feel sorry for the citizens of Hiroshima but it couldn't be helped because that happened in wartime" (shikata ga nai, meaning "it cannot be helped"). Accountability for Japanese war crimes The issue of Emperor Hirohito's war responsibility remains controversial, and there is no consensus among scholars. During the war, the Allies frequently depicted Hirohito alongside Hitler and Mussolini as one of the three Axis dictators. The apologist thesis, which argues that Hirohito was a "powerless figurehead" with no involvement in wartime policies, was the dominant postwar narrative until 1989. Since Hirohito's death, critical historians have argued that he wielded more power than previously believed and was actively involved in the decision to launch the war, as well as in other political and military decisions beforehand. Moderates argue that Hirohito had some involvement, but that his power was limited by cabinet members, ministers and other members of the military oligarchy. The critical thesis Historians who follow this thesis believe Emperor Hirohito was directly responsible for the atrocities committed by the imperial forces in the Second Sino-Japanese War and in World War II. They feel that he, and some members of the imperial family such as his brother Prince Chichibu, his cousins the princes Takeda and Fushimi, and his uncles the princes Kan'in, Asaka, and Higashikuni, should have been tried for war crimes. The debate over Hirohito's responsibility for war crimes concerns how much real control the Emperor had over the Japanese military during the two wars. Officially, the imperial constitution, adopted under Emperor Meiji, gave full power to the Emperor. Article 4 prescribed that "The Emperor is the head of the Empire, combining in Himself the rights of sovereignty, and exercises them, according to the provisions of the present Constitution," while according to Article 6, "The Emperor gives sanction to laws and orders them to be promulgated and executed," and Article 11, "The Emperor has the supreme command of the Army and the Navy." The Emperor was thus the leader of the Imperial General Headquarters. Poison gas weapons, such as phosgene, were produced by Unit 731 and authorized by specific orders given by Hirohito himself, transmitted by the chief of staff of the army. For example, Hirohito authorised the use of toxic gas 375 times during the Battle of Wuhan from August to October 1938. Historians such as Herbert Bix, Akira Fujiwara, Peter Wetzler, and Akira Yamada assert that the post-war view focusing on imperial conferences misses the importance of numerous "behind the chrysanthemum curtain" meetings where the real decisions were made between the Emperor, his chiefs of staff, and the cabinet.
Historians such as Fujiwara and Wetzler, based on the primary sources and the monumental work of Shirō Hara, have produced evidence suggesting that the Emperor worked through intermediaries to exercise a great deal of control over the military and was neither bellicose nor a pacifist but an opportunist who governed in a pluralistic decision-making process. American historian Herbert P. Bix argues that Emperor Hirohito might have been the prime mover of most of the events of the two wars. The view promoted by both the Japanese Imperial Palace and the American occupation forces immediately after World War II portrayed Emperor Hirohito as a powerless figurehead behaving strictly according to protocol while remaining at a distance from the decision-making processes. This view was endorsed by Prime Minister Noboru Takeshita in a speech on the day of Hirohito's death, in which Takeshita asserted that the war "had broken out against [Hirohito's] wishes." Takeshita's statement provoked outrage in East Asian nations and in Commonwealth nations such as the United Kingdom, Canada, Australia, and New Zealand. According to historian Fujiwara, "The thesis that the Emperor, as an organ of responsibility, could not reverse cabinet decision is a myth fabricated after the war." Historian Yinan He agrees with Fujiwara, stating that the exoneration of the Emperor rested on a myth used to whitewash the complicity of many wartime political actors, including Hirohito. In Japan, debate over the Emperor's responsibility was taboo while he was still alive. After his death, however, debate began to surface over the extent of his involvement and thus his culpability. In the years immediately after Hirohito's death, the debate in Japan was fierce. Susan Chira reported, "Scholars who have spoken out against the late Emperor have received threatening phone calls from Japan's extremist right wing." One example of actual violence occurred in 1990, when the mayor of Nagasaki, Hitoshi Motoshima, was shot and critically wounded by a member of the ultranationalist group Seikijuku. A year before, in 1989, Motoshima had broken what was characterized as "one of [Japan's] most sensitive taboos" by asserting that Emperor Hirohito bore responsibility for World War II. Kentarō Awaya argues that post-war Japanese public opinion supporting protection of the Emperor was influenced by U.S. propaganda promoting the view that the Emperor, together with the Japanese people, had been fooled by the military. Regarding Hirohito's exemption from trial before the International Military Tribunal for the Far East, opinions were not unanimous. Sir William Webb, the president of the tribunal, declared: "This immunity of the Emperor, as contrasted with the part he played in launching the war in the Pacific, is, I think, a matter which the tribunal should take into consideration in imposing the sentences." Likewise, the French judge, Henri Bernard, wrote of Hirohito's accountability that Japan's declaration of war "had a principal author who escaped all prosecution and of whom in any case the present defendants could only be considered accomplices." An account by Michio Yuzawa, Vice Interior Minister in 1941, asserts that Hirohito was "at ease" with the attack on Pearl Harbor "once he had made a decision."
Vice Interior Minister Yuzawa's account of Hirohito's role in the Pearl Harbor raid In late July 2018, the bookseller Takeo Hatano, an acquaintance of the descendants of Michio Yuzawa (Japanese Vice Interior Minister in 1941), released to Japan's Yomiuri Shimbun newspaper a memo by Yuzawa that Hatano had kept for nine years after receiving it from Yuzawa's family. The bookseller said: "It took me nine years to come forward, as I was afraid of a backlash. But now I hope the memo will help us figure out what really happened during the war, in which 3.1 million people were killed." Takahisa Furukawa, an expert on wartime history at Nihon University, confirmed the authenticity of the memo, calling it "the first look at the thinking of Emperor Hirohito and Prime Minister Hideki Tojo on the eve of the Japanese attack on Pearl Harbor." In this document, Yuzawa details a conversation he had with Tojo a few hours before the attack, quoting Tojo's words at length, and Furukawa drew his conclusions from that account. Shinobu Kobayashi's diary In August 2018, the diary of Shinobu Kobayashi, the Emperor's chamberlain between 1974 and 2000, was released. The diary contains numerous quotes from Hirohito (see below), which Jennifer Lind, a specialist in Japanese war memory, and the historian Takahisa Furukawa have both analysed. Hirohito's preparations for war described in Saburō Hyakutake's diary In September 2021, 25 diaries, pocket notebooks and memos by Saburō Hyakutake (Emperor Hirohito's Grand Chamberlain from 1936 to 1944), deposited by his relatives with the library of the University of Tokyo's graduate schools for law and politics, became available to the public. Hyakutake's diary quotes some of Hirohito's ministers and advisers as worrying that the Emperor was getting ahead of them in terms of battle preparations; among those quoted are Tsuneo Matsudaira, the Imperial Household Minister, and Koichi Kido, Lord Keeper of the Privy Seal. Seiichi Chadani, professor of modern Japanese history at Shigakukan University, who has studied Hirohito's actions before and during the war, commented on the discovery of Hyakutake's diary. The moderate thesis After the death of Emperor Shōwa, on 14 February 1989 (Heisei 1), the Director-General of the Cabinet Legislation Bureau, Osamu Mimura (味村治), told the Cabinet Committee of the House of Councillors, under Prime Minister Noboru Takeshita, that the Emperor bore no war responsibility under domestic or international law, on the grounds that Article 3 of the Constitution of the Empire of Japan made the Emperor unanswerable and that he was never prosecuted by the International Military Tribunal for the Far East. It is also argued that the Emperor did not defy the military oligarchy that got Japan into World War II until the first atomic bomb fell on Hiroshima. This is supported by Hirohito's personal statements during interviews. It is also pointed out that the Emperors had for millennia been a great symbolic authority but had held little political power; thus Hirohito had little reason to defy the military oligarchy. The Emperor could not defy the cabinet's decision to start World War II, and he was not trained or accustomed to do so. Hirohito said he only received reports about military operations after the military commanders had made detailed decisions. Hirohito stated that he made his own decisions only twice: during the February 26 Incident and at the end of World War II.
The declassified January 1989 British government assessment of Hirohito describes him as "too weak to alter the course of events": Hirohito was "powerless", and comparisons with Hitler are "ridiculously wide off the mark". Hirohito's power was limited by ministers and the military, and if he had asserted his views too strongly he would have been replaced by another member of the royal family. Some scholars hold that it was right that Hirohito was not tried by the International Military Tribunal for the Far East. For example, the Indian jurist Radhabinod Pal opposed the tribunal and wrote a 1,235-page dissenting judgment. He found the entire prosecution case to be weak regarding the conspiracy to commit an act of aggressive war, with brutalization and subjugation of conquered nations. Pal said there is "no evidence, testimonial or circumstantial, concomitant, prospectant, retrospectant, that would in any way lead to the inference that the government in any way permitted the commission of such offenses". He added that conspiracy to wage aggressive war was not illegal in 1937, or at any point since. Pal supported the acquittal of all of the defendants. He considered the Japanese military operations justified, because Chiang Kai-shek supported the boycott of trade operations by the Western Powers, particularly the United States boycott of oil exports to Japan. Pal argued that the attacks on neighboring territories were justified to protect the Japanese Empire from an aggressive environment, especially the Soviet Union, and considered them self-defense operations which were not criminal. Pal said "the real culprits are not before us" and concluded that "only a lost war is an international crime". The Emperor's own statements 8 September 1975 TV interview with NBC, USA Reporter: "How far was your Majesty involved in Japan's decision to end the war in 1945? What was your motivation?" Emperor: "Originally, such decisions were for the Cabinet to make. I heard the results, but at the last meeting I was asked for a decision, and I decided to end the war myself. [...] I thought that the continuation of the war would only bring more misery to the people." Interview with Newsweek, USA, 20 September 1975 Reporter: "[...] How do you answer those who claim that your Majesty was also involved in the decision-making process that led Japan to start the war?" Emperor: "[...] At the start of the war, a cabinet decision was made, and I could not reverse that decision. I believe this was consistent with the provisions of the Imperial Constitution." 22 September 1975 – Press conference with foreign correspondents Reporter: "How long before the attack on Pearl Harbor did your Majesty know about the attack plan? And did you approve the plan?" Emperor: "It is true that I had received information on military operations in advance. However, I only received those reports after the military commanders had made detailed decisions. Regarding issues of political character and military command, I believe that I acted in accordance with the provisions of the Constitution." On 31 October 1975, a press conference was held immediately after his return to Japan from his visit to the United States. Question: "Your Majesty, at your White House banquet you said, 'I deeply deplore that unfortunate war.' Does your Majesty feel responsibility for the war itself, including the opening of hostilities? Also, what does your Majesty think about so-called war responsibility?"
(The Times reporter) Emperor: "I can't answer that kind of question because I haven't thoroughly studied the literature in this field, and so don't really appreciate the nuances of your words." Question: "How did you understand that the atomic bomb was dropped on Hiroshima at the end of the war?" (RCC Broadcasting reporter) Emperor: "I am sorry that the atomic bomb was dropped, and I feel sorry for the citizens of Hiroshima, but I think it was unavoidable, because it happened in wartime." 17 April 1981 – Press conference with the presidents of the press Reporter: "What is the most enjoyable of your memories of these eighty years?" Emperor: "Ever since I saw the constitutional politics of Britain as Crown Prince, I have felt strongly that I must adhere to constitutional politics. Because I adhered to it so strictly, I could not prevent the war. I made my own decisions twice: during the February 26 Incident and at the end of World War II." British government assessment of Hirohito A January 1989 declassified British government assessment of Hirohito said the Emperor was "uneasy with Japan's drift to war in the 1930s and 1940s but was too weak to alter the course of events." The dispatch, sent in 1989 by John Whitehead, then Britain's ambassador to Japan, to Foreign Secretary Geoffrey Howe, was declassified on Thursday 20 July 2017 at the National Archives in London. Whitehead concluded that ultimately Hirohito was "powerless" and that comparisons with Hitler are "ridiculously wide off the mark." If Hirohito had acted too insistently on his views, he could have been isolated or replaced with a more pliant member of the royal family. The pre-war Meiji Constitution defined the Emperor as "sacred" and all-powerful, but according to Whitehead, Hirohito's power was limited by ministers and the military. Whitehead also explained that after World War II, Hirohito's humility was fundamental to the Japanese people's acceptance of the new 1947 constitution and the Allied occupation. Hirohito's quotes in chamberlain Kobayashi's diary Shinobu Kobayashi was the Emperor's chamberlain from April 1974 until June 2000, when Empress Kōjun died. Kobayashi kept a diary with near-daily remarks of Hirohito for 26 years. It was made public on Wednesday 22 August 2018. The rare diary was borrowed from Kobayashi's family by Kyodo News and analyzed with Kazutoshi Hando, a writer and expert on the history of the Shōwa era, and the nonfiction writer Masayasu Hosaka. Among the diary's contents: on 27 May 1980, the Emperor wanted to express his regret about the Sino-Japanese war to former Chinese Premier Hua Guofeng, who was visiting at the time, but was stopped by senior members of the Imperial Household Agency for fear of a backlash from far-right groups. An entry of 7 April 1987, two years before his death, shows the Emperor haunted by discussions of his World War II responsibility and losing the will to live; Prince Takamatsu had died in February 1987, and Kobayashi tried to soothe the Emperor. A same-day entry in the diary of senior chamberlain Ryogo Urabe supports this account, recording that the Emperor said "there is nothing good in living long" and that Kobayashi tried to soothe him. Michiji Tajima's notes in 1952 According to notebooks kept by Michiji Tajima, a top Imperial Household Agency official who took office after the war, Emperor Hirohito privately expressed regret about the atrocities committed by Japanese troops during the Nanjing Massacre.
In addition to feeling remorseful about his own role in the war, he admitted that he "fell short by allowing radical elements of the military to drive the conduct of the war." Postwar reign Although the Emperor chose his uncle Prince Higashikuni as prime minister to assist the American occupation, there were attempts by numerous leaders to have him put on trial for alleged war crimes. Many members of the imperial family, such as Princes Chichibu, Takamatsu, and Higashikuni, pressured the Emperor to abdicate so that one of the Princes could serve as regent until Crown Prince Akihito came of age. On 27 February 1946, the Emperor's youngest brother, Prince Mikasa, even stood up in the Privy Council and indirectly urged the Emperor to step down and accept responsibility for Japan's defeat. According to the diary of Minister of Welfare Ashida, "Everyone seemed to ponder Mikasa's words. Never have I seen His Majesty's face so pale." U.S. General Douglas MacArthur insisted that Emperor Hirohito retain the throne; MacArthur saw the Emperor as a symbol of the continuity and cohesion of the Japanese people. Some historians criticize the decision to exempt from criminal prosecution the Emperor and all the members of the imperial family implicated in the war, such as Prince Chichibu, Prince Asaka, Prince Higashikuni, and Prince Hiroyasu Fushimi. Before the war crimes trials actually convened, the Supreme Commander of the Allied Powers, its International Prosecution Section (IPS) and Japanese officials worked behind the scenes not only to prevent the imperial family from being indicted, but also to influence the testimony of the defendants to ensure that no one implicated the Emperor. High officials in court circles and the Japanese government collaborated with Allied General Headquarters in compiling lists of prospective war criminals, while the individuals arrested as Class A suspects and incarcerated solemnly vowed to protect their sovereign against any possible taint of war responsibility. Thus, "months before the Tokyo tribunal commenced, MacArthur's highest subordinates were working to attribute ultimate responsibility for Pearl Harbor to Hideki Tōjō" by allowing "the major criminal suspects to coordinate their stories so that the Emperor would be spared from indictment." According to John W. Dower, "This successful campaign to absolve the Emperor of war responsibility knew no bounds. Hirohito was not merely presented as being innocent of any formal acts that might make him culpable to indictment as a war criminal, he was turned into an almost saintly figure who did not even bear moral responsibility for the war." According to Bix, "MacArthur's truly extraordinary measures to save Hirohito from trial as a war criminal had a lasting and profoundly distorting impact on Japanese understanding of the lost war." Imperial status Hirohito was not put on trial, but he was forced to explicitly reject the quasi-official claim that the Emperor of Japan was an arahitogami, i.e., an incarnate divinity. This was motivated by the fact that, under the Japanese constitution of 1889, the Emperor held a divine power over his country, derived from the Shinto belief that the Japanese imperial family were the descendants of the sun goddess Amaterasu. Hirohito was, however, persistent in the idea that the Emperor of Japan should be considered a descendant of the gods.
In December 1945, he told his vice-grand-chamberlain Michio Kinoshita: "It is permissible to say that the idea that the Japanese are descendants of the gods is a false conception; but it is absolutely impermissible to call chimerical the idea that the Emperor is a descendant of the gods." In any case, the "renunciation of divinity" was noted more by foreigners than by Japanese, and seems to have been intended for the consumption of the former. The theory of a constitutional monarchy had already had some proponents in Japan. In 1935, when Tatsukichi Minobe advocated the theory that sovereignty resides in the state, of which the Emperor is just an organ (the tennō kikan setsu), it caused a furor. He was forced to resign from the House of Peers and his post at Tokyo Imperial University, his books were banned, and an attempt was made on his life. Not until 1946 was the tremendous step taken of altering the Emperor's title from "imperial sovereign" to "constitutional monarch." Although the Emperor had supposedly repudiated claims to divinity, his public position was deliberately left vague, partly because General MacArthur thought him likely to be a useful partner in getting the Japanese to accept the occupation, and partly due to behind-the-scenes maneuvering by Shigeru Yoshida to thwart attempts to cast him as a European-style monarch. Nevertheless, Hirohito's status as a limited constitutional monarch was formalized with the enactment of the 1947 Constitution, officially an amendment to the Meiji Constitution. It defined the Emperor as "the symbol of the state and the unity of the people," and stripped him of even nominal power in government matters. His role was limited to matters of state as delineated in the Constitution, and in most cases his actions in that realm were carried out in accordance with the binding instructions of the Cabinet. Following the Iranian Revolution and the end of the short-lived Central African Empire, both in 1979, Hirohito found himself the last monarch in the world to bear any variation of the highest royal title "emperor." Public figure For the rest of his life, Hirohito was an active figure in Japanese life and performed many of the duties commonly associated with a constitutional head of state. He and his family maintained a strong public presence, often holding public walkabouts and making public appearances at special events and ceremonies. For example, in 1947 the Emperor made a public visit to Hiroshima and delivered a speech in front of a massive crowd encouraging the city's citizens. He also played an important role in rebuilding Japan's diplomatic image, traveling abroad to meet many foreign leaders, including Queen Elizabeth II (1971) and President Gerald Ford (1975). He visited Edinburgh from the 19th to the 20th, and was also awarded an Honorary Doctor of Laws at the University of Edinburgh. He stayed at the residence of John Stewart-Murray, 8th Duke of Atholl, for three days. During his stay with Stewart-Murray, the prince was quoted as saying, "The rise of the Bolsheviks won't happen if you live a simple life like the Duke of Atholl." In Italy, he met with King Vittorio Emanuele III and others, attended official banquets in various countries, and visited sites of fierce World War I battles. Regency After returning to Japan, Hirohito became Regent of Japan (Sesshō) on 25 November 1921, in place of his ailing father, who was affected by mental illness.
In 1923 he was promoted to the rank of lieutenant colonel in the army and commander in the navy, and in 1925 to army colonel and navy captain. During Hirohito's regency, many important events occurred: In the Four-Power Treaty on Insular Possessions, signed on 13 December 1921, Japan, the United States, Britain, and France agreed to recognize the status quo in the Pacific, and Japan and Britain agreed to end the Anglo-Japanese Alliance. The Washington Naval Treaty limiting warship numbers was signed on 6 February 1922. Japan withdrew its troops from the Siberian Intervention on 28 August 1922. The Great Kantō earthquake devastated Tokyo on 1 September 1923. On 27 December 1923, Daisuke Namba attempted to assassinate Hirohito in the Toranomon Incident, but his attempt failed. During interrogation, he claimed to be a communist and was executed, but some have suggested that he was in contact with the Nagacho faction in the Army. Marriage Prince Hirohito married his distant cousin Princess Nagako Kuni, the eldest daughter of Prince Kuniyoshi Kuni, on 26 January 1924. They had two sons and five daughters. The daughters who lived to adulthood left the imperial family as a result of the American reforms of the Japanese imperial household in October 1947 (in the case of Princess Shigeko) or under the terms of the Imperial Household Law at the moment of their subsequent marriages (in the cases of Princesses Kazuko, Atsuko, and Takako). Ascension On 25 December 1926, Hirohito assumed the throne upon the death of his father, Yoshihito. The Crown Prince was said to have received the succession (senso). The end of the Taishō era and the beginning of the Shōwa era ("Enlightened Peace") were proclaimed. The deceased Emperor was posthumously renamed Emperor Taishō within days. Following Japanese custom, the new Emperor was never referred to by his given name but rather was referred to simply as "His Majesty the Emperor", which may be shortened to "His Majesty." In writing, the Emperor was also referred to formally as "The Reigning Emperor." In November 1928, the Emperor's ascension was confirmed in ceremonies (sokui) conventionally identified as "enthronement" and "coronation" (Shōwa no tairei-shiki); but this formal event would have been more accurately described as a public confirmation that His Imperial Majesty possessed the Japanese Imperial Regalia, also called the Three Sacred Treasures, which have been handed down through the centuries. Early reign The first part of Hirohito's reign took place against a background of financial crisis and increasing military power within the government, exercised through both legal and extralegal means. The Imperial Japanese Army and Imperial Japanese Navy had held veto power over the formation of cabinets since 1900, and between 1921 and 1944 there were 64 separate incidents of political violence. Hirohito narrowly escaped assassination by a hand grenade thrown by a Korean independence activist, Lee Bong-chang, in Tokyo on 9 January 1932, in the Sakuradamon Incident. Another notable case was the assassination of the moderate Prime Minister Inukai Tsuyoshi in 1932, which marked the end of civilian control of the military. The February 26 Incident, an attempted military coup, followed in February 1936. It was carried out by junior Army officers of the Kōdōha faction who had the sympathy of many high-ranking officers, including Prince Chichibu (Yasuhito), one of the Emperor's brothers. This revolt was occasioned by a loss of political support by the militarist faction in Diet elections.
The coup resulted in the murders of several high government and Army officials. When Chief Aide-de-camp Shigeru Honjō informed him of the revolt, the Emperor immediately ordered that it be put down and referred to the officers as "rebels" (bōto). Shortly thereafter, he ordered Army Minister Yoshiyuki Kawashima to suppress the rebellion within the hour, and he asked for reports from Honjō every 30 minutes. The next day, when told by Honjō that the high command had made little progress in quashing the rebels, the Emperor told him "I myself will lead the Konoe Division and subdue them." The rebellion was suppressed, following his orders, on 29 February. Second Sino-Japanese War Starting from the Mukden Incident of 1931, in which Japan staged a sham "Chinese attack" as a pretext to invade Manchuria, Japan occupied Chinese territories and established puppet governments. Such "aggression was recommended to Hirohito" by his chiefs of staff and Prime Minister Fumimaro Konoe, and Hirohito never personally objected to any invasion of China. His main concern seems to have been the possibility of an attack by the Soviet Union in the north. His questions to his chief of staff, Prince Kan'in Kotohito, and minister of the army, Hajime Sugiyama, were mostly about the time it could take to crush Chinese resistance. According to Akira Fujiwara, Hirohito endorsed the policy of qualifying the invasion of China as an "incident" instead of a "war"; therefore, he did not issue any notice to observe international law in this conflict (unlike what his predecessors did in previous conflicts officially recognized by Japan as wars), and the Deputy Minister of the Japanese Army instructed the chief of staff of the Japanese China Garrison Army on 5 August not to use the term "prisoners of war" for Chinese captives. This instruction removed the constraints of international law on the treatment of Chinese prisoners. The works of Yoshiaki Yoshimi and Seiya Matsuno show that the Emperor also authorized, by specific orders (rinsanmei), the use of chemical weapons against the Chinese. During the invasion of Wuhan, from August to October 1938, the Emperor authorized the use of toxic gas on 375 separate occasions, despite the resolution adopted by the League of Nations on 14 May condemning Japanese use of toxic gas. World War II Preparations In July 1939, the Emperor quarrelled with his brother, Prince Chichibu, over whether to support the Anti-Comintern Pact, and reprimanded the army minister, Seishirō Itagaki. But after the success of the Wehrmacht in Europe, the Emperor consented to the alliance. On 27 September 1940, ostensibly under Hirohito's leadership, Japan became a contracting partner of the Tripartite Pact with Germany and Italy, forming the Axis Powers. On 4 September 1941, the Japanese Cabinet met to consider war plans prepared by Imperial General Headquarters. The objectives to be obtained were clearly defined: a free hand to continue with the conquest of China and Southeast Asia, no increase in US or British military forces in the region, and cooperation by the West "in the acquisition of goods needed by our Empire." On 5 September, Prime Minister Konoe informally submitted a draft of the decision to the Emperor, just one day in advance of the Imperial Conference at which it would be formally implemented. That evening, the Emperor had a meeting with the chief of staff of the army, Sugiyama, the chief of staff of the navy, Osami Nagano, and Prime Minister Konoe.
The Emperor questioned Sugiyama about the chances of success of an open war with the Occident. When Sugiyama answered positively, the Emperor scolded him. Chief of Naval General Staff Admiral Nagano, a former Navy Minister and vastly experienced, later told a trusted colleague, "I have never seen the Emperor reprimand us in such a manner, his face turning red and raising his voice." Nevertheless, all speakers at the Imperial Conference were united in favor of war rather than diplomacy. Baron Yoshimichi Hara, President of the Imperial Council and the Emperor's representative, then questioned them closely, producing replies to the effect that war would be considered only as a last resort from some, and silence from others. At this point, the Emperor astonished all present by addressing the conference personally. In breaking the tradition of Imperial silence, he left his advisors "struck with awe" (Prime Minister Fumimaro Konoe's description of the event). Hirohito stressed the need for peaceful resolution of international problems, expressed regret at his ministers' failure to respond to Baron Hara's probings, and recited a poem written by his grandfather, Emperor Meiji, which, he said, he had read "over and over again". Recovering from their shock, the ministers hastened to express their profound wish to explore all possible peaceful avenues. The Emperor's presentation was in line with his practical role as leader of the State Shinto religion. At this time, Army Imperial Headquarters was continually communicating with the Imperial household in detail about the military situation. On 8 October, Sugiyama signed a 47-page report to the Emperor (sōjōan) outlining in minute detail plans for the advance into Southeast Asia. During the third week of October, Sugiyama gave the Emperor a 51-page document, "Materials in Reply to the Throne," about the operational outlook for the war. As war preparations continued, Prime Minister Fumimaro Konoe found himself increasingly isolated, and he resigned on 16 October, justifying his decision to his chief cabinet secretary, Kenji Tomita. The army and the navy recommended the appointment of Prince Naruhiko Higashikuni, one of the Emperor's uncles, as prime minister. According to the Shōwa "Monologue", written after the war, the Emperor then said that if the war were to begin while a member of the imperial house was prime minister, the imperial house would have to carry the responsibility, and he was opposed to this. Instead, the Emperor chose the hard-line General Hideki Tōjō, who was known for his devotion to the imperial institution, and asked him to make a policy review of what had been sanctioned by the Imperial Conferences. On 2 November Tōjō, Sugiyama, and Nagano reported to the Emperor that the review of eleven points had been in vain. Emperor Hirohito gave his consent to the war and then asked: "Are you going to provide justification for the war?" The decision for war against the United States was presented for approval to Hirohito by General Tōjō, Naval Minister Admiral Shigetarō Shimada, and Japanese Foreign Minister Shigenori Tōgō. On 3 November, Nagano explained in detail the plan of the attack on Pearl Harbor to the Emperor. On 5 November Emperor Hirohito approved in imperial conference the operations plan for a war against the Occident and had many meetings with the military and Tōjō until the end of the month. On 25 November Henry L.
Stimson, United States Secretary of War, noted in his diary that he had discussed with US President Franklin D. Roosevelt the severe likelihood that Japan was about to launch a surprise attack, and that the question had been "how we should maneuver them [the Japanese] into the position of firing the first shot without allowing too much danger to ourselves." On the following day, 26 November 1941, US Secretary of State Cordell Hull presented the Japanese ambassador with the Hull note, which as one of its conditions demanded the complete withdrawal of all Japanese troops from French Indochina and China. Japanese Prime Minister Hideki Tojo said to his cabinet, "This is an ultimatum." On 1 December an Imperial Conference sanctioned the "War against the United States, United Kingdom and the Kingdom of the Netherlands." War: advance and retreat On 8 December 1941 (7 December in Hawaii), in simultaneous attacks, Japanese forces struck at the Hong Kong Garrison, the US Fleet in Pearl Harbor and in the Philippines, and began the invasion of Malaya. With the nation fully committed to the war, the Emperor took a keen interest in military progress and sought to boost morale. According to Akira Yamada and Akira Fujiwara, the Emperor made major interventions in some military operations. For example, he pressed Sugiyama four times, on 13 and 21 January and 9 and 26 February, to increase troop strength and launch an attack on Bataan. On 9 February, 19 March, and 29 May, the Emperor ordered the Army Chief of Staff to examine the possibilities for an attack on Chungking in China, which led to Operation Gogo. Some accounts hold that as the tide of war began to turn against Japan (around late 1942 and early 1943), the flow of information to the palace gradually bore less and less relation to reality, while others suggest that the Emperor worked closely with Prime Minister Hideki Tojo, continued to be well and accurately briefed by the military, and knew Japan's military position precisely right up to the point of surrender. The chief of staff of the General Affairs section of the Prime Minister's office, Shuichi Inada, remarked on this to Tōjō's private secretary, Sadao Akamatsu. In the first six months of war, all the major engagements had been victories. Japanese advances were stopped in the summer of 1942 with the Battle of Midway and the landing of American forces on Guadalcanal and Tulagi in August. The Emperor played an increasingly influential role in the war; in eleven major episodes he was deeply involved in supervising the actual conduct of war operations. Hirohito pressured the High Command to order an early attack on the Philippines in 1941–42, including the fortified Bataan Peninsula. He secured the deployment of army air power in the Guadalcanal campaign. Following Japan's withdrawal from Guadalcanal, he demanded a new offensive in New Guinea, which was duly carried out but failed badly. Unhappy with the navy's conduct of the war, he criticized its withdrawal from the central Solomon Islands and demanded naval battles against the Americans for the losses they had inflicted in the Aleutians. The battles were disasters. Finally, it was at his insistence that plans were drafted for the recapture of Saipan and, later, for an offensive in the Battle of Okinawa. With the Army and Navy bitterly feuding, he settled disputes over the allocation of resources. He helped plan military offensives.
The media, under tight government control, repeatedly portrayed him as lifting popular morale even as the Japanese cities came under heavy air attack in 1944–45 and food and housing shortages mounted. Japanese retreats and defeats were celebrated by the media as successes that portended "Certain Victory." Only gradually did it become apparent to the Japanese people that the situation was very grim, as shortages of food, medicine, and fuel grew and U.S. submarines began wiping out Japanese shipping. Starting in mid-1944, American raids on the major cities of Japan made a mockery of the unending tales of victory. Later that year, with the downfall of Tojo's government, two other prime ministers were appointed to continue the war effort, Kuniaki Koiso and Kantarō Suzuki – each with the formal approval of the Emperor. Both were unsuccessful, and Japan was nearing disaster. Surrender In early 1945, in the wake of the losses in the Battle of Leyte, Emperor Hirohito began a series of individual meetings with senior government officials to consider the progress of the war. All but ex-Prime Minister Fumimaro Konoe advised continuing the war. Konoe feared a communist revolution even more than defeat in war and urged a negotiated surrender. In February 1945, during the first private audience with the Emperor he had been allowed in three years, Konoe advised Hirohito to begin negotiations to end the war. According to Grand Chamberlain Hisanori Fujita, the Emperor, still looking for a tennozan (a great victory) in order to provide a stronger bargaining position, firmly rejected Konoe's recommendation. With each passing week victory became less likely. In April, the Soviet Union issued notice that it would not renew its neutrality agreement. Japan's ally Germany surrendered in early May 1945.
as many as 30 pubs and beer houses; today, only nine remain. At the beginning of the 19th century, Emsworth had a population of less than 1,200, but it was still considered a large village for the time. By the end of the 18th century, it had become fashionable for wealthy people to spend the summer by the sea, and in 1805 a bathing house was built where people could bathe in seawater. The parish Church of St James was built in 1840 to a design by John Elliott. It was expanded in the late 1850s, this time to a design by John Colson, whose designs were again used in an expansion of 1865. A final round of building took place in the early 1890s, this time to a design by Arthur Blomfield. The reredos added in the 1920s features a painting by Percy George Bentham. Queen Victoria visited Emsworth in 1842, resulting in Queen Street and Victoria Road being named after her. In 1847 the London, Brighton and South Coast Railway (now the West Coastway line) came to Emsworth, with a railway station built to serve the town. Hollybank House to the north of the town was built in 1825 and is now a hotel. Emsworth became part of Warblington Urban District, which held its first meeting in 1895. The Urban District was abolished in 1932, and Emsworth subsequently became part of Havant Urban District. Modern Emsworth By 1901 the population of Emsworth was about 2,000. It grew rapidly during the 20th century, reaching about 5,000 by the middle of the century. In 1906 construction began on the post office, with local cricketer George Wilder laying an inscribed brick. The renamed Emsworth Recreation Ground dates from 1909 and is the current home of Emsworth Cricket Club, which was founded in 1811; cricket in Emsworth has been played at the same ground, Cold Harbour Lawn, since 1761. In 1902 the once famous Emsworth oyster industry went into rapid decline, after many of the guests at mayoral banquets in Southampton and Winchester became seriously ill, and four died, after consuming oysters. The infection was traced to oysters sourced from Emsworth, as the oyster beds had been contaminated with raw sewage. Oyster fishing at Emsworth was subsequently halted until new sewers were dug, though the industry never completely recovered. Recently, Emsworth's last remaining oyster boat, The Terror, was restored and is now sailing again. But the oyster industry is again under threat, because the reproductive rate of the oysters has plunged: they now contain microscopic glass spicules shed into the water from the hulls of the numerous fibreglass boats in Chichester Harbour. During the Second World War, nearby Thorney Island was used as a Royal Air Force station, playing a role in defence during the Battle of Britain. The north of Emsworth at this time was used for growing flowers, and further north was woodland (today Hollybank Woods). In the run-up to D-Day, the Canadian Army used these woods as one of its pre-invasion assembly points for men and matériel; today the foundations of their barracks can still be seen. In the 1960s large parts of this area were developed with a mix of bungalow and terraced housing. For a few years (2001 to 2007), Emsworth held a food festival. It was the largest event of its type in the UK, with more than 50,000 visitors in 2007. The festival was cancelled due to numerous complaints of disruption to residents and businesses in the vicinity.
A Baptist church was constructed in North Street in 2015. The harbour is now used for recreational sailing, paddle boarding, kayaking and swimming. The town has two sailing clubs, Emsworth Sailing Club (established in 1919) and Emsworth Slipper Sailing Club (established in 1921), the latter based at Quay Mill, a former tide mill. Both clubs organise a programme of racing and social events during the sailing season. Emsworth Sailing Club In April 2014, Emsworth Sailing Club received national media coverage when retired Royal Navy Captain Clifford "John" Caughey drove his car into the clubhouse, causing a loud explosion and requiring thirty firefighters to extinguish the blaze. Culture and community Emsworth Library was considered for closure in 2020 but, following public consultation, was reprieved. Emsworth Museum is administered by the Emsworth Maritime & Historical Trust. The town is twinned with Saint-Aubin-sur-Mer in Normandy, France. Politics The town is part of the Havant constituency, which since the 1983 election has been a Conservative seat. The current Member of Parliament (MP) is Alan Mak. The town is represented on Havant Borough Council by Councillors Richard Kennett, Julie Thain-Smith and Lulu Bowerman. The local Hampshire County Councillor is Lulu Bowerman. The town has branches of the Conservative Party, the Liberal Democrats, the Labour Party and the United Kingdom Independence Party. Transport Emsworth railway station is on the West Coastway Line.
It has services that run to Portsmouth, Southampton, Brighton and London Victoria. Stagecoach South operates the number 700 bus, which runs between Brighton and Southsea. Havant Borough Council claims local bus
Milk is an emulsion of fat and water, along with other components, including colloidal casein micelles (a type of secreted biomolecular condensate). Appearance and properties Emulsions contain both a dispersed and a continuous phase, with the boundary between the phases called the "interface". One example would be a mixture of water and oil. Emulsions tend to have a cloudy appearance because the many phase interfaces scatter light as it passes through the emulsion. Emulsions appear white when all light is scattered equally. If the emulsion is dilute enough, higher-frequency (shorter-wavelength) light will be scattered more, and the emulsion will appear bluer – this is called the "Tyndall effect". If the emulsion is concentrated enough, the color will be distorted toward comparatively longer wavelengths, and will appear more yellow. This phenomenon is easily observable when comparing skimmed milk, which contains little fat, to cream, which contains a much higher concentration of milk fat. Two special classes of emulsions – microemulsions and nanoemulsions, with droplet sizes below 100 nm – appear translucent, because light waves are scattered by the droplets only if their sizes exceed about one-quarter of the wavelength of the incident light. Since the visible spectrum of light is composed of wavelengths between 390 and 750 nanometers (nm), if the droplet sizes in the emulsion are below about 100 nm, the light can penetrate through the emulsion without being scattered.
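The quarter-wavelength rule of thumb is easy to check numerically. The following Python sketch is illustrative only (the function name and sample sizes are ours, not from the source); it flags droplet sizes exceeding a quarter of the shortest visible wavelength, which reproduces the roughly 100 nm translucency threshold quoted above.

```python
# Illustrative sketch, not from the source: droplets scatter a visible
# wavelength when their size exceeds about one quarter of that wavelength,
# so emulsions with droplets below ~100 nm transmit light and look translucent.

SHORTEST_VISIBLE_NM = 390.0  # approximate lower bound of the visible spectrum

def scatters_visible_light(droplet_size_nm: float) -> bool:
    """True if droplets of this size scatter at least some visible light."""
    return droplet_size_nm > SHORTEST_VISIBLE_NM / 4  # 390 / 4 = 97.5 nm

for size in (50, 97.5, 100, 500, 1000):
    print(f"{size:>6} nm -> scatters visible light: {scatters_visible_light(size)}")
```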
Due to their similarity in appearance, translucent nanoemulsions and microemulsions are frequently confused. Unlike translucent nanoemulsions, which require specialized equipment to be produced, microemulsions are spontaneously formed by "solubilizing" oil molecules with a mixture of surfactants, co-surfactants, and co-solvents. The required surfactant concentration in a microemulsion is, however, several times higher than that in a translucent nanoemulsion, and significantly exceeds the concentration of the dispersed phase. Because of many undesirable side-effects caused by surfactants, their presence is disadvantageous or prohibitive in many applications. In addition, the stability of a microemulsion is often easily compromised by dilution, by heating, or by changing pH levels. Common emulsions are inherently unstable and, thus, do not tend to form spontaneously. Energy input – through shaking, stirring, homogenizing, or exposure to power ultrasound – is needed to form an emulsion. Over time, emulsions tend to revert to the stable state of the phases comprising the emulsion. An example of this is seen in the separation of the oil and vinegar components of vinaigrette, an unstable emulsion that will quickly separate unless shaken almost continuously. There are important exceptions to this rule – microemulsions are thermodynamically stable, while translucent nanoemulsions are kinetically stable. Whether an emulsion of oil and water turns into a "water-in-oil" emulsion or an "oil-in-water" emulsion depends on the volume fraction of both phases and on the type of emulsifier (surfactant) present (see Emulsifiers, below).
Instability Emulsion stability refers to the ability of an emulsion to resist change in its properties over time. There are four types of instability in emulsions: flocculation, coalescence, creaming/sedimentation, and Ostwald ripening. Flocculation occurs when there is an attractive force between the droplets, so they form flocs, like bunches of grapes. This process can be desirable, if controlled in its extent, as a way to tune physical properties of emulsions such as their flow behaviour. Coalescence occurs when droplets bump into each other and combine to form a larger droplet, so the average droplet size increases over time. Emulsions can also undergo creaming, where the droplets rise to the top of the emulsion under the influence of buoyancy, or under the influence of the centripetal force induced when a centrifuge is used. Creaming is a common phenomenon in dairy and non-dairy beverages (e.g. milk, coffee milk, almond milk, soy milk) and usually does not change the droplet size. Sedimentation is the opposite phenomenon to creaming and is normally observed in water-in-oil emulsions. Sedimentation happens when the dispersed phase is denser than the continuous phase, and gravitational forces pull the denser globules towards the bottom of the emulsion. Like creaming, sedimentation follows Stokes' law.
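As a minimal sketch of the Stokes' law dependence just mentioned, the Python below computes a droplet's terminal velocity, v = 2r²(ρd − ρc)g / (9μ); the sign tells you whether the droplet sediments (sinks) or creams (rises). The numerical values are illustrative assumptions, not data from the source.

```python
# Minimal sketch of the Stokes' law relation invoked above:
#   v = 2 r^2 (rho_d - rho_c) g / (9 mu)
# Positive v means the dispersed droplet sinks (sedimentation); negative v
# means it rises (creaming). All numbers below are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def stokes_velocity(radius_m: float, rho_dispersed: float,
                    rho_continuous: float, viscosity_pa_s: float) -> float:
    """Terminal velocity (m/s) of a small sphere in a viscous fluid."""
    return (2 * radius_m ** 2 * (rho_dispersed - rho_continuous) * G
            / (9 * viscosity_pa_s))

# A 1-micrometre-radius oil droplet (~910 kg/m^3) in water
# (~1000 kg/m^3, viscosity ~1e-3 Pa*s) creams slowly upward:
v = stokes_velocity(1e-6, 910.0, 1000.0, 1e-3)
print(f"{v:.2e} m/s")  # ~ -2.0e-07 m/s, i.e. the droplet rises
```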
An appropriate "surface active agent" (or "surfactant") can increase the kinetic stability of an emulsion so that the size of the droplets does not change significantly with time. The stability of an emulsion, like that of a suspension, can be studied in terms of zeta potential, which indicates the repulsion between droplets or particles. If the size and dispersion of droplets do not change over time, the emulsion is said to be stable. For example, oil-in-water emulsions containing mono- and diglycerides and milk protein as surfactant showed stable oil droplet size over 28 days of storage at 25 °C. Monitoring physical stability The stability of emulsions can be characterized using techniques such as light scattering, focused beam reflectance measurement, centrifugation, and rheology. Each method has advantages and disadvantages. Accelerating methods for shelf life prediction The kinetic process of destabilization can be rather long – up to several months, or even years for some products. Often the formulator must accelerate this process in order to test products in a reasonable time during product design. Thermal methods are the most commonly used – these consist of increasing the emulsion temperature to accelerate destabilization (if below critical temperatures for phase inversion or chemical degradation). Temperature affects not only the viscosity but also the interfacial tension in the case of non-ionic surfactants or, on a broader scope, interactions between droplets within the system. Storing an emulsion at high temperatures enables the simulation of realistic conditions for a product (e.g., a tube of sunscreen emulsion in a car in the summer heat), but also accelerates destabilization processes up to 200 times. Mechanical methods of acceleration, including vibration, centrifugation, and agitation, can also be used. These methods are almost always empirical, without a sound scientific basis. Emulsifiers An emulsifier (also known as an "emulgent") is a substance that stabilizes an emulsion by increasing its kinetic stability. Emulsifiers are part of a broader group of compounds known as surfactants, or "surface active agents". Surfactants (emulsifiers) are typically amphiphilic compounds, meaning they have a polar or hydrophilic (i.e. water-soluble) part and a non-polar (i.e. hydrophobic or lipophilic) part. Because of this, emulsifiers tend to be more or less soluble either in water or in oil. Emulsifiers that are more soluble in water (and, conversely, less soluble in oil) will generally form oil-in-water emulsions, while emulsifiers that are more soluble in oil will form water-in-oil emulsions (a rule of thumb sketched in code after this section). Examples of food emulsifiers are:
Egg yolk – in which the main emulsifying and thickening agent is lecithin; in fact, lecithos is the Greek word for egg yolk
Mustard – where a variety of chemicals in the mucilage surrounding the seed hull act as emulsifiers
Soy lecithin – another emulsifier and thickener
Pickering stabilization – uses particles under certain circumstances
Sodium phosphates – not directly an emulsifier, but modifies the behavior of other molecules, e.g. casein
Mono- and diglycerides – a common emulsifier found in many food products (coffee creamers, ice-creams, spreads, breads, cakes)
Sodium stearoyl lactylate
DATEM (diacetyl tartaric acid esters of mono- and diglycerides) – an emulsifier used primarily in baking
Simple cellulose – a particulate emulsifier derived from plant material using only water
Proteins – those with both hydrophilic and hydrophobic regions, e.g. sodium caseinate, as in meltable cheese product
Detergents are another class of surfactant; they interact physically with both oil and water, thus stabilizing the interface between the oil and water droplets in suspension. This principle is exploited in soap, to remove grease for the purpose of cleaning. Many different emulsifiers are used in pharmacy
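The solubility rule of thumb stated before the list of examples is often expressed on Griffin's HLB (hydrophilic-lipophilic balance) scale, which the text itself does not mention; the sketch below is a hedged illustration using the usual rough HLB conventions, not a definitive classifier.

```python
# Hedged illustration of the solubility rule above, via Griffin's HLB scale
# (an assumption of this sketch, not named in the source): high-HLB
# (water-soluble) emulsifiers favour oil-in-water emulsions, low-HLB
# (oil-soluble) ones favour water-in-oil. Thresholds are rough conventions.

def likely_emulsion_type(hlb: float) -> str:
    """Guess which emulsion type an emulsifier of the given HLB favours."""
    if hlb < 6:
        return "water-in-oil (oil-soluble emulsifier)"
    if hlb > 8:
        return "oil-in-water (water-soluble emulsifier)"
    return "borderline; often a poor emulsifier on its own"

# Commonly tabulated HLB values: Span 80 ~ 4.3, Tween 80 ~ 15.
for name, hlb in (("Span 80", 4.3), ("Tween 80", 15.0)):
    print(f"{name} (HLB {hlb}): {likely_emulsion_type(hlb)}")
```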
He was Governor of the Isle of Wight from 20 July 1965 and then the first Lord Lieutenant of the Isle of Wight from 1 April 1974. Mountbatten was elected a Fellow of the Royal Society and had received an honorary doctorate from Heriot-Watt University in 1968. In 1969, Mountbatten tried unsuccessfully to persuade his cousin, the Spanish pretender Infante Juan, Count of Barcelona, to ease the eventual accession of his son, Juan Carlos, to the Spanish throne by signing a declaration of abdication while in exile. The next year Mountbatten attended an official White House dinner during which he took the opportunity to have a 20-minute conversation with Richard Nixon and Secretary of State William P. Rogers, about which he later wrote, "I was able to talk to the President a bit about both Tino [Constantine II of Greece] and Juanito [Juan Carlos of Spain] to try and put over their respective points of view about Greece and Spain, and how I felt the US could help them." In January 1971, Nixon hosted Juan Carlos and his wife Sofia (sister of the exiled King Constantine) during a visit to Washington, and later that year The Washington Post published an article alleging that Nixon's administration was seeking to persuade Franco to retire in favour of the young Bourbon prince. From 1967 until 1978, Mountbatten was president of the United World Colleges Organisation, then represented by a single college: that of Atlantic College in South Wales. Mountbatten supported the United World Colleges and encouraged heads of state, politicians, and personalities throughout the world to share his interest. Under his presidency and personal involvement, the United World College of South East Asia was established in Singapore in 1971, followed by the United World College of the Pacific in Victoria, British Columbia, in 1974. In 1978, Mountbatten passed the presidency of the college to his great-nephew, the Prince of Wales. Mountbatten also helped to launch the International Baccalaureate; in 1971 he presented the first IB diplomas in the Greek Theatre of the International School of Geneva, Switzerland. In 1975 Mountbatten finally visited the Soviet Union, leading the delegation from the UK as personal representative of Queen Elizabeth II at the celebrations to mark the 30th anniversary of Victory Day in World War II in Moscow. Alleged plots against Harold Wilson Peter Wright, in his 1987 book Spycatcher, claimed that in May 1968 Mountbatten attended a private meeting with press baron Cecil King and the government's Chief Scientific Adviser, Solly Zuckerman. Wright alleged that "up to thirty" MI5 officers had joined a secret campaign to undermine the crisis-stricken Labour government of Harold Wilson and that King was an MI5 agent. In the meeting, King allegedly urged Mountbatten to become the leader of a government of national salvation. Solly Zuckerman pointed out that it was "rank treachery", and the idea came to nothing because of Mountbatten's reluctance to act. In contrast, Andrew Lownie has suggested that it took the intervention of the Queen to dissuade Mountbatten from plotting against Wilson. In 2006, the BBC documentary The Plot Against Harold Wilson alleged that there had been another plot involving Mountbatten to oust Wilson during his second term in office (1974–1976). The period was characterised by high inflation, increasing unemployment, and widespread industrial unrest.
The alleged plot revolved around right-wing former military figures who were supposedly building private armies to counter the perceived threat from trade unions and the Soviet Union. They believed that the Labour Party was unable and unwilling to counter these developments and that Wilson was either a Soviet agent or at the very least a Communist sympathiser – claims Wilson strongly denied. The documentary makers alleged that a coup was planned to overthrow Wilson and replace him with Mountbatten, using the private armies and sympathisers in the military and MI5. The first official history of MI5, The Defence of the Realm (2009), implied that there was a plot against Wilson and that MI5 did have a file on him. Yet it also made clear that the plot was in no way official and that any activity centred on a small group of discontented officers. This much had already been confirmed by former cabinet secretary Lord Hunt, who concluded in a secret inquiry conducted in 1996 that "there is absolutely no doubt at all that a few, a very few, malcontents in MI5 ... a lot of them like Peter Wright who were right-wing, malicious and had serious personal grudges – gave vent to these and spread damaging malicious stories about that Labour government." Personal life Marriage Mountbatten was married on 18 July 1922 to Edwina Cynthia Annette Ashley, daughter of Wilfred William Ashley, later 1st Baron Mount Temple, himself a grandson of the 7th Earl of Shaftesbury. She was the favourite granddaughter of the Edwardian magnate Sir Ernest Cassel and the principal heir to his fortune. The couple spent heavily on households, luxuries, and entertainment. There followed a honeymoon tour of European royal courts and America which included a visit to Niagara Falls (because "all honeymooners went there"). Mountbatten admitted: "Edwina and I spent all our married lives getting into other people's beds." He maintained an affair for several years with Yola Letellier, the wife of Henri Letellier, publisher of Le Journal and mayor of Deauville (1925–28). Yola Letellier's life story was the inspiration for Colette's novel Gigi. After Edwina died in 1960, Mountbatten was involved in relationships with young women, according to his daughter Patricia, his secretary John Barratt, his valet Bill Evans, and William Stadiem, an employee of Madame Claude. He had a long-running affair with American actress Shirley MacLaine, whom he met in the 1960s. Sexuality Ron Perks, Mountbatten's driver in Malta in 1948, alleged that Mountbatten used to visit the Red House, an upmarket gay brothel in Rabat used by naval officers. Andrew Lownie, a fellow of the Royal Historical Society, wrote that the United States Federal Bureau of Investigation (FBI) maintained files regarding Mountbatten's alleged homosexuality. Lownie also interviewed several young men who claimed to have been in a relationship with Mountbatten. John Barratt, Mountbatten's personal and private secretary for 20 years, said Mountbatten was not a homosexual, and that it would have been impossible for such a fact to have been hidden from him. Allegations of sexual abuse On 20 August 2019, files became public showing that, as early as the 1940s, the FBI had recorded allegations that Mountbatten was a homosexual and a pedophile.
The FBI file on Mountbatten, begun after he took on the role of Supreme Allied Commander in Southeast Asia in 1944, describes Mountbatten and his wife Edwina as "persons of extremely low morals", and contains a claim by American author Elizabeth, Baroness Decies, that Mountbatten was known to be a homosexual and had "a perversion for young boys". Norman Nield, Mountbatten's driver from 1942 to 1943, told the tabloid New Zealand Truth that he transported to Mountbatten's official residence young boys aged 8 to 12 who had been procured for the Admiral, and that he was paid to keep quiet. Robin Bryans had also claimed to the Irish magazine Now that Mountbatten and Anthony Blunt, along with others, were part of a ring that engaged in homosexual orgies and procured boys in their first year at public schools such as the Portora Royal School in Enniskillen. Former residents of the Kincora Boys' Home in Belfast have asserted that they were trafficked to Mountbatten at Classiebawn Castle, his residence in Mullaghmore, County Sligo. These claims were dismissed by the Historical Institutional Abuse (HIA) Inquiry. The inquiry stated that the article making the original allegations "did not give any basis for the assertions that any of these people [Mountbatten and others] were connected with Kincora". Daughter as heir Lord and Lady Mountbatten had two daughters: Patricia Knatchbull (14 February 1924 – 13 June 2017), sometime lady-in-waiting to Queen Elizabeth II, and Lady Pamela Hicks (born 19 April 1929), who accompanied them to India in 1947–1948 and was also sometime lady-in-waiting to the Queen. Since Mountbatten had no sons when he was created Viscount Mountbatten of Burma, of Romsey in the County of Southampton on 27 August 1946 and then Earl Mountbatten of Burma and Baron Romsey, in the County of Southampton on 28 October 1947, the Letters Patent were drafted such that in the event he left no sons or issue in the male line, the titles could pass to his daughters, in order of seniority of birth, and to their male heirs respectively. Leisure interests Like many members of the royal family, Mountbatten was an aficionado of polo. He received US patent 1,993,334 in 1931 for a polo stick. Mountbatten introduced the sport to the Royal Navy in the 1920s and wrote a book on the subject. He also served as Commodore of Emsworth Sailing Club in Hampshire from 1931. He was a long-serving Patron of the Society for Nautical Research (1951–1979). Mentorship of the Prince of Wales Mountbatten was a strong influence in the upbringing of his grand-nephew, Charles, Prince of Wales, and later served as his mentor – "Honorary Grandfather" and "Honorary Grandson", they fondly called each other, according to the Jonathan Dimbleby biography of the Prince – though according to both the Ziegler biography of Mountbatten and the Dimbleby biography of the Prince, the results may have been mixed. From time to time he strongly upbraided the Prince for showing tendencies towards the idle pleasure-seeking dilettantism of his predecessor as Prince of Wales, King Edward VIII, whom Mountbatten had known well in their youth. Yet he also encouraged the Prince to enjoy the bachelor life while he could, and then to marry a young and inexperienced girl so as to ensure a stable married life.
Mountbatten's qualification for offering advice to this particular heir to the throne was unique; it was he who had arranged the visit of King George VI and Queen Elizabeth to Dartmouth Royal Naval College on 22 July 1939, taking care to include the young Princesses Elizabeth and Margaret in the invitation, but assigning his nephew, Cadet Prince Philip of Greece, to keep them amused while their parents toured the facility. This was the first recorded meeting of Charles's future parents. But a few months later, Mountbatten's efforts nearly came to naught when he received a letter from his sister Alice in Athens informing him that Philip was visiting her and had agreed to repatriate permanently to Greece. Within days, Philip received a command from his cousin and sovereign, King George II of Greece, to resume his naval career in Britain which, though given without explanation, the young prince obeyed. In 1974, Mountbatten began corresponding with Charles about a potential marriage to his granddaughter, the Hon. Amanda Knatchbull. It was about this time that he also recommended that the 25-year-old prince get on with "sowing some wild oats". Charles dutifully wrote to Amanda's mother (who was also his godmother), Lady Brabourne, about his interest. Her answer was supportive, but she advised him that she thought her daughter still rather young to be courted. In February 1975, Charles visited New Delhi to play polo and was shown around Rashtrapati Bhavan, the former Viceroy's House, by Mountbatten. Four years later, Mountbatten secured an invitation for himself and Amanda to accompany Charles on his planned 1980 tour of India. Their fathers promptly objected. Prince Philip thought that the Indian public's reception would more likely reflect response to the uncle than to the nephew. Lord Brabourne counselled that the intense scrutiny of the press would be more likely to drive Mountbatten's godson and granddaughter apart than together. Charles was rescheduled to tour India alone, but Mountbatten did not live to the planned date of departure. When Charles finally did propose marriage to Amanda later in 1979, the circumstances had changed and she refused him. Television appearances On 27 April 1977, shortly before his 77th birthday, Mountbatten became the first member of the Royal Family to appear on the TV guest show This Is Your Life. Death Assassination Mountbatten usually holidayed at his summer home, Classiebawn Castle, on the Mullaghmore Peninsula in County Sligo, in the north-west of Ireland. The village was only a short distance from the border with County Fermanagh in Northern Ireland, and near an area known to be used as a cross-border refuge by IRA members. In 1978, the IRA had allegedly attempted to shoot Mountbatten while he was aboard his boat, but poor weather had prevented the sniper from taking his shot. On 27 August 1979, Mountbatten went lobster-potting and tuna fishing in his wooden boat, Shadow V, which had been moored in the harbour at Mullaghmore. IRA member Thomas McMahon had slipped onto the unguarded boat that night and attached a radio-controlled bomb. When Mountbatten and his party had taken the boat just a few hundred yards from the shore, the bomb was detonated. The boat was destroyed by the force of the blast, and Mountbatten's legs were almost blown off. Mountbatten, then aged 79, was pulled alive from the water by nearby fishermen, but died from his injuries before being brought to shore.
Also aboard the boat were his elder daughter Patricia, Lady Brabourne; her husband Lord Brabourne; their twin sons Nicholas and Timothy Knatchbull; Lord Brabourne's mother Doreen, Dowager Lady Brabourne; and Paul Maxwell, a young crew member from Enniskillen in County Fermanagh. Nicholas (aged 14) and Paul (aged 15) were killed by the blast and the others were seriously injured. Doreen, Dowager Lady Brabourne (aged 83), died from her injuries the following day. The attack triggered outrage and condemnation around the world. The Queen received messages of condolence from leaders including American President Jimmy Carter and Pope John Paul II. Carter expressed his "profound sadness" at the death. Prime Minister Margaret Thatcher said: "His death leaves a gap that can never be filled. The British people give thanks for his life and grieve at his passing." George Colley, the Tánaiste (Deputy head of government) of the Republic of Ireland, said: "No effort will be spared to bring those responsible to justice. It is understood that subversives have claimed responsibility for the explosion. Assuming that police investigations substantiate the claim, I know that the Irish people will join me in condemning this heartless and terrible outrage." The IRA issued a statement afterward, saying: "The IRA claim responsibility for the execution of Lord Louis Mountbatten. This operation is one of the discriminate ways we can bring to the attention of the English people the continuing occupation of our country. ... The death of Mountbatten and the tributes paid to him will be seen in sharp contrast to the apathy of the British Government and the English people to the deaths of over three hundred British soldiers, and the deaths of Irish men, women, and children at the hands of their forces." Six weeks later, Sinn Féin vice-president Gerry Adams said of Mountbatten's death: "The IRA gave clear reasons for the execution. I think it is unfortunate that anyone has to be killed, but the furor created by Mountbatten's death showed up the hypocritical attitude of the media establishment. As a member of the House of Lords, Mountbatten was an emotional figure in both British and Irish politics. What the IRA did to him is what Mountbatten had been doing all his life to other people; and with his war record I don't think he could have objected to dying in what was clearly a war situation. He knew the danger involved in coming to this country. In my opinion, the IRA achieved its objective: people started paying attention to what was happening in Ireland." Adams later said in an interview, "I stand over what I said then. I'm not one of those people that engages in revisionism. Thankfully the war is over." On the day of the bombing, the IRA also ambushed and killed eighteen British soldiers at the gates of Narrow Water Castle, just outside Warrenpoint, in County Down in Northern Ireland, sixteen of them from the Parachute Regiment, in what became known as the Warrenpoint ambush. It was the deadliest attack on the British Army during the Troubles. Funeral On 5 September 1979, Mountbatten received a ceremonial funeral at Westminster Abbey, which was attended by the Queen, the royal family, and members of the European royal houses. Watched by thousands of people, the funeral procession, which started at Wellington Barracks, included representatives of all three British Armed Services, and military contingents from Burma, India, the United States (represented by 70 sailors of the U.S. Navy and 50 U.S.
Marines), France (represented by the French Navy) and Canada. His coffin was drawn on a gun carriage by 118 Royal Navy ratings. During the televised service, the Prince of Wales read the lesson from Psalm 107. In an address, the Archbishop of Canterbury, Donald Coggan, highlighted his various achievements and his "lifelong devotion to the Royal Navy". After the public ceremonies, which he had planned himself, Mountbatten was buried in Romsey Abbey. As part of the funeral arrangements, his body had been embalmed by Desmond Henley. Aftermath Two hours before the bomb detonated, Thomas McMahon had been arrested at a Garda checkpoint between Longford and Granard on suspicion of driving a stolen vehicle. He was tried for the assassinations in Ireland and convicted on 23 November 1979, based on forensic evidence supplied by James O'Donovan showing flecks of paint from the boat and traces of nitroglycerine on his clothes. He was released in 1998 under the terms of the Good Friday Agreement. On hearing of Mountbatten's death, the then Master of the Queen's Music, Malcolm Williamson, wrote the Lament in Memory of Lord Mountbatten of Burma for violin and string orchestra. The 11-minute work was given its first performance on 5 May 1980 by the Scottish Baroque Ensemble, conducted by Leonard Friedman. Legacy Mountbatten's faults, according to his biographer Philip Ziegler, like everything else about him, "were on the grandest scale. His vanity, though child-like, was monstrous, his ambition unbridled ... He sought to rewrite history with cavalier indifference to the facts to magnify his own achievements." However, Ziegler concludes that Mountbatten's virtues outweighed his defects: He was generous and loyal ... He was warm-hearted, predisposed to like everyone he met, quick-tempered but never bearing grudges ... His tolerance was extraordinary; his readiness to respect and listen to the views of others was remarkable throughout his life. Ziegler argues he was truly a great man, although not profound or original. What he could do with superlative aplomb was to identify the object at which he was aiming, and force it through to its conclusion. A powerful, analytic mind of crystalline clarity, a superabundance of energy, great persuasive powers, endless resilience in the face of setback or disaster rendered him the most formidable of operators. He was infinitely resourceful, quick in his reactions, always ready to cut his losses and start again ... He was an executor of policy rather than an initiator; but whatever the policy, he espoused it with such energy and enthusiasm, made it so completely his own, that it became identified with him and, in the eyes of the outside world as well as his own, his creation. Others were not so conflicted. Field Marshal Sir Gerald Templer, the former Chief of the Imperial General Staff, once told him, "You are so crooked, Dickie, that if you swallowed a nail, you would shit a corkscrew". Mountbatten's most controversial legacy came in his support for the burgeoning nationalist movements which grew up in the shadow of the Japanese occupation. His priority was to maintain practical, stable government, but driving him was an idealistic belief that every people should be allowed to control its own destiny. Critics said he was too ready to overlook their faults, and especially their subordination to communist control.
Ziegler says that in Malaya, where the main resistance to the Japanese came from Chinese who were under considerable communist influence, "Mountbatten proved to have been naïve in his assessment. ... He erred, however, not because he was 'soft on Communism' ... but from an over-readiness to assume the best of those with whom he had dealings." Furthermore, Ziegler argues, he was following a practical policy based on the assumption that it would take a long and bloody struggle to drive the Japanese out, and he needed the support of all the anti-Japanese elements, most of which were either nationalists or communists. Mountbatten took pride in enhancing intercultural understanding, and in 1984, with his elder daughter as patron, the Mountbatten Institute was established to allow young adults the opportunity to enhance their intercultural appreciation and experience by spending time abroad. The IET annually awards the Mountbatten Medal for an outstanding contribution, or contributions over a period, to the promotion of electronics or information technology and their application. Canada's capital city of Ottawa, Ontario, named Mountbatten Avenue in his memory. The Mountbatten estate in Singapore and Mountbatten MRT station were named after him. Mountbatten's personal papers, containing approximately 250,000 papers and 50,000 photographs, are preserved in the University of Southampton Library.
Awards and decorations He was appointed personal aide-de-camp by Edward VIII, George VI and Elizabeth II, and therefore bore the unusual distinction of being allowed to wear three royal cyphers on his shoulder straps. Further reading
Coll, Rebecca. "Autobiography and history on screen: The Life and Times of Lord Mountbatten." Historical Journal of Film, Radio and Television 37.4 (2017): 665–682.
Grove, Eric, and Sally Rohan. "The Limits of Opposition: Admiral Earl Mountbatten of Burma, First Sea Lord and Chief of Naval Staff." Contemporary British History 13.2 (1999): 98–116.
McLynn, Frank. The Burma Campaign: Disaster into Triumph 1942–1945 (Yale UP, 2011).
Neillands, Robin. The Dieppe Raid: The Story of the Disastrous 1942 Expedition (Indiana UP, 2005).
Ritter, Jonathan Templin. Stilwell and Mountbatten in Burma: Allies at War, 1943–1944 (U of North Texas Press, 2017).
Smith, Adrian. "Command and Control in Postwar Britain: Defence Decision-making in the United Kingdom, 1945–1984." Twentieth-Century British History 2 (1991): 291–327.
Smith, Adrian. "Mountbatten goes to the movies: Promoting the heroic myth through cinema." Historical Journal of Film, Radio and Television 26.3 (2006): 395–416.
Villa, Brian Loring, and Peter J. Henshaw. "The Dieppe Raid Debate." Canadian Historical Review 79.2 (1998): 304–315.
External links Tribute & Memorial Website to Louis, 1st Earl Mountbatten of Burma 70th Anniversary of Indian Independence – Mountbatten: The Last Viceroy – UK Parliament Living Heritage Papers of Louis, Earl Mountbatten of Burma
to resign from the committee the next year. Gerry and other prominent Marbleheaders had established a hospital for performing smallpox inoculations on Cat Island; because the means of transmission of the disease were not known at the time, fears amongst the local population led to protests which escalated into violence that wrecked the facilities and threatened the proprietors' other properties. Gerry reentered politics after the Boston Port Act closed that city's port in 1774, and Marblehead became an alternative port to which relief supplies from other colonies could be delivered. As one of the town's leading merchants and Patriots, Gerry played a major role in ensuring the storage and delivery of supplies from Marblehead to Boston, interrupting those activities only to care for his dying father. He was elected as a representative to the First Continental Congress in September 1774, but declined, still grieving the loss of his father. Congress and Revolution Gerry was elected to the provincial assembly, which reconstituted itself as the Massachusetts Provincial Congress after British Governor Thomas Gage dissolved the body in October 1774. He was assigned to its committee of safety, responsible for ensuring that the province's limited supplies of weapons and gunpowder remained out of British Army hands. His actions were partly responsible for the storage of weapons and ammunition in Concord; these stores were the target of the British raiding expedition that sparked the start of the American Revolutionary War with the Battles of Lexington and Concord in April 1775. (Gerry was staying at an inn at Menotomy, now Arlington, when the British marched through on the night of April 18.) During the Siege of Boston that followed, Gerry continued to take a leading role in supplying the nascent Continental Army, something he would continue to do as the war progressed. He leveraged business contacts in France and Spain to acquire not just munitions, but supplies of all types, and was involved in the transfer of financial subsidies from Spain to Congress. He sent ships to ports all along the American coast and dabbled in financing privateering operations against British shipping. Unlike some other merchants, there is no evidence that Gerry profiteered directly from the hostilities (he spoke out against price gouging and in favor of price controls), although his war-related merchant activities notably increased the family's wealth. His gains were tempered to some extent by the precipitous decline in the value of paper currencies, which he held in large quantities and speculated in. Gerry served in the Second Continental Congress from February 1776 to 1780, when matters of the ongoing war occupied the body's attention. He was influential in convincing several delegates to support passage of the Declaration of Independence in the debates held during the summer of 1776; John Adams wrote of him, "If every Man here was a Gerry, the Liberties of America would be safe against the Gates of Earth and Hell." He was implicated as a member of the so-called "Conway Cabal", a group of Congressmen and military officers who were dissatisfied with the performance of General George Washington during the 1777 military campaign. However, Gerry took Pennsylvania leader Thomas Mifflin, one of Washington's critics, to task early in the episode and specifically denied knowledge of any sort of conspiracy against Washington in February 1778. 
Gerry's political philosophy was one of limited central government, and he regularly advocated for the maintenance of civilian control of the military. He held these positions fairly consistently throughout his political career (wavering principally on the need for stronger central government in the wake of the 1786–87 Shays's Rebellion) and was well known for his personal integrity. In later years he opposed the idea of political parties, remaining somewhat distant from both the developing Federalist and Democratic-Republican parties until later in his career. It was not until 1800 that he formally associated with the Democratic-Republicans in opposition to what he saw as attempts by the Federalists to centralize too much power in the national government. In 1780, he had resigned from the Continental Congress and refused offers from the state legislature to return to that body. He also refused appointment to the state senate, claiming he would be more effective in the state's lower chamber, and refused appointment as a county judge, comparing the offer by Governor John Hancock to those made by royally appointed governors to benefit their political allies. He was elected a fellow of the American Academy of Arts and Sciences in 1781. Gerry was convinced to rejoin the Confederation Congress in 1783, when the state legislature agreed to support his call for needed reforms. He served in that body until September 1785, during which time it met in New York City. The following year he married Ann Thompson, the daughter of a wealthy New York merchant; she was twenty years his junior, and his best man was his good friend James Monroe. The couple had ten children between 1787 and 1801, straining Ann's health. The war had made Gerry sufficiently wealthy that when it ended he sold off his merchant interests and began investing in land. In 1787, he purchased the Cambridge, Massachusetts, estate of the last royal lieutenant governor of Massachusetts, Thomas Oliver, which had been confiscated by the state. This property, known as Elmwood, became the family home for the rest of Gerry's life. He continued to own property in Marblehead and bought several properties in other Massachusetts communities. He also owned shares in the Ohio Company, prompting some political opponents to characterize him as an owner of vast tracts of western lands. Constitutional Convention Gerry played a major role in the Constitutional Convention, held in Philadelphia during the summer of 1787. In its deliberations, he consistently advocated for a strong delineation between state and federal government powers, with state legislatures shaping the membership of federal offices. Gerry's opposition to the popular election of representatives was rooted in part in the events of Shays's Rebellion, a populist uprising in western Massachusetts in the year preceding the convention. Despite that position, he also sought to maintain individual liberties by providing checks on government power that might abuse or limit those freedoms. He supported the idea that the Senate's composition should not be determined by population; the view that it should instead be composed of equal numbers of members for each state prevailed in the Connecticut Compromise. The compromise was adopted on a narrow vote in which the Massachusetts delegation was divided, Gerry and Caleb Strong voting in favor. Gerry further proposed that the senators of a state, rather than casting a single vote on behalf of the state, vote instead as individuals.
Gerry was also vocal in opposing the Three-fifths Compromise, which counted slaves as three-fifths of a free person for the purposes of apportionment in the House of Representatives, whereas counting each slave individually would have given southern slave states a decided advantage. Gerry opposed slavery and said the constitution should have "nothing to do" with slavery so as "not to sanction it." Gerry would ultimately not sign the final draft of the constitution because it allowed for slavery. Because of his fear of demagoguery and his belief that the people of the United States could be easily misled, Gerry also advocated indirect elections. Although he was unsuccessful in obtaining them for the lower house of Congress, Gerry did obtain such indirect elections for the Senate, whose members were to be selected by the state legislatures. Gerry also advanced numerous | narrow victory. Republicans cast Gore as an ostentatious British-loving Tory who wanted to restore the monarchy (his parents were Loyalists during the Revolution), and Gerry as a patriotic American, while Federalists described Gerry as a "French partizan" and Gore as an honest man devoted to ridding the government of foreign influence. A temporary lessening in the threat of war with Britain aided Gerry. The two battled again in 1811, with Gerry once again victorious in a highly acrimonious campaign. Gerry's first year as governor was less controversial than his second, because the Federalists controlled the state senate. He preached moderation in the political discourse, noting that it was important that the nation present a unified front in its dealings with foreign powers. In his second term, with full Republican control of the legislature, he became notably more partisan, purging much of the state government of Federalist appointees. The legislature also enacted "reforms" of the court system that resulted in an increase in the number of judicial appointments, which Gerry filled with Republican partisans. However, infighting within the party and a shortage of qualified candidates played against Gerry, and the Federalists scored points by complaining vocally about the partisan nature of the reforms. Other legislation passed during Gerry's second year included a bill broadening the membership of Harvard's Board of Overseers to diversify its religious membership, and another that liberalized religious taxes. The Harvard bill had a significant political slant, because the recent split between orthodox Congregationalists and Unitarians also divided the state to some extent along party lines, and Federalist Unitarians had recently gained control over the Harvard board. In 1812, the state adopted new constitutionally mandated electoral district boundaries. The Republican-controlled legislature had created district boundaries designed to enhance its party's control over state and national offices, leading to some oddly shaped legislative districts. Although Gerry was unhappy about the highly partisan districting (according to his son-in-law, he thought it "highly disagreeable"), he signed the legislation. The shape of one of the state senate districts in Essex County was compared to a salamander by a local Federalist newspaper in a political cartoon, calling it a "Gerry-mander". Ever since, the creation of such districts has been called gerrymandering. Gerry also engaged in partisan investigations of potential libel against him by elements of the Federalist press, further damaging his popularity with moderates.
The redistricting controversy, along with the libel investigation and the impending War of 1812, contributed to Gerry's defeat in 1812 (once again at the hands of Caleb Strong, whom the Federalists had brought out of retirement). The gerrymandering of the state Senate was a notable success in the 1812 election: the body was thoroughly dominated by Republicans, even though the house and the governor's seat went to Federalists by substantial margins. Vice Presidency and death Gerry's financial difficulties prompted him to ask President James Madison for a federal position after his loss in the 1812 gubernatorial election (which was held early in the year). He was chosen by the party's Congressional nominating caucus to be Madison's vice presidential running mate in the 1812 presidential election, although the nomination was first offered to John Langdon. He was viewed as a relatively safe choice who would attract Northern votes but not pose a threat to James Monroe, who was thought likely to succeed Madison. Madison narrowly won re-election, and Gerry took the oath of office at Elmwood in March 1813. At that time the office of vice president was largely a sinecure; Gerry's duties included advancing the administration's agenda in Congress and dispensing patronage positions in New England. Gerry's actions in support of the War of 1812 had a partisan edge: he expressed concerns over a possible Federalist seizure of Fort Adams (as Boston's Fort Independence was then known) as a prelude to Anglo-Federalist cooperation and sought the arrest of printers of Federalist newspapers. On November 23, 1814, Gerry fell seriously ill while visiting Joseph Nourse of the Treasury Department, and he died not long after returning to his home in the Seven Buildings. He is buried in the Congressional Cemetery in Washington, D.C., with a memorial by John Frazee. He is the only signer of the Declaration of Independence buried in the nation's capital city. The estate he left his wife and children was rich in land and poor in cash, but he had managed to repay his brother's debts with his pay as vice president. Aged 68 at the start of his vice presidency, he was the oldest person to become vice president until Charles Curtis in 1929. Legacy Gerry is generally remembered for the use of his name in the word gerrymander, for his refusal to sign the United States Constitution, and for his role in the XYZ Affair. His path through the politics of the age has been difficult to characterize. Early biographers, including his son-in-law James T. Austin and Samuel Eliot Morison, struggled to explain his apparent changes in position. Biographer George Athan Billias posits that Gerry was a consistent advocate and practitioner of republicanism as it was originally envisioned, and that his role in the Constitutional Convention had a significant impact on the document it eventually produced. Gerry had ten children, of whom nine survived into adulthood:
Catharine Gerry (1787–1850)
Eliza Gerry (1791–1882)
Ann Gerry (1791–1883)
Elbridge Gerry, Jr. (1793–1867)
Thomas Russell Gerry (1794–1848), who married Hannah Green Goelet (1804–1845)
Helen Maria Gerry (1796–1864)
James Thompson Gerry (1797–1854), who left West Point upon his father's death and was commander of the war-sloop USS Albany; the sloop disappeared with all hands on September 28 or 29, 1854, near the West Indies
Eleanor Stanford Gerry (1800–1871)
Emily Louisa Gerry (1802–1894)
Gerry's grandson Elbridge Thomas Gerry became a distinguished lawyer and philanthropist in New York.
His great-grandson, Peter G. Gerry, was a member of the U.S. House of Representatives and later a U.S. Senator from Rhode Island. Gerry is depicted in two of John Trumbull's paintings, the Declaration of Independence and General George Washington Resigning His Commission. Both are on view in the rotunda of the United States Capitol. The upstate New York town of Elbridge is believed to have been named in his honor, as is the western New York town of Gerry. The town of Phillipston, Massachusetts was originally incorporated in 1786 under the name Gerry in his honor but was changed to its present name after the town submitted a petition in 1812, citing Democratic-Republican support for the War of 1812. Gerry's Landing Road in Cambridge, Massachusetts, is located near the Eliot Bridge not far from Elmwood. During the 19th century, the area was known as Gerry's Landing (formerly known as Sir Richard's Landing) and was used by a Gerry relative for a short time as a landing and storehouse. The supposed house of his birth, the Elbridge Gerry House (it is uncertain whether he was born in the house currently standing on the site or an earlier structure) stands in Marblehead, and Marblehead's Elbridge Gerry School is named in his honor. See also Memorial to the 56 Signers of the Declaration of Independence Notes References Bibliography Further reading Billias, George. Elbridge Gerry: Founding Father and Republican Statesman. New York: McGraw-Hill Book Company, 1976. External links Biography by Rev. Charles A. Goodrich, 1856 A New Nation Votes: American Election Returns 1787–1825 Delegates to the Constitutional Convention: Massachusetts (Brief Biography of Gerry) Gerry family archive at Hartwick College Elbridge Gerry, the Unfairly Maligned Revolutionary at New England Historical Society
power to severely limit the number of reasonable combinations they needed to check every day, leading to the breaking of the Enigma Machine. Modern Today, encryption is used in the transfer of communication over the Internet for security and commerce. As computing power continues to increase, computer encryption is constantly evolving to prevent attacks. Encryption in cryptography In the context of cryptography, encryption serves as a mechanism to ensure confidentiality. Since data may be visible on the Internet, sensitive information such as passwords and personal communication may be exposed to potential interceptors. The process of encrypting and decrypting messages involves keys. The two main types of keys in cryptographic systems are symmetric-key and public-key (also known as asymmetric-key). Many complex cryptographic algorithms often use simple modular arithmetic in their implementations. Types Symmetric key In symmetric-key schemes, the encryption and decryption keys are the same. Communicating parties must have the same key in order to achieve secure communication. The German Enigma Machine utilized a new symmetric key each day for encoding and decoding messages. Public key In public-key encryption schemes, the encryption key is published for anyone to use and encrypt messages. However, only the receiving party has access to the decryption key that enables messages to be read. Public-key encryption was first described in a secret document in 1973; beforehand, all encryption schemes were symmetric-key (also called private-key). Although published only subsequently, the work of Diffie and Hellman appeared in a journal with a large readership, and the value of the methodology was explicitly described. The method became known as the Diffie–Hellman key exchange. RSA (Rivest–Shamir–Adleman) is another notable public-key cryptosystem. Created in 1978, it is still used today for applications involving digital signatures. Using number theory, the RSA algorithm selects two prime numbers, which help generate both the encryption and decryption keys.
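The modular arithmetic underlying such public-key schemes is easy to sketch in code. Below is a minimal, illustrative Diffie–Hellman exchange in Python; the tiny prime modulus and the generator are assumptions chosen for readability and would be far too small for real use.

import secrets

# Toy public parameters (assumed for illustration; real deployments use
# primes of 2048 bits or more).
p = 0xFFFFFFFB  # a small prime modulus
g = 5           # an assumed generator

def keypair():
    private = secrets.randbelow(p - 2) + 1  # secret exponent
    public = pow(g, private, p)             # g^private mod p
    return private, public

a_priv, a_pub = keypair()  # Alice
b_priv, b_pub = keypair()  # Bob

# Each party raises the other's public value to its own secret exponent;
# modular exponentiation guarantees both arrive at the same shared key.
assert pow(b_pub, a_priv, p) == pow(a_pub, b_priv, p)

An eavesdropper sees only p, g and the two public values; recovering the shared key from those is the discrete logarithm problem.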
A publicly available public-key encryption application called Pretty Good Privacy (PGP) was written in 1991 by Phil Zimmermann, and distributed free of charge with source code. PGP was purchased by Symantec in 2010 and is regularly updated. Uses Encryption has long been used by militaries and governments to facilitate secret communication. It is now commonly used in protecting information within many kinds of civilian systems. For example, the Computer Security Institute reported that in 2007, 71% of companies surveyed utilized encryption for some of their data in transit, and 53% utilized encryption for some of their data in storage. Encryption can be used to protect data "at rest", such as information stored on computers and storage devices (e.g. USB flash drives). In recent years, there have been numerous reports of confidential data, such as customers' personal records, being exposed through loss or theft of laptops or backup drives; encrypting such files at rest helps protect them if physical security measures fail. Digital rights management systems, which prevent unauthorized use or reproduction of copyrighted material and protect software against reverse engineering (see also copy protection), are another somewhat different example of using encryption on data at rest. Encryption is also used to protect data in transit, for example data being transferred via networks (e.g. the Internet, e-commerce), mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices and bank automatic teller machines. There have been numerous reports of data in transit being intercepted in recent years. Data should also be encrypted when transmitted across networks in order to protect against eavesdropping of network traffic by unauthorized users. Data erasure Conventional methods for permanently deleting data from a storage device involve overwriting the device's whole content with zeros, ones, or other patterns – a process which can take a significant amount of time, depending on the capacity and the type of storage medium. Cryptography offers a way of making the erasure almost instantaneous. This method is called crypto-shredding. An example implementation of this method can be found on iOS devices, where the cryptographic key is kept in a dedicated 'effaceable storage'. Because the key is stored on the same device, this setup on its own does not offer full privacy or security protection if an unauthorized person gains physical access to the device. Limitations Encryption is used in the 21st century to protect digital data and information systems. As computing power has increased over the years, encryption technology has become more advanced and secure. However, this advancement in technology has also exposed a potential limitation of today's encryption methods. The length of the encryption key is an indicator of the strength of the encryption method. For example, the key for the original DES (Data Encryption Standard) cipher was 56 bits, meaning it had 2^56 possible combinations. With today's computing power, a 56-bit key is no longer secure, being vulnerable to brute-force attack. Modern RSA keys are commonly 2048 bits long; brute-forcing a 2048-bit key is infeasible given the number of possible combinations. However, quantum computing threatens to change this. Quantum computing utilizes properties of quantum mechanics in order to process large amounts of data simultaneously.
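To make the key-length arithmetic concrete, the sketch below estimates exhaustive-search times on a log scale; the rate of one billion guesses per second is an assumption for illustration, not a measured figure.

import math

GUESSES_PER_SECOND = 1e9  # assumed attacker speed, for illustration only
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (56, 128, 2048):
    log10_keys = bits * math.log10(2)
    log10_years = log10_keys - math.log10(GUESSES_PER_SECOND * SECONDS_PER_YEAR)
    print(f"{bits}-bit key: ~10^{log10_keys:.0f} keys, ~10^{log10_years:.0f} years to exhaust")

# 56-bit: ~10^17 keys, on the order of a year or two at this rate
# 128-bit: ~10^22 years; 2048-bit: ~10^600 years

These classical search costs are the baseline against which the quantum speedups discussed next are measured.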
Quantum computing has been found to achieve computing speeds thousands of times faster than today's supercomputers. This computing power presents a challenge to today's encryption technology. For example, RSA encryption utilizes the multiplication of very large prime numbers to create a semiprime number for its public key. Decoding this key without its private counterpart requires this semiprime number to be factored, which can take a very long time with modern computers. It would take a supercomputer anywhere from weeks to months to factor such a number. However, quantum computers can use quantum algorithms, such as Shor's algorithm, to factor a semiprime number in roughly the same amount of time it takes ordinary computers to generate it. This would make all data protected by current public-key encryption vulnerable to quantum computing attacks. Other encryption techniques like elliptic curve cryptography are similarly vulnerable, and symmetric-key encryption is weakened as well, although to a lesser degree. While quantum computing could be a threat to encryption security in the future, quantum computing as it currently stands is still very limited. Quantum computing currently is not commercially available, cannot handle large amounts of code, and exists only as experimental computational devices rather than general-purpose computers. Furthermore, advances in quantum computing can be utilized in favor of encryption as well. The National Security Agency (NSA) is currently preparing post-quantum encryption standards for the future. Quantum encryption promises a level of security that will be able to counter the threat of quantum computing. Attacks and countermeasures Encryption is an important tool but is not sufficient alone to ensure the security or privacy of sensitive information throughout its lifetime. Most applications |
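The factoring discussion above can be illustrated with a deliberately tiny RSA example. The primes 61 and 53 below are the classic textbook values, assumed purely for readability; real keys use primes hundreds of digits long.

# Toy RSA key generation with tiny primes (illustrative only).
p, q = 61, 53
n = p * q                # public semiprime: 3233
phi = (p - 1) * (q - 1)  # 3120; computable only by someone who can factor n
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

message = 65
ciphertext = pow(message, e, n)          # encrypt with the public key (n, e)
assert pow(ciphertext, d, n) == message  # decrypt with the private key

An attacker who factors n = 3233 back into 61 and 53 can recompute phi and d directly; that factoring step is exactly what a quantum algorithm would accelerate.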
whole issue — appeared in a daily newspaper prior to the publication of the criticized paper itself." Bohr's reply Bohr's response to the EPR paper was published in the Physical Review later in 1935. He argued that EPR had reasoned fallaciously. Because measurements of position and of momentum are complementary, making the choice to measure one excludes the possibility of measuring the other. Consequently, a fact deduced regarding one arrangement of laboratory apparatus could not be combined with a fact deduced by means of the other, and so the inference of predetermined position and momentum values for the second particle was not valid. Bohr concluded that EPR's "arguments do not justify their conclusion that the quantum description turns out to be essentially incomplete." Einstein's own argument In his own publications and correspondence, Einstein used a different argument to insist that quantum mechanics is an incomplete theory. He explicitly de-emphasized EPR's attribution of "elements of reality" to the position and momentum of particle B, saying that "I couldn't care less" whether the resulting states of particle B allowed one to predict the position and momentum with certainty. For Einstein, the crucial part of the argument was the demonstration of nonlocality, that the choice of measurement done on particle A, either position or momentum, would lead to two different quantum states of particle B. He argued that, because of locality, the real state of particle B couldn't depend on which kind of measurement was done on A, and therefore the quantum states cannot be in one-to-one correspondence with the real states. Later developments Bohm's variant In 1951, David Bohm proposed a variant of the EPR thought experiment in which the measurements have discrete ranges of possible outcomes, unlike the position and momentum measurements considered by EPR. The EPR–Bohm thought experiment can be explained using electron–positron pairs. Suppose we have a source that emits electron–positron pairs, with the electron sent to destination A, where there is an observer named Alice, and the positron sent to destination B, where there is an observer named Bob. According to quantum mechanics, we can arrange our source so that each emitted pair occupies a quantum state called a spin singlet. The particles are thus said to be entangled. This can be viewed as a quantum superposition of two states, which we call state I and state II. In state I, the electron has spin pointing upward along the z-axis (+z) and the positron has spin pointing downward along the z-axis (−z). In state II, the electron has spin −z and the positron has spin +z. Because it is in a superposition of states, it is impossible, without measuring, to know the definite spin state of either particle in the spin singlet. Alice now measures the spin along the z-axis. She can obtain one of two possible outcomes: +z or −z. Suppose she gets +z. Informally speaking, the quantum state of the system collapses into state I. The quantum state determines the probable outcomes of any measurement performed on the system. In this case, if Bob subsequently measures spin along the z-axis, there is 100% probability that he will obtain −z. Similarly, if Alice gets −z, Bob will get +z. There is nothing special about choosing the z-axis: according to quantum mechanics the spin singlet state may equally well be expressed as a superposition of spin states pointing in the x direction. Suppose that Alice and Bob had decided to measure spin along the x-axis.
We'll call these states Ia and IIa. In state Ia, Alice's electron has spin +x and Bob's positron has spin −x. In state IIa, Alice's electron has spin −x and Bob's positron has spin +x. Therefore, if Alice measures +x, the system 'collapses' into state Ia, and Bob will get −x. If Alice measures −x, the system collapses into state IIa, and Bob will get +x. Whatever axis their spins are measured along, they are always found to be opposite. In quantum mechanics, the x-spin and z-spin are "incompatible observables", meaning the Heisenberg uncertainty principle applies to alternating measurements of them: a quantum state cannot possess a definite value for both of these variables. Suppose Alice measures the z-spin and obtains +z, so that the quantum state collapses into state I. Now, instead of measuring the z-spin as well, Bob measures the x-spin. According to quantum mechanics, when the system is in state I, Bob's x-spin measurement will have a 50% probability of producing +x and a 50% probability of −x. It is impossible to predict which outcome will appear until Bob actually performs the measurement. Therefore, Bob's positron will have a definite spin when measured along the same axis as Alice's electron, but when measured along the perpendicular axis its spin will be uniformly random. It seems as if information has propagated (faster than light) from Alice's apparatus to make Bob's positron assume a definite spin along the appropriate axis. Bell's theorem In 1964, John Stewart Bell published a paper investigating the puzzling situation at that time: on one hand, the EPR paradox purportedly showed that quantum mechanics was nonlocal, and suggested that a hidden-variable theory could heal this nonlocality. On the other hand, David Bohm had recently developed the first successful hidden-variable theory, but it had a grossly nonlocal character. Bell set out to investigate whether it was indeed possible to solve the nonlocality problem with hidden variables, and found out that first, the correlations shown in both EPR's and Bohm's versions of the paradox could indeed be explained in a local way with hidden variables, and second, that the correlations shown in his own variant of the paradox couldn't be explained by any local hidden-variable theory. This second result became known as Bell's theorem. To understand the first result, consider the following toy hidden-variable theory introduced later by J.J. Sakurai: in it, quantum spin-singlet states emitted by the source are actually approximate descriptions for "true" physical states possessing definite values for the z-spin and x-spin. In these "true" states, the positron going to Bob always has spin values opposite to the electron going to Alice, but the values are otherwise completely random. For example, the first pair emitted by the source might be "(+z, −x) to Alice and (−z, +x) to Bob", the next pair "(−z, −x) to Alice and (+z, +x) to Bob", and so forth. Therefore, if Bob's measurement axis is aligned with Alice's, he will necessarily get the opposite of whatever Alice gets; otherwise, he will get "+" and "−" with equal probability. Bell showed, however, that such models can only reproduce the singlet correlations when Alice and Bob | where Bob would have a fixed quantum state on his side, one that is classically correlated with but otherwise independent of Alice's. Locality in the EPR paradox Locality has several different meanings in physics.
EPR describe the principle of locality as asserting that physical processes occurring at one place should have no immediate effect on the elements of reality at another location. At first sight, this appears to be a reasonable assumption to make, as it seems to be a consequence of special relativity, which states that energy can never be transmitted faster than the speed of light without violating causality; however, it turns out that the usual rules for combining quantum mechanical and classical descriptions violate EPR's principle of locality without violating special relativity or causality. Causality is preserved because there is no way for Alice to transmit messages (i.e., information) to Bob by manipulating her measurement axis. Whichever axis she uses, she has a 50% probability of obtaining "+" and 50% probability of obtaining "−", completely at random; according to quantum mechanics, it is fundamentally impossible for her to influence what result she gets. Furthermore, Bob is only able to perform his measurement once: there is a fundamental property of quantum mechanics, the no-cloning theorem, which makes it impossible for him to make an arbitrary number of copies of the electron he receives, perform a spin measurement on each, and look at the statistical distribution of the results. Therefore, in the one measurement he is allowed to make, there is a 50% probability of getting "+" and 50% of getting "−", regardless of whether or not his axis is aligned with Alice's. In summary, the results of the EPR thought experiment do not contradict the predictions of special relativity. Neither the EPR paradox nor any quantum experiment demonstrates that superluminal signaling is possible; however, the principle of locality appeals powerfully to physical intuition, and Einstein, Podolsky and Rosen were unwilling to abandon it. Einstein derided the quantum mechanical predictions as "spooky action at a distance". The conclusion they drew was that quantum mechanics is not a complete theory. Mathematical formulation Bohm's variant of the EPR paradox can be expressed mathematically using the quantum mechanical formulation of spin. The spin degree of freedom for an electron is associated with a two-dimensional complex vector space V, with each quantum state corresponding to a vector in that space. The operators corresponding to the spin along the x, y, and z direction, denoted Sx, Sy, and Sz respectively, can be represented using the Pauli matrices: Sx = (ħ/2)σx, Sy = (ħ/2)σy, Sz = (ħ/2)σz, where σx = [[0, 1], [1, 0]], σy = [[0, −i], [i, 0]], σz = [[1, 0], [0, −1]], and ħ is the reduced Planck constant (the Planck constant divided by 2π). The eigenstates of Sz are represented as |+z⟩ = (1, 0)ᵀ and |−z⟩ = (0, 1)ᵀ, and the eigenstates of Sx are represented as |+x⟩ = (1/√2)(1, 1)ᵀ and |−x⟩ = (1/√2)(1, −1)ᵀ. The vector space of the electron-positron pair is V ⊗ V, the tensor product of the electron's and positron's vector spaces. The spin singlet state is |ψ⟩ = (1/√2)(|+z⟩ ⊗ |−z⟩ − |−z⟩ ⊗ |+z⟩), where the two terms on the right hand side are what we have referred to as state I and state II above. From the above equations, it can be shown that the spin singlet can also be written as |ψ⟩ = (1/√2)(|−x⟩ ⊗ |+x⟩ − |+x⟩ ⊗ |−x⟩), where the terms on the right hand side are the states we have referred to as IIa and Ia (the overall sign is an unobservable global phase). To illustrate the paradox, we need to show that after Alice's measurement of Sz (or Sx), Bob's value of Sz (or Sx) is uniquely determined and Bob's value of Sx (or Sz) is uniformly random. This follows from the principles of measurement in quantum mechanics. When Sz is measured, the system state collapses into an eigenvector of Sz.
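As a cross-check on the collapse argument that follows, the singlet probabilities can be computed numerically. A minimal NumPy sketch, using the basis conventions defined above:

import numpy as np

up = np.array([1, 0], dtype=complex)    # |+z>
down = np.array([0, 1], dtype=complex)  # |-z>
plus_x = (up + down) / np.sqrt(2)       # |+x>
minus_x = (up - down) / np.sqrt(2)      # |-x>

# Spin singlet: (|+z,-z> - |-z,+z>) / sqrt(2)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

def joint_probability(alice, bob):
    # Probability that Alice and Bob obtain the given outcome pair.
    return abs(np.vdot(np.kron(alice, bob), singlet)) ** 2

print(joint_probability(up, down))     # 0.5  (Alice +z, Bob -z)
print(joint_probability(up, up))       # 0.0  (never both +z)
print(joint_probability(up, plus_x))   # 0.25 (given Alice +z, Bob's x-spin is 50/50)
print(joint_probability(up, minus_x))  # 0.25

The first two lines reproduce the perfect z-axis anticorrelation; the last two show that, conditioned on Alice's +z outcome (probability 1/2), Bob's x-spin outcomes are equally likely, as the collapsed states written out below make explicit.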
If the measurement result is +z, this means that immediately after measurement the system state collapses to |+z⟩ ⊗ |−z⟩ = |+z⟩ ⊗ (1/√2)(|+x⟩ − |−x⟩). Similarly, if Alice's measurement result is −z, the state collapses to |−z⟩ ⊗ |+z⟩ = |−z⟩ ⊗ (1/√2)(|+x⟩ + |−x⟩). The left hand side of both equations shows that the measurement of Sz on Bob's positron is now determined: it will be −z in the first case or +z in the second case. The right hand side of the equations shows that the measurement of Sx on Bob's positron will return, in both cases, +x or −x with probability 1/2 each. See also Bohr–Einstein debates: The argument of EPR CHSH Bell test Coherence Correlation does not imply causation ER=EPR GHZ experiment Measurement problem Philosophy of information Philosophy of physics Popper's experiment Superdeterminism Quantum entanglement Quantum information Quantum pseudo-telepathy Quantum teleportation Quantum Zeno effect Synchronicity Ward's probability amplitude Notes References Selected papers A. Fine, Do Correlations need to be explained?, in Philosophical Consequences of Quantum Theory: Reflections on Bell's Theorem, edited by Cushing & McMullin (University of Notre Dame Press, 1986). M. Mizuki, A classical interpretation of Bell's inequality. Annales de la Fondation Louis de Broglie 26 683 (2001) P. Pluch, "Theory for Quantum Probability", PhD Thesis, University of Klagenfurt (2006) Books Bell, John S. (1987). Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press. Fine, Arthur (1996). The Shaky Game: Einstein, Realism and the Quantum Theory. 2nd ed. Univ. of Chicago Press. Gribbin, John (1984). In Search of Schrödinger's Cat. Black Swan. Lederman, Leon; Teresi, Dick (1993). The God Particle: If the Universe Is the Answer, What Is the Question? Houghton Mifflin Company, pp. 21, 187–189. Selleri, Franco (1988). Quantum Mechanics Versus Local Realism: The Einstein–Podolsky–Rosen Paradox. New York: Plenum Press. External links The Einstein–Podolsky–Rosen Argument in Quantum Theory; 1.2 The argument in the text Internet Encyclopedia of Philosophy: "The Einstein-Podolsky-Rosen Argument and the Bell Inequalities" Stanford Encyclopedia of Philosophy: Abner Shimony (2004) "Bell's Theorem" EPR, Bell & Aspect: The Original References Does Bell's Inequality Principle rule out local theories of quantum mechanics? from the Usenet Physics FAQ Theoretical use of EPR in teleportation Effective use of EPR in cryptography EPR experiment with single photons Spooky Actions At A Distance?: Oppenheimer Lecture by Prof. Mermin Original paper
tissue engineering applications Computing and electronics An alternate term for conformal coating or potting, which protects electronic components Encapsulation (networking), the process of adding control information as it passes through the layered | encapsulation, in chemistry, the confinement of an individual molecule within a larger molecule Micro-encapsulation, in material science, the coating of microscopic particles with another material Biology Cell encapsulation, technology made to overcome the existing problem of graft |
Starting with the 17th edition, Ethnologue introduced a numerical code for language status using a framework called EGIDS (Expanded Graded Intergenerational Disruption Scale), an elaboration of Fishman's GIDS (Graded Intergenerational Disruption Scale). It ranks a language from 0 for an international language to 10 for an extinct language, i.e. a language with which no-one retains a sense of ethnic identity. In December 2015, Ethnologue launched a metered paywall; users in high-income countries who want to refer to more than seven pages of data per month must buy a paid subscription. As of 2017, Ethnologue's 20th edition described 237 language families including 86 language isolates and six typological categories, namely sign languages, creoles, pidgins, mixed languages, constructed languages, and as yet unclassified languages. In 2019, Ethnologue disabled trial views and introduced a hard paywall. In 2021, the 24th edition had 7,139 modern languages. In 2022, the 25th edition listed a total of 7,151 living languages, an increase of 12 living languages from the 24th edition. Reception In 1986, William Bright, then editor of the journal Language, wrote of Ethnologue that it "is indispensable for any reference shelf on the languages of the world". In 2008 in the same journal, Lyle Campbell and Verónica Grondona said: "Ethnologue...has become the standard reference, and its usefulness is hard to overestimate." In 2015, Harald Hammarström, an editor of Glottolog, criticized the publication for frequently lacking citations and failing to articulate clear principles of language classification and identification. However, he concluded that, on balance, "Ethnologue is an impressively comprehensive catalogue of world languages, and it is far superior to anything else produced prior to 2009." Editions Starting with the 17th edition, Ethnologue has been published every year. See also Glottolog Linguasphere Observatory Register Lists of languages List of language families References Citations Sources Further reading External links Web version of Ethnologue
the liquid will boil. The ability of a molecule of a liquid to evaporate is based largely on the amount of kinetic energy an individual particle may possess. Even at lower temperatures, individual molecules of a liquid can evaporate if they have more than the minimum amount of kinetic energy required for vaporization.
Factors influencing the rate of evaporation (air is used here as a common example; however, the vapor phase can be other gases):
Concentration of the substance evaporating in the air: if the air already has a high concentration of the substance evaporating, then the substance will evaporate more slowly.
Flow rate of air: this is in part related to the concentration point above. If "fresh" air (i.e., air which is neither already saturated with the substance nor with other substances) moves over the substance all the time, then the concentration of the substance in the air is less likely to rise with time, thus encouraging faster evaporation. This is the result of the boundary layer at the evaporation surface decreasing with flow velocity, which decreases the diffusion distance in the stagnant layer.
The amount of minerals dissolved in the liquid.
Inter-molecular forces: the stronger the forces keeping the molecules together in the liquid state, the more energy a molecule must acquire to escape. This is characterized by the enthalpy of vaporization.
Pressure: evaporation happens faster if there is less exertion on the surface keeping the molecules from launching themselves.
Surface area: a substance with a larger surface area will evaporate faster, as there are more surface molecules per unit of volume that are potentially able to escape.
Temperature of the substance: the higher the temperature of the substance, the greater the kinetic energy of the molecules at its surface, and therefore the faster the rate of their evaporation.
In the US, the National Weather Service measures the actual rate of evaporation from a standardized "pan" open water surface outdoors, at various locations nationwide. Others do likewise around the world. The US data is collected and compiled into an annual evaporation map. The measurements range from under 30 to over per year. Because it typically takes place in a complex environment, where 'evaporation is an extremely rare event', the mechanism for |
Theory For molecules of a liquid to evaporate, they must be located near the surface, they have to be moving in the proper direction, and have sufficient kinetic energy to overcome liquid-phase intermolecular forces. When only a small proportion of the molecules meet these criteria, the rate of evaporation is low. Since the kinetic energy of a molecule is proportional to its temperature, evaporation proceeds more quickly at higher temperatures. As the faster-moving molecules escape, the remaining molecules have lower average kinetic energy, and the temperature of the liquid decreases. This phenomenon is also called evaporative cooling. This is why evaporating sweat cools the human body. Evaporation also tends to proceed more quickly with higher flow rates between the gaseous and liquid phase and in liquids with higher vapor pressure. For example, laundry on a clothes line will dry (by evaporation) more rapidly on a windy day than on a still day. Three key parts to evaporation are heat, atmospheric pressure (determines the percent humidity), and air movement. On a molecular level, there is no strict boundary between the liquid state and the vapor state. Instead, there is a Knudsen layer, where the phase is undetermined. Because this layer is only a few molecules thick, at a macroscopic scale a clear phase transition interface cannot be seen. Liquids that do not evaporate visibly at a given temperature in a given gas (e.g., cooking oil at room temperature) have molecules that do not tend to transfer energy to each other in a pattern sufficient to frequently give a molecule the heat energy necessary to turn into vapor. However, these liquids are evaporating. It is just that the process is much slower and thus significantly less visible. Evaporative equilibrium If evaporation takes place in an enclosed area, the escaping molecules accumulate as a vapor above the liquid. Many of the molecules return to the liquid, with returning molecules becoming more frequent as the density and pressure of the vapor increases. When the process of escape and return reaches an equilibrium, the vapor is said to be "saturated", and no further change in either vapor pressure and density or liquid temperature will occur. |
Murray's use of French witch trial sources on supposed Witches' Sabbaths in her attempts to "reconstruct" a Witch Cult in Western Europe. Observance An esbat is commonly understood to be a ritual observance on the night of a full moon. However, the late high priestess Doreen Valiente distinguished between "full moon Esbat[s]" and other esbatic occasions. The term esbat in this sense was described by Margaret Murray. See also Full moon Lunar calendar Wheel of the Year Lunar effect List |
alternative temperament of Barbershop harmony.
27 EDO is the smallest EDO that uniquely represents all intervals involving the first eight harmonics. It tempers out the septimal comma but not the syntonic comma.
29 EDO is the lowest number of equal divisions of the octave that produces a better perfect fifth than 12 EDO. Its major third is roughly as inaccurate as 12-TET; however, it is tuned 14 cents flat rather than 14 cents sharp. It tunes the 7th, 11th, and 13th harmonics flat as well, by roughly the same amount. This means intervals such as 7:5, 11:7, 13:11, etc., are all matched extremely well in 29-TET.
31 EDO was advocated by Christiaan Huygens and Adriaan Fokker. 31 EDO has a slightly less accurate fifth than 12 EDO, but provides near-just major thirds, and provides decent matches for harmonics up to at least 13, of which the seventh harmonic is particularly accurate.
34 EDO gives slightly less total combined errors of approximation to the 5-limit just ratios 3:2, 5:4, 6:5, and their inversions than 31 EDO does, although the approximation of 5:4 is worse. 34 EDO does not approximate ratios involving prime 7 well. It contains a 600-cent tritone, since it is an even-numbered EDO.
41 EDO is the second-lowest number of equal divisions that produces a better perfect fifth than 12 EDO. Its major third is more accurate than that of 12 EDO and 29 EDO, about 6 cents flat. It is not meantone, so it distinguishes 10:9 and 9:8, unlike 31 EDO. It is more accurate in the 13-limit than 31 EDO.
46 EDO provides slightly sharp major thirds and perfect fifths, giving triads a characteristic bright sound. The harmonics up to 11 are approximated within 5 cents of accuracy, with 10:9 and 9:5 being a fifth of a cent away from pure. As it is not a meantone system, it distinguishes 10:9 and 9:8.
53 EDO is better at approximating the traditional just consonances than 12, 19 or 31 EDO, but has had only occasional use. Its extremely good perfect fifths make it interchangeable with an extended Pythagorean tuning, but it also accommodates schismatic temperament, and is sometimes used in Turkish music theory. It does not, however, fit the requirements of meantone temperaments, which put good thirds within easy reach via the cycle of fifths. In 53 EDO, the very consonant thirds would be reached instead by using a Pythagorean diminished fourth (C–F♭), as it is an example of schismatic temperament, just like 41 EDO.
72 EDO approximates many just intonation intervals well, even into the 7-limit and 11-limit, such as 7:4, 9:7, 11:5, 11:6 and 11:7. 72 EDO has been taught, written and performed in practice by Joe Maneri and his students (whose atonal inclinations typically avoid any reference to just intonation whatsoever). It can be considered an extension of 12 EDO because 72 is a multiple of 12. 72 EDO has a smallest interval that is six times smaller than the smallest interval of 12 EDO and therefore contains six copies of 12 EDO starting on different pitches. It also contains three copies of 24 EDO and two copies of 36 EDO, which are themselves multiples of 12 EDO. 72 EDO has also been criticized for its redundancy by retaining the poor approximations contained in 12 EDO, despite not needing them for any lower limits of just intonation (e.g. 5-limit).
96 EDO approximates all intervals within 6.25 cents, which is barely distinguishable. As an eightfold multiple of 12, it can be used fully like the common 12 EDO. It has been advocated by several composers, especially Julián Carrillo from 1924 to the 1940s.
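Claims like these are easy to verify with a short script that measures, for any EDO, how far its nearest step lies from a just interval. A minimal sketch in Python; the chosen intervals and EDO list are illustrative assumptions:

import math

def cents(ratio):
    return 1200 * math.log2(ratio)

JUST = {"3:2": 3/2, "5:4": 5/4, "7:4": 7/4}

for edo in (12, 19, 29, 31, 41, 53, 72):
    report = []
    for name, ratio in JUST.items():
        steps = round(cents(ratio) * edo / 1200)   # nearest step count
        error = steps * 1200 / edo - cents(ratio)  # signed error in cents
        report.append(f"{name}: {error:+5.1f}c")
    print(f"{edo:>2} EDO  " + "  ".join(report))

Running this reproduces, for example, the −2.0-cent fifth of 12 EDO, the +1.5-cent fifth of 29 EDO, and 31 EDO's near-just major third.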
Other equal divisions of the octave that have found occasional use include 15 EDO, 17 EDO, and 22 EDO.

2, 5, 12, 41, 53, 306, 665 and 15601 are denominators of the first convergents of log2(3), so 2, 5, 12, 41, 53, 306, 665 and 15601 twelfths (and fifths), being in the corresponding equal temperaments equal to a whole number of octaves, are better approximations of 2, 5, 12, 41, 53, 306, 665 and 15601 just twelfths/fifths than in any equal temperament with fewer tones. 1, 2, 3, 5, 7, 12, 29, 41, 53, 200... is the sequence of divisions of the octave that provide better and better approximations of the perfect fifth; related sequences contain divisions approximating other just intervals.

Equal temperaments of non-octave intervals

The equal-tempered version of the Bohlen–Pierce scale consists of the ratio 3:1 (1902 cents), conventionally a perfect fifth plus an octave (that is, a perfect twelfth), called in this theory a tritave, split into thirteen equal parts. This provides a very close match to justly tuned ratios consisting only of odd numbers. Each step is 146.3 cents, a frequency ratio of 3^(1/13).

Wendy Carlos created three unusual equal temperaments after a thorough study of the properties of possible temperaments having a step size between 30 and 120 cents. These were called alpha, beta, and gamma. They can be considered as equal divisions of the perfect fifth, and each provides a very good approximation of several just intervals. Their step sizes:

alpha: (3/2)^(1/9) (78.0 cents)
beta: (3/2)^(1/11) (63.8 cents)
gamma: (3/2)^(1/20) (35.1 cents)

Alpha and beta may be heard on the title track of her 1986 album Beauty in the Beast.

Proportions between semitone and whole tone

In this section, semitone and whole tone may not have their usual 12-EDO meanings, as it discusses how they may be tempered in different ways from their just versions to produce desired relationships. Let the number of steps in a semitone be s, and the number of steps in a tone be t.

There is exactly one family of equal temperaments that fixes the semitone to any proper fraction of a whole tone while keeping the notes in the right order (meaning that, for example, C, D, E, F, and F♯ are in ascending order if they preserve their usual relationships to C). That is, fixing q to a proper fraction in the relationship qt = s also defines a unique family of one equal temperament and its multiples that fulfil this relationship. For example, where k is an integer, 12k-EDO sets q = 1/2, and 19k-EDO sets q = 1/3. The smallest multiples in these families (e.g. 12 and 19 above) have the additional property of having no notes outside the circle of fifths. (This is not true in general; in 24-EDO, the half-sharps and half-flats are not in the circle of fifths generated starting from C.) The extreme cases are 5k-EDO, where q = 0 and the semitone becomes a unison, and 7k-EDO, where q = 1 and the semitone and tone are the same interval.

Once one knows how many steps a semitone and a tone take up in such an equal temperament, one can find the number of steps it has in the octave. An equal temperament fulfilling the above properties (including having no notes outside the circle of fifths) divides the octave into 7t − 2s steps and the perfect fifth into 4t − s steps.
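As a quick sanity check of those step-count formulas, here is a small sketch (the (s, t) pairs are my own illustrative choices, with s taken as the small semitone) that recovers familiar temperaments and their fifths:

# Illustrative (s, t) pairs: s = steps in the (small) semitone, t = steps in the tone.
for s, t in [(1, 2), (1, 3), (2, 5)]:          # 12-EDO, 19-EDO, 31-EDO
    octave_steps = 7 * t - 2 * s               # steps in the octave
    fifth_steps = 4 * t - s                    # steps in the perfect fifth
    print(f"s={s}, t={t}: {octave_steps}-EDO, fifth = {fifth_steps} steps "
          f"= {1200 * fifth_steps / octave_steps:.1f} cents")

The printed fifths (700.0, 694.7, and 696.8 cents) match the familiar 12-, 19-, and 31-tone values.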
If there are notes outside the circle of fifths, one must then multiply these results by n, the number of nonoverlapping circles of fifths required to generate all the notes (e.g. two in 24-EDO, six in 72-EDO). (One must take the small semitone for this purpose: 19-EDO has two semitones, one being 1/3 of a tone and the other 2/3.)

The smallest of these families is 12k-EDO, and in particular, 12-EDO is the smallest equal temperament with the above properties. Additionally, it makes the semitone exactly half a whole tone, the simplest possible relationship. These are some of the reasons why 12-EDO has become the most commonly used equal temperament. (Another reason is that 12-EDO is the smallest equal temperament to closely approximate 5-limit harmony, the next-smallest being 19-EDO.)

Each choice of fraction q for the relationship results in exactly one equal temperament family, but the converse is not true: 47-EDO has two different semitones, one being 1/7 of a tone and the other 8/9, which are not complements of each other as in 19-EDO (1/3 and 2/3). Taking each semitone results in a different choice of perfect fifth.

Related tuning systems

Regular diatonic tunings

The diatonic tuning in twelve-tone equal temperament can be generalized to any regular diatonic tuning dividing the octave as a sequence of steps TTSTTTS (or a rotation of it), with all the T's and all the S's the same size and the S's smaller than the T's. In twelve-tone equal temperament the S is the semitone and is exactly half the size of the tone T. When the S's reduce to zero the result is TTTTT, a five-tone equal temperament; as the semitones get larger, eventually the steps are all the same size, and the result is seven-tone equal temperament. These two endpoints are not included as regular diatonic tunings. The notes in a regular diatonic tuning are connected together by a cycle of seven tempered fifths.

The twelve-tone system similarly generalizes to a sequence CDCDDCDCDCDD (or a rotation of it) of chromatic and diatonic semitones connected together in a cycle of twelve fifths. In this case, seven-tone equal temperament is obtained in the limit as the size of C tends to zero, five-tone equal temperament is the limit as D tends to zero, and twelve-tone equal temperament is of course the case C = D. Some of the intermediate sizes of tones and semitones can also be generated in equal temperament systems. For instance, if the diatonic semitone is double the size of the chromatic semitone, i.e. D = 2C, the result is nineteen-tone equal temperament, with one step for the chromatic semitone, two steps for the diatonic semitone, three steps for the tone, and a total of 5T + 2S = 15 + 4 = 19 steps. The resulting twelve-tone system closely approximates the historically important 1/3-comma meantone. If the chromatic semitone is two-thirds the size of the diatonic semitone, i.e. C = (2/3)D, the result is thirty-one-tone equal temperament, with two steps for the chromatic semitone, three steps for the diatonic semitone, and five steps for the tone, where 5T + 2S = 25 + 6 = 31 steps. The resulting twelve-tone system closely approximates the historically important 1/4-comma meantone.

See also
Just intonation
Musical acoustics (the physics of music)
Music and mathematics
Microtuner
Microtonal music
Piano tuning
List of meantone intervals
Diatonic

Twelve-tone equal temperament divides the octave into 12 parts, all of which are equal on a logarithmic scale, with a ratio equal to the 12th root of 2 (≈ 1.05946). The resulting smallest interval, 1/12 the width of an octave, is called a semitone or half step.
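A two-line check (exact only up to floating-point rounding) confirms that twelve such semitones stack to the 2:1 octave:

ratio = 2 ** (1 / 12)                  # the 12-TET semitone
print(f"semitone ratio: {ratio:.5f}")          # ~1.05946
print(f"12 semitones:   {ratio ** 12:.10f}")   # 2.0, up to floating-point rounding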
In Western countries, the term equal temperament, without qualification, generally means 12-TET. In modern times, 12-TET is usually tuned relative to a standard pitch of 440 Hz, called A440, meaning one note, A, is tuned to 440 hertz and all other notes are defined as some multiple of semitones away from it, either higher or lower in frequency. The standard pitch has not always been 440 Hz; it has varied and generally risen over the past few hundred years.

Other equal temperaments divide the octave differently. For example, some music has been written in 19-TET and 31-TET, while the Arab tone system uses 24-TET. Instead of dividing an octave, an equal temperament can also divide a different interval, like the equal-tempered version of the Bohlen–Pierce scale, which divides the just interval of an octave and a fifth (ratio 3:1), called a "tritave" or a "pseudo-octave" in that system, into 13 equal parts. For tuning systems that divide the octave equally but are not approximations of just intervals, the term equal division of the octave, or EDO, can be used.

Unfretted string ensembles, which can adjust the tuning of all notes except for open strings, and vocal groups, who have no mechanical tuning limitations, sometimes use a tuning much closer to just intonation for acoustic reasons. Other instruments, such as some wind, keyboard, and fretted instruments, often only approximate equal temperament, where technical limitations prevent exact tunings. Some wind instruments that can easily and spontaneously bend their tone, most notably trombones, use tuning similar to string ensembles and vocal groups.

General properties

In an equal temperament, the distance between two adjacent steps of the scale is the same interval. Because the perceived identity of an interval depends on its ratio, this scale in even steps is a geometric sequence of multiplications. (An arithmetic sequence of intervals would not sound evenly spaced and would not permit transposition to different keys.) Specifically, the smallest interval in an equal-tempered scale is the ratio r = p^(1/n), so that r^n = p, where the ratio r divides the ratio p (typically the octave, which is 2:1) into n equal parts. (See Twelve-tone equal temperament below.)

Scales are often measured in cents, which divide the octave into 1200 equal intervals (each called a cent). This logarithmic scale makes comparison of different tuning systems easier than comparing ratios, and has considerable use in ethnomusicology. The basic step in cents for any equal temperament can be found by taking the width of p above in cents (usually the octave, which is 1200 cents wide), called below w, and dividing it into n parts: c = w / n.

In musical analysis, material belonging to an equal temperament is often given an integer notation, meaning a single integer is used to represent each pitch. This simplifies and generalizes discussion of pitch material within the temperament in the same way that taking the logarithm of a multiplication reduces it to addition. Furthermore, by applying modular arithmetic where the modulus is the number of divisions of the octave (usually 12), these integers can be reduced to pitch classes, which removes the distinction (or acknowledges the similarity) between pitches of the same name; e.g., C is 0 regardless of octave register. The MIDI encoding standard uses integer note designations.
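A minimal sketch of this integer notation and the cent measure, assuming the common convention that MIDI note 60 is middle C, so that every C reduces to pitch class 0:

import math

def cents(ratio: float) -> float:
    """Width of an interval in cents: 1200 * log2(ratio)."""
    return 1200 * math.log2(ratio)

def pitch_class(note: int, divisions: int = 12) -> int:
    """Reduce an integer note number modulo the number of octave divisions."""
    return note % divisions

print(f"perfect fifth 3:2 = {cents(3 / 2):.2f} cents")  # ~701.96
for note in (60, 72, 84):  # C4, C5, C6 in MIDI numbering
    print(f"MIDI note {note} -> pitch class {pitch_class(note)}")  # all 0 (C)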
Twelve-tone equal temperament

12-tone equal temperament, which divides the octave into twelve equally sized intervals, is the most common musical system used today, especially in Western music.

History

The two figures frequently credited with the achievement of exact calculation of equal temperament are Zhu Zaiyu (also romanized as Chu-Tsaiyu) in 1584 and Simon Stevin in 1585. According to Fritz A. Kuttner, a critic of the theory, it is known that "Chu-Tsaiyu presented a highly precise, simple and ingenious method for arithmetic calculation of equal temperament mono-chords in 1584" and that "Simon Stevin offered a mathematical definition of equal temperament plus a somewhat less precise computation of the corresponding numerical values in 1585 or later." The developments occurred independently. Kenneth Robinson attributes the invention of equal temperament to Zhu Zaiyu and provides textual quotations as evidence. In a text dating from 1584, Zhu Zaiyu is quoted as saying: "I have founded a new system. I establish one foot as the number from which the others are to be extracted, and using proportions I extract them. Altogether one has to find the exact figures for the pitch-pipers in twelve operations." Kuttner disagrees and remarks that Robinson's claim "cannot be considered correct without major qualifications." Kuttner proposes that neither Zhu Zaiyu nor Simon Stevin achieved equal temperament, and that neither of the two should be treated as its inventor.

China

While China had previously come up with approximations for 12-TET, Zhu Zaiyu was the first person to solve twelve-tone equal temperament mathematically, which he described in his Fusion of Music and Calendar in 1580 and Complete Compendium of Music and Pitch (Yuelü quan shu) in 1584. An extended account is also given by Joseph Needham. Zhu obtained his result mathematically by dividing the length of string and pipe successively by the twelfth root of 2 (≈ 1.059463), and the pipe length further by the twenty-fourth root of 2, such that after twelve divisions (an octave) the length was divided by a factor of 2. Zhu Zaiyu created several instruments tuned to his system, including bamboo pipes.

Europe

Some of the first Europeans to advocate equal temperament were the lutenists Vincenzo Galilei, Giacomo Gorzanis, and Francesco Spinacino, all of whom wrote music in it. Simon Stevin was the first to develop 12-TET based on the twelfth root of two, which he described in Van De Spiegheling der singconst (ca. 1605), published posthumously nearly three centuries later in 1884. For several centuries, Europe used a variety of tuning systems, including 12 equal temperament, as well as meantone temperament and well temperament, each of which can be viewed as an approximation of the former. Plucked instrument players (lutenists and guitarists) generally favored equal temperament, while others were more divided. In the end, twelve-tone equal temperament won out. This allowed new styles of symmetrical tonality and polytonality, atonal music such as that written with the twelve-tone technique or serialism, and jazz (at least its piano component) to develop and flourish.

Mathematics

In twelve-tone equal temperament, which divides the octave into 12 equal parts, the width of a semitone, i.e. the frequency ratio of the interval between two adjacent notes, is the twelfth root of two: 2^(1/12) ≈ 1.059463. This interval is divided into 100 cents.
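The procedure attributed to Zhu above lends itself to direct simulation. This sketch assumes an idealized string (arbitrary units, no end corrections) and divides its length successively by 2^(1/12), showing that twelve divisions halve it:

length = 1000.0                        # arbitrary starting string length
for step in range(1, 13):
    length /= 2 ** (1 / 12)            # one successive division, as in Zhu's method
    print(f"step {step:>2}: length = {length:8.3f}")
# After twelve divisions the length is 500.0: the halved string sounds the octave.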
Calculating absolute frequencies

To find the frequency Pn of a note in 12-TET, the following definition may be used: Pn = Pa × 2^((n − a)/12). In this formula, Pn is the pitch, or frequency (usually in hertz), you are trying to find, and Pa is the frequency of a reference pitch; n and a are the numbers assigned to the desired pitch and the reference pitch, respectively, taken from a list of consecutive integers assigned to consecutive semitones. For example, A4 (the reference pitch) is the 49th key from the left end of a piano (tuned to 440 Hz), and C4 (middle C) and F♯4 are the 40th and 46th keys, respectively. These numbers can be used to find the frequencies of C4 and F♯4: P40 = 440 × 2^((40 − 49)/12) ≈ 261.626 Hz, and P46 = 440 × 2^((46 − 49)/12) ≈ 369.994 Hz.

Converting frequencies to their equal temperament counterparts

To convert a frequency n (in Hz) to its 12-TET counterpart, the following formula can be used: En = a × 2^(round(12 × log2(n/a))/12), where En is the frequency of a pitch in equal temperament and a is the frequency of a reference pitch. For example, if we let the reference pitch equal 440 Hz, E5 and C♯5 come out to approximately 659.255 Hz and 554.365 Hz, respectively.

Comparison with just intonation

The intervals of 12-TET closely approximate some intervals in just intonation. The fifths and fourths are almost indistinguishably close to just intervals, while thirds and sixths are further away.

Seven-tone equal division of the fifth

Violins, violas, and cellos are tuned in perfect fifths (G–D–A–E for violins, and C–G–D–A for violas and cellos), which suggests that their semitone ratio is slightly higher than in conventional twelve-tone equal temperament. Because a perfect fifth is in a 3:2 relation with its base tone, and this interval is covered in 7 steps, each tone is in the ratio of (3/2)^(1/7) to the next (about 100.28 cents). This provides a perfect fifth with the ratio 3:2, but a slightly widened octave with a ratio of about 517:258, or about 2.00388:1, rather than the usual 2:1, because twelve perfect fifths do not equal seven octaves. During actual play, however, the violinist chooses pitches by ear, and only the four unstopped pitches of the strings are guaranteed to exhibit this 3:2 ratio.

Other equal temperaments

5- and 7-tone temperaments in ethnomusicology

Five- and seven-tone equal temperament (5-TET and 7-TET), with 240- and 171-cent steps respectively, are fairly common. 5-TET and 7-TET mark the endpoints of the syntonic temperament's valid tuning range. In 5-TET the tempered perfect fifth is 720 cents wide (at the top of the tuning continuum) and marks the endpoint at which the width of the minor second shrinks to 0 cents. In 7-TET the tempered perfect fifth is 686 cents wide (at the bottom of the tuning continuum) and marks the endpoint at which the minor second expands to be as wide as the major second (171 cents each).

5-tone equal temperament

Indonesian gamelans are tuned to 5-TET according to Kunst (1949), but according to Hood (1966) and McPhee (1966) their tuning varies widely, and according to Tenzer (2000) they contain stretched octaves. It is now well accepted that, of the two primary tuning systems in gamelan music, slendro and pelog, only slendro somewhat resembles five-tone equal temperament, while pelog is highly unequal; however, Surjodiningrat et al.
(1972) have analyzed pelog as a seven-note subset of nine-tone equal temperament (133-cent steps).
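To round off the equal-temperament material, here is a minimal sketch of the two 12-TET frequency formulas given earlier, using the piano-key numbering of the example above (A4 = key 49 = 440 Hz); the function names are my own:

import math

def key_frequency(n: int, ref_key: int = 49, ref_hz: float = 440.0) -> float:
    """Pn = Pa * 2**((n - a)/12): frequency of piano key n (A4 = key 49)."""
    return ref_hz * 2 ** ((n - ref_key) / 12)

def nearest_12tet(freq_hz: float, ref_hz: float = 440.0) -> float:
    """En: snap an arbitrary frequency to its nearest 12-TET pitch."""
    semitones = round(12 * math.log2(freq_hz / ref_hz))
    return ref_hz * 2 ** (semitones / 12)

print(f"C4  (key 40): {key_frequency(40):.3f} Hz")       # ~261.626
print(f"F#4 (key 46): {key_frequency(46):.3f} Hz")       # ~369.994
print(f"435 Hz snaps to {nearest_12tet(435.0):.1f} Hz")  # 440.0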
Gibbon completely abandoned the project, having written only 60 pages of text. However, after Gibbon's death, his writings on Switzerland's history were discovered and published by Lord Sheffield in 1815.

Soon after abandoning his History of Switzerland, Gibbon made another attempt towards completing a full history. His second work, Mémoires Littéraires de la Grande-Bretagne, was a two-volume set which described the literary and social conditions of England at the time, such as Lord Lyttelton's history of Henry II and Nathaniel Lardner's The Credibility of the Gospel History. Gibbon's Mémoires Littéraires failed to attract any notice, and was considered a flop by fellow historians and literary scholars.

After he tended to his father's estate, which was by no means in good condition, quite enough remained for Gibbon to settle fashionably in London at 7 Bentinck Street, free of financial concern. By February 1773, he was writing in earnest, but not without the occasional self-imposed distraction. He took to London society quite easily, joined the better social clubs, including Dr. Johnson's Literary Club, and looked in from time to time on his friend Holroyd in Sussex. He succeeded Oliver Goldsmith at the Royal Academy as 'professor in ancient history' (honorary but prestigious). In late 1774, he was initiated as a Freemason of the Premier Grand Lodge of England. He was also, perhaps least productively, returned in that same year to the House of Commons for Liskeard, Cornwall, through the intervention of his relative and patron, Edward Eliot. He became the archetypal back-bencher, benignly "mute" and "indifferent," his support of the Whig ministry invariably automatic. Gibbon's indolence in that position, perhaps fully intentional, subtracted little from the progress of his writing. Gibbon lost the Liskeard seat in 1780 when Eliot joined the opposition, taking with him "the Electors of Leskeard [who] are commonly of the same opinion as Mr. El[l]iot" (Murray, p. 322). The following year, owing to the good grace of Prime Minister Lord North, he was again returned to Parliament, this time for Lymington in a by-election.

The History of the Decline and Fall of the Roman Empire: 1776–1788

After several rewrites, with Gibbon "often tempted to throw away the labours of seven years," the first volume of what was to become his life's major achievement, The History of the Decline and Fall of the Roman Empire, was published on 17 February 1776. Through 1777, the reading public eagerly consumed three editions, for which Gibbon was rewarded handsomely: two-thirds of the profits, amounting to approximately £1,000. Biographer Leslie Stephen wrote that thereafter, "His fame was as rapid as it has been lasting." And as regards this first volume, "Some warm praise from David Hume overpaid the labour of ten years."

Volumes II and III appeared on 1 March 1781, eventually rising "to a level with the previous volume in general esteem." Volume IV was finished in June 1784; the final two were completed during a second Lausanne sojourn (September 1783 to August 1787), where Gibbon reunited with his friend Deyverdun in leisurely comfort. By early 1787, he was "straining for the goal," and with great relief the project was finished in June. Volumes IV, V, and VI finally reached the press in May 1788, their publication having been delayed since March so that it could coincide with a dinner party celebrating Gibbon's 51st birthday (the 8th).
Mounting a bandwagon of praise for the later volumes were such contemporary luminaries as Adam Smith, William Robertson, Adam Ferguson, Lord Camden, and Horace Walpole. Adam Smith told Gibbon that "by the universal assent of every man of taste and learning, whom I either know or correspond with, it sets you at the very head of the whole literary tribe at present existing in Europe." In November 1788, he was elected a Fellow of the Royal Society, the main proposer being his good friend Lord Sheffield.

In 1783, Gibbon had been intrigued by the cleverness of Sheffield's 12-year-old eldest daughter, Maria, and he proposed to teach her himself. Over the following years he continued, helping to form a girl of sixteen who was well educated, confident, and determined to choose her own husband. Gibbon described her as a "mixture of just observation and lively imagery, the strong sense of a man expressed with the easy elegance of a female".

Later life: 1789–1794

The years following Gibbon's completion of The History were filled largely with sorrow and increasing physical discomfort. He had returned to London in late 1787 to oversee the publication process alongside Lord Sheffield. With that accomplished, in 1789 it was back to Lausanne, only to learn of and be "deeply affected" by the death of Deyverdun, who had willed Gibbon his home, La Grotte. He resided there with little commotion, took in the local society, received a visit from Sheffield in 1791, and "shared the common abhorrence" of the French Revolution. In 1793, word came of Lady Sheffield's death; Gibbon immediately left Lausanne and set sail to comfort a grieving but composed Sheffield. His health began to fail critically in December, and at the turn of the new year he was on his last legs.

Gibbon is believed to have suffered from an extreme case of scrotal swelling, probably a hydrocele testis, a condition which causes the scrotum to swell with fluid in a compartment overlying either testicle. In an age when close-fitting clothes were fashionable, his condition led to a chronic and disfiguring inflammation that left Gibbon a lonely figure. As his condition worsened, he underwent numerous procedures to alleviate it, but with no enduring success. In early January, the last of a series of three operations caused an unremitting peritonitis to set in and spread, from which he died. The "English giant of the Enlightenment" finally succumbed at 12:45 pm on 16 January 1794, at age 56. He was buried in the Sheffield Mausoleum attached to the north transept of the Church of St Mary and St Andrew, Fletching, East Sussex, having died in Fletching while staying with his great friend, Lord Sheffield.

Gibbon's estate was valued at approximately £26,000. He left most of his property to cousins. As stipulated in his will, Sheffield oversaw the sale of his library at auction to William Beckford for £950. What happened next suggests that Beckford may have known of Gibbon's moralistic, 'impertinent animadversion' at his expense in the presence of the Duchess of Devonshire at Lausanne. Gibbon's wish that his 6,000-book library would not be locked up "under the key of a jealous master" was effectively denied by Beckford, who retained it in Lausanne until 1801 before inspecting it, then locked it up again until at least as late as 1818, before giving most of the books back to Gibbon's physician Dr Scholl, who had helped negotiate the sale in the first place.
Beckford's annotated copy of the Decline and Fall turned up in Christie's in 1953, complete with his shattering critique of the author's 'ludicrous self-complacency ... your frequent distortion of historical Truth to provoke a gibe, or excite a sneer ... your ignorance of oriental languages [etc]'. Legacy Edward Gibbon's central thesis in his explanation of how the Roman Empire fell, that it was due to embracing Christianity, is not widely accepted by scholars today. Gibbon argued that with the empire's new Christian character, large sums of wealth that would have otherwise been used in the secular affairs in promoting the state were transferred to promoting the activities of the Church. However, the pre-Christian empire also spent large financial sums on religious affairs and it is unclear whether or not the change of religion increased the amount of resources the empire spent on religion. Gibbon further argued that new attitudes in Christianity caused many Christians of wealth to renounce their lifestyles and enter a monastic lifestyle, and so stop participating in the support of the empire. However, while many Christians of wealth did become monastics, this paled in comparison to the participants in the imperial bureaucracy. Although Gibbon further pointed out the importance Christianity placed on peace caused a decline in the number of people serving the military, the decline was so small as to be negligible for the army's effectiveness. Gibbon's work has been criticised for its scathing view of Christianity as laid down in chapters XV and XVI, a situation which resulted in the banning of the book in several countries. Gibbon's alleged crime was disrespecting, and none too lightly, the character of sacred Christian doctrine, by "treat[ing] the Christian church as a phenomenon of general history, not a special case admitting supernatural explanations and disallowing criticism of its adherents". More specifically, the chapters excoriated the church for "supplanting in an unnecessarily destructive way the great culture that preceded it" and for "the outrage of [practising] religious intolerance and warfare". Gibbon, in letters to Holroyd and others, expected some type of church-inspired backlash, but the harshness of the ensuing torrents exceeded anything he or his friends had anticipated. Contemporary detractors such as Joseph Priestley and Richard Watson stoked the nascent fire, but the most severe of these attacks was an "acrimonious" piece by the young cleric, Henry Edwards Davis. Gibbon subsequently published his Vindication in 1779, in which he categorically denied Davis' "criminal accusations", branding him a purveyor of "servile plagiarism." Davis followed Gibbon's Vindication with yet another reply (1779). Gibbon's apparent antagonism to Christian doctrine spilled over into the Jewish faith, leading to charges of anti-Semitism. For example, he wrote: From the reign of Nero to that of Antoninus Pius, the Jews discovered a fierce impatience of the dominion of Rome, which repeatedly broke out in the most furious massacres and insurrections. 
Humanity is shocked at the recital of the horrid cruelties which they committed in the cities of Egypt, of Cyprus, and of Cyrene, where they dwelt in treacherous friendship with the unsuspecting natives; and we are tempted to applaud the severe retaliation which was exercised by the arms of legions against a race of fanatics, whose dire and credulous superstition seemed to render them the implacable enemies not only of the Roman government, but also of humankind.

Influence

Gibbon is considered to be a son of the Enlightenment, and this is reflected in his famous verdict on the history of the Middle Ages: "I have described the triumph of barbarism and religion." Politically, however, he aligned himself with the conservative Edmund Burke's rejection of the radical egalitarian movements of the time, as well as with Burke's dismissal of overly rationalistic applications of the "rights of man".

Gibbon's work has been praised for its style, its piquant epigrams, and its effective irony. Winston Churchill memorably noted in My Early Life, "I set out upon...Gibbon's Decline and Fall of the Roman Empire [and] was immediately dominated both by the story and the style. ...I devoured Gibbon. I rode triumphantly through it from end to end and enjoyed it all." Churchill modelled much of his own literary style on Gibbon's. Like Gibbon, he dedicated himself to producing a "vivid historical narrative, ranging widely over period and place and enriched by analysis and reflection."

Unusually for the 18th century, Gibbon was never content with secondhand accounts when the primary sources were accessible (though most of these were drawn from well-known printed editions). "I have always endeavoured," he says, "to draw from the fountain-head; that my curiosity, as well as a sense of duty, has always urged me to study the originals; and that, if they have sometimes eluded my search, I have carefully marked the secondary evidence, on whose faith a passage or a fact were reduced to depend." In this insistence upon the importance of primary sources, Gibbon is considered by many to be one of the first modern historians: "In accuracy, thoroughness, lucidity, and comprehensive grasp of a vast subject, the 'History' is unsurpassable. It is the one English history which may be regarded as definitive... Whatever its shortcomings the book is artistically imposing as well as historically unimpeachable as a vast panorama of a great period."

The subject of Gibbon's writing, as well as his ideas and style, has influenced other writers. Besides his influence on Churchill, Gibbon was also a model for Isaac Asimov in his writing of The Foundation Trilogy, which he said involved "a little bit of cribbin' from the works of Edward Gibbon". Evelyn Waugh admired Gibbon's style, but not his secular viewpoint. In Waugh's 1950 novel Helena, the early Christian author Lactantius worried about the possibility of "'a false historian, with the mind of Cicero or Tacitus and the soul of an animal,' and he nodded towards the gibbon who fretted his golden chain and chattered for fruit."

Monographs by Gibbon
Essai sur l'Étude de la Littérature (London: Becket & De Hondt, 1761).
Critical Observations on the Sixth Book of [Vergil's] The Aeneid (London: Elmsley, 1770).
The History of the Decline and Fall of the Roman Empire (vol. I, 1776; vols. II, III, 1781; vols. IV, V, VI, 1788–1789). All London: Strahan & Cadell.
A Vindication of some passages in the fifteenth and sixteenth chapters of the History of the Decline and Fall of the Roman Empire (London: J. Dodsley, 1779).
Mémoire Justificatif pour servir de Réponse à l'Exposé, etc. de la Cour de France (London: Harrison & Brooke, 1779).

Other writings by Gibbon
"Lettre sur le gouvernement de Berne" [Letter No. IX. Mr. Gibbon to *** on the Government of Berne], in Miscellaneous Works, first (1796) edition, vol. 1 (below). Scholars differ on the date of its composition (Norman, D.M. Low: 1758–59; Pocock: 1763–64).
Mémoires Littéraires de la Grande-Bretagne. Co-author: Georges Deyverdun (2 vols.: vol. 1, London: Becket & De Hondt, 1767; vol. 2, London: Heydinger, 1768).
Miscellaneous Works of Edward Gibbon, Esq., ed. John Lord Sheffield (2 vols., London: Cadell & Davies, 1796; 5 vols., London: J. Murray, 1814; 3 vols., London: J. Murray, 1815). Includes Memoirs of the Life and Writings of Edward Gibbon, Esq.
Autobiographies of Edward Gibbon, ed. John Murray (London: J. Murray, 1896). EG's complete memoirs (six drafts) from the original manuscripts.
The Private Letters of Edward Gibbon, 2 vols., ed. Rowland E. Prothero (London: J. Murray, 1896).
The Works of Edward Gibbon, Volume 3 (1906).
Gibbon's Journal to 28 January 1763, ed. D.M. Low (London: Chatto and Windus, 1929).
Le Journal de Gibbon à Lausanne, ed. Georges A. Bonnard (Lausanne: Librairie de l'Université, 1945).
Miscellanea Gibboniana, eds. G.R. de Beer, L. Junod, G.A. Bonnard (Lausanne: Librairie de l'Université, 1952).
The Letters of Edward Gibbon, 3 vols., ed. J.E. Norton (London: Cassell & Co., 1956). Vol. 1: 1750–1773; vol. 2: 1774–1784; vol. 3: 1784–1794. Cited as 'Norton, Letters'.
Gibbon's Journey from Geneva to Rome, ed. G.A. Bonnard (London: Thomas Nelson and Sons, 1961). Journal.
Edward Gibbon: Memoirs of My Life, ed. G.A. Bonnard (New York: Funk & Wagnalls, 1969; 1966). Portions of EG's memoirs arranged chronologically, omitting repetition.
The English Essays of Edward Gibbon, ed. Patricia Craddock (Oxford: Clarendon Press, 1972).

See also
The Work of J.G.A. Pocock: Edward Gibbon section
The History of the Decline and Fall of the Roman Empire: Further reading
The Miscellaneous Works of Edward Gibbon
A Gibbon chronology
Historiography of the United Kingdom

Notes
Most of this article, including quotations unless otherwise noted, has been adapted from Stephen's entry on Edward Gibbon in the Dictionary of National Biography.

References
Beer, G. R. de. "The Malady of Edward Gibbon, F.R.S." Notes and Records of the Royal Society of London 7:1 (December 1949), 71–80.
Craddock, Patricia B. Edward Gibbon, Luminous Historian 1772–1794. Baltimore: Johns Hopkins University Press, 1989. Biography.
Dickinson, H.T. "The Politics of Edward Gibbon". Literature and History 8:4 (1978), 175–196.
Low, D. M., Edward Gibbon. 1737–1794 (London: Chatto & Windus, 1937).
Murray, John (ed.), The Autobiographies of Edward Gibbon. Second Edition (London: John Murray, 1897).
Norton, J. E. A Bibliography of the Works of Edward Gibbon. New York: Burt Franklin Co., 1940, repr. 1970.
Norton, J. E. The Letters of Edward Gibbon. 3 vols. London: Cassell & Co. Ltd., 1956.
Pocock, J. G. A. The Enlightenments of Edward Gibbon, 1737–1764. Cambridge: Cambridge University Press, 1999.
Pocock, J. G. A. "Classical and Civil History: The Transformation of Humanism". Cromohs 1 (1996). Online at the Università degli Studi di Firenze. Retrieved 20 November 2009.
Pocock, J. G. A. "The Ironist". Review of David Womersley's The Watchmen of the Holy City. London Review of Books 24:22 (14 November 2002). Online at the London Review of Books (subscribers only). Retrieved 20 November 2009.
Gibbon, Edward. Memoirs of My Life and Writings. Online at Gutenberg. Retrieved 20 November 2009.
Stephen, Sir Leslie, "Gibbon, Edward (1737–1794)". In the Dictionary of National Biography, eds. Sir Leslie Stephen and Sir Sidney Lee. Oxford: 1921, repr. 1963. Vol. 7, 1129–1135.
Womersley, David, ed. The History of the Decline and Fall of the Roman Empire. 3 vols. (London and New York: Penguin, 1994).
Womersley, David. "Introduction," in Womersley, Decline and Fall, vol. 1, xi–cvi.
Womersley, David. "Gibbon, Edward (1737–1794)". In the Oxford Dictionary of National Biography, eds. H.C.G. Matthew and Brian Harrison. Oxford: Oxford University Press, 2004. Vol. 22, 8–18.

Further reading

Before 1985
Barlow, J.W. (1879). "Gibbon and Julian". In: Hermathena, Volume 3, 142–159. Dublin: Edward Posonby.
Beer, Gavin de. Gibbon and His World. London: Thames and Hudson, 1968.
Bowersock, G.W., et al., eds. Edward Gibbon and the Decline and Fall of the Roman Empire. Cambridge, MA: Harvard University Press, 1977.
Craddock, Patricia B. Young Edward Gibbon: Gentleman of Letters. Baltimore, MD: Johns Hopkins University Press, 1982. Biography.
Jordan, David. Gibbon and his Roman Empire. Urbana, IL: University of Illinois Press, 1971.
Keynes, Geoffrey, ed. The Library of Edward Gibbon. 2nd ed. Godalming, England: St. Paul's Bibliographies, 1940, repr. 1980.
Lewis, Bernard. "Gibbon on Muhammad". Daedalus 105:3 (Summer 1976), 89–101.
Low, D.M. Edward Gibbon 1737–1794. London: Chatto and Windus, 1937. Biography.
Momigliano, Arnaldo. "Gibbon's Contributions to Historical Method". Historia 2 (1954), 450–463. Reprinted in Momigliano, Studies in Historiography (New York: Harper & Row, 1966; Garland Pubs., 1985), 40–55.
Porter, Roger J. "Gibbon's Autobiography: Filling Up the Silent Vacancy". Eighteenth-Century Studies 8:1 (Autumn 1974), 1–26.
Stephen, Leslie, "Gibbon's Autobiography", in Studies of a Biographer, Vol. 1 (1898).
Swain, J. W. Edward Gibbon the Historian. New York: St. Martin's Press, 1966.
White, Lynn, Jr., ed. The Transformation of the Roman World: Gibbon's Problem after Two Centuries. Berkeley: University of California Press, 1966.

Since 1985
Berghahn, C.-F., and T. Kinzel, eds., Edward Gibbon im deutschen Sprachraum. Bausteine einer Rezeptionsgeschichte. Heidelberg: Universitätsverlag Winter, 2015.
Bowersock, G. W. Gibbon's Historical Imagination. Stanford: Stanford University Press, 1988.
Burrow, J. W. Gibbon (Past Masters). Oxford: Oxford University Press, 1985.
Carnochan, W. Bliss. Gibbon's Solitude: The Inward World of the Historian. Stanford: Stanford University Press, 1987.
Chaney, Edward, "Reiseerlebnis und 'Traumdeutung' bei Edward Gibbon und William Beckford", Europareisen politisch-sozialer Eliten im 18. Jahrhundert, eds. J. Rees, W. Siebers and H. Tilgner (Berlin, 2002), pp. 243–60.
Chaney, Edward, "Gibbon, Beckford and the Interpretation of Dreams, Waking Thoughts, and Incidents", The Beckford Society Annual Lectures 2000–2003, ed. Jon Millinton (Beckford Society, 2004).
Craddock, Patricia B. Edward Gibbon: A Reference Guide. Boston: G.K. Hall, 1987. A comprehensive listing of secondary literature through 1985; see also her supplement covering the period through 1997.
Ghosh, Peter R. "Gibbon Observed". Journal of Roman Studies 81 (1991), 132–156.
Ghosh, Peter R. "Gibbon's First Thoughts: Rome, Christianity and the Essai sur l'Étude de la Litterature 1758–61". Journal of Roman Studies 85 (1995), 148–164.
Ghosh, Peter R. "The Conception of Gibbon's History", in McKitterick and Quinault, eds., Edward Gibbon and Empire, 271–316.
Ghosh, Peter R. "Gibbon's Timeless Verity: Nature and Neo-Classicism in the Late Enlightenment", in Womersley, Burrow, and Pocock, eds., Edward Gibbon: Bicentenary Essays.
Ghosh, Peter R. "Gibbon, Edward 1737–1794, British historian of Rome and universal historian", in Kelly Boyd, ed., Encyclopedia of Historians and Historical Writing (Chicago: Fitzroy Dearborn, 1999), 461–463.
Levine, Joseph M., "Edward Gibbon and the Quarrel between the Ancients and the Moderns", in Levine, Humanism and History: Origins of Modern English Historiography (Ithaca, NY: Cornell University Press, 1987).
"Truth and Method in Gibbon's Historiography," in Levine, The Autonomy of History: truth and method from Erasmus to Gibbon (Chicago: Chicago University Press, 1999). McKitterick, R., and R. Quinault, eds. Edward Gibbon and Empire. Cambridge: Cambridge University Press, 1997. Norman, Brian. "The Influence of Switzerland on the Life and Writings of Edward Gibbon," in Studies on Voltaire and the Eighteenth Century [SVEC] v.2002:03. Oxford: Voltaire Foundation, 2002. O'Brien, Karen. "English Enlightenment Histories, 1750–c.1815" in . Pocock, J. G. A. Barbarism and Religion, 4 vols.: vol. 1, The Enlightenments of Edward Gibbon, 1737–1764, 1999 [hb: ]; vol. 2, Narratives of Civil Government, 1999 [hb: ]; vol. 3, The First Decline and Fall, 2003 [pb: ]; vol. 4, Barbarians, Savages and Empires, 2005 [pb: ]. all Cambridge Univ. Press. Porter, Roy. Gibbon: Making History. New York: St. Martin's Press, 1989, HB: . Turnbull, Paul. "'Une marionnette infidele': the Fashioning of Edward Gibbon's Reputation as the English Voltaire," in Womersley, Burrow, Pocock, eds. Edward Gibbon: bicentenary essays. |
Among Pakistanis it was also called "Oriental Pakistan" or alternatively, in Islamic terms, "Bangalistan". The word Mashriqi means "Eastern". Kazim, in his book of reviews Kal ki Baat (Readings, Lahore, 2010), tells us that Akbar's minister Abul Fazl had opined that Bangla was actually Bangal, and that the 'al' in it meant enclosure. Today, 'aal' is taken to mean home, from a sense of an outer wall making an enclosure, which corresponds roughly to present-day Bangladesh.

History

One Unit and Islamic Republic

In 1955, Prime Minister Mohammad Ali Bogra implemented the One Unit scheme, which merged the four western provinces into a single unit called West Pakistan, while East Bengal was renamed East Pakistan. Pakistan ended its dominion status and adopted a republican constitution in 1956, which proclaimed an Islamic republic. The populist leader H. S. Suhrawardy of East Pakistan was appointed prime minister of Pakistan.

As soon as he became prime minister, Suhrawardy initiated legal work to revive the joint electorate system. There was strong opposition and resentment toward the joint electorate system in West Pakistan, where the Muslim League took the cause to the public and began calling for the implementation of a separate electorate system. In contrast, the joint electorate was highly popular in East Pakistan. The tug of war with the Muslim League over the appropriate electorate caused problems for his government.

The constitutionally obliged National Finance Commission Program (NFC Program) was suspended by Prime Minister Suhrawardy in 1956, despite the reservations of the four provinces of West Pakistan. Suhrawardy advocated Soviet-style Five-Year Plans to centralise the national economy. In this scheme, East Pakistan's economy would be quickly centralised and all major economic planning would be shifted to West Pakistan. Efforts at centralising the economy met great resistance in West Pakistan, where the elite monopolists and the business community angrily refused to adhere to his policies. The business community in Karachi began its political struggle to undermine any attempt to distribute the US$10 million ICA aid to the better part of East Pakistan and to set up a consolidated national shipping corporation. In the financial cities of West Pakistan, such as Karachi, Lahore, Quetta, and Peshawar, there was a series of major labour strikes against Suhrawardy's economic policies, supported by the elite business community and the private sector. Furthermore, in order to divert attention from the controversial One Unit Program, Prime Minister Suhrawardy tried to end the crises by calling on a small group of investors to set up small businesses in the country.

Despite many initiatives, and despite holding off the NFC Award Program, Suhrawardy's political position and image deteriorated in the four provinces of West Pakistan. Many nationalist leaders and activists of the Muslim League were dismayed by the suspension of the constitutionally obliged NFC Program. His critics and Muslim League leaders observed that, with the suspension of the NFC Award Program, Suhrawardy tried to give more financial allocations, aid, grants, and opportunities to East Pakistan than to West Pakistan and its four provinces. During his last days as prime minister, Suhrawardy tried to remove the economic disparity between the eastern and western wings of the country, but to no avail.
He also tried unsuccessfully to alleviate the food shortage in the country. Suhrawardy strengthened relations with the United States by reinforcing Pakistan's membership in the Central Treaty Organization and the Southeast Asia Treaty Organization, and he also promoted relations with the People's Republic of China. His contribution to the 1956 constitution of Pakistan was substantial, as he played a vital role in incorporating provisions for civil liberties and universal adult franchise, in line with his adherence to the parliamentary form of liberal democracy.

Era of Ayub Khan

In 1958, President Iskandar Mirza enacted martial law as part of a military coup led by the Pakistan Army's chief, Ayub Khan. Roughly two weeks later, President Mirza's relations with the Pakistan Armed Forces deteriorated, and Army Commander General Ayub Khan relieved him of the presidency and forcefully exiled him to the United Kingdom. General Ayub Khan justified his actions on national radio, declaring that "the armed forces and the people demanded a clean break with the past...". Martial law continued until 1962, during which Field Marshal Ayub Khan purged a number of politicians and civil servants from the government and replaced them with military officers. Ayub called his regime a "revolution to clean up the mess of black marketing and corruption". Khan replaced Mirza as president and became the country's strongman for eleven years.

During this period, the government of Field Marshal Ayub Khan commissioned a constitutional bench under Chief Justice of Pakistan Muhammad Shahabuddin, composed of ten senior justices, five from East Pakistan and five from West Pakistan. On 6 May 1961, the commission sent its draft to President Ayub Khan, who thoroughly examined it in consultation with his cabinet. In January 1962, the cabinet finally approved the text of the new constitution, which was promulgated by President Ayub Khan on 1 March 1962 and came into effect on 8 June 1962. Under the 1962 constitution, Pakistan became a presidential republic. Universal suffrage was abolished in favour of a system dubbed "Basic Democracy", under which an electoral college would be responsible for electing the president and the national assembly. The constitution created a gubernatorial system in West and East Pakistan: each province ran its own separate provincial government, and the constitution defined a division of powers between the central government and the provinces. Fatima Jinnah received strong support in East Pakistan during her failed bid to unseat Ayub Khan in the 1965 presidential election.

Dacca was declared the second capital of Pakistan in 1962. It was designated the legislative capital, and Louis Kahn was tasked with designing a national assembly complex. Dacca's population increased in the 1960s. Seven natural gas fields were tapped in the province, and the petroleum industry developed as the Eastern Refinery was established in the port city of Chittagong.

Six Points

In 1966, Awami League leader Sheikh Mujibur Rahman announced the six-point movement in Lahore. The movement demanded greater provincial autonomy and the restoration of democracy in Pakistan. Rahman was indicted for treason in the Agartala Conspiracy Case after launching the six-point movement. He was released during the 1969 uprising in East Pakistan, which ousted Ayub Khan from the presidency.
Final years

Ayub Khan was replaced by General Yahya Khan, who became the Chief Martial Law Administrator and organised the 1970 Pakistani general election. The 1970 Bhola cyclone was one of the deadliest natural disasters of the 20th century, claiming half a million lives, and its disastrous effects caused huge resentment against the federal government. After a decade of military rule, East Pakistan was a hotbed of Bengali nationalism, with open calls for self-determination. When the federal general election was held, the Awami League emerged as the single largest party in the Pakistani parliament. The League won 167 out of 169 seats in East Pakistan, thereby crossing the halfway mark in the National Assembly of Pakistan. In theory, this gave the League the right to form a government under the Westminster tradition. But the League failed to win a single seat in West Pakistan, where the Pakistan Peoples Party emerged as the single largest party with 81 seats.

The military junta stalled the transfer of power and conducted prolonged negotiations with the League. A civil disobedience movement erupted across East Pakistan demanding the convening of parliament. Rahman announced a struggle for independence from Pakistan during a speech on 7 March 1971, and between 7 and 26 March East Pakistan was virtually under the popular control of the Awami League. On Pakistan's Republic Day, 23 March 1971, the first flag of Bangladesh was hoisted in many East Pakistani households. The Pakistan Army launched a crackdown on 26 March, including Operation Searchlight and the 1971 Dhaka University massacre. This led to the Bangladeshi Declaration of Independence. As the Bangladesh Liberation War and the 1971 Bangladesh genocide continued for nine months, East Pakistani military units like the East Bengal Regiment and the East Pakistan Rifles defected to form the Bangladesh Forces. The Provisional Government of Bangladesh allied with neighbouring India, which intervened in the final two weeks of the war and secured the surrender of Pakistan.

Role of the Pakistani military

With Ayub Khan ousted from office in 1969, the Commander of the Pakistani Army, General Yahya Khan, became the country's second ruling chief martial law administrator. Both Bhutto and Mujib strongly disliked General Khan, but patiently endured him and his government as he had promised to hold an election in 1970. During this time, strong nationalist sentiment in East Pakistan was perceived as a threat by the Pakistani Armed Forces and the central military government, which wanted to head off nationalistic violence against non-East Pakistanis. The Eastern Command was under constant pressure from the Awami League and requested an active-duty officer to take charge under such extreme pressure. Flag officers, junior officers, and many high command officers of Pakistan's Armed Forces were highly wary of an appointment in East Pakistan, and the assignment of governing the province was considered extremely difficult by the Pakistan High Military Command. East Pakistan's armed forces, under the military administrations of Major-General Muzaffaruddin and Lieutenant-General Sahabzada Yaqub Khan, made excessive shows of military force to curb the uprising in the province.
With such action, the situation became highly critical, and civil control over the province slipped away from the government. Dissatisfied with the performance of his generals, Yahya Khan removed General Muzaffaruddin and General Yaqub Khan from office on 1 September 1969. The appointment of a military administrator was considered quite difficult and challenging as the crisis continually deteriorated. Vice-Admiral Syed Mohammad Ahsan, Commander-in-Chief of the Pakistan Navy, had previously served as political and military adviser of East Pakistan to former President Ayub Khan. Given such a strong background in administration, and his expertise in East Pakistan affairs, General Yahya Khan appointed Vice-Admiral Syed Mohammad Ahsan as Martial Law Administrator, with absolute authority in his command. He was relieved as naval chief and received an extension from the government.

The tense relations between East and West Pakistan reached a climax in 1970, when the Awami League, the largest East Pakistani political party, led by Sheikh Mujibur Rahman ("Mujib"), won a landslide victory in the national elections in East Pakistan. The party won 160 of the 162 seats allotted to East Pakistan, and thus a majority of the 300 seats in the Parliament. This gave the Awami League the constitutional right to form a government without forming a coalition with any other party. Khan invited Mujib to Rawalpindi to take charge of the office, and negotiations took place between the military government and the Awami League. Bhutto was shocked by the results and threatened his fellow Peoples Party members if they attended the inaugural session of the National Assembly, famously saying he would "break the legs" of any member of his party who dared to enter and attend the session. However, fearing East Pakistani separatism, Bhutto demanded that Mujib form a coalition government. After a secret meeting held in Larkana, Mujib agreed to give Bhutto the office of the presidency, with Mujib as prime minister. General Yahya Khan and his military government were kept unaware of these developments and, under pressure from his own military government, he refused to allow Rahman to become the prime minister of Pakistan. This increased agitation for greater autonomy in East Pakistan. The military police arrested Mujib and Bhutto and placed them in Adiala Jail in Rawalpindi. The news spread like wildfire through both East and West Pakistan, and the struggle for independence began in East Pakistan. The senior high command officers of the Pakistan Armed Forces, and Zulfikar Ali Bhutto, began to pressure General Yahya Khan to take armed action against Mujib and his party.
Bhutto later distanced himself from Yahya Khan after he was arrested by the military police along with Mujib. Soon after the arrests, a high-level meeting was chaired by Yahya Khan, during which the high commanders of the Pakistan Armed Forces unanimously recommended armed and violent military action. Admiral Ahsan, East Pakistan's Martial Law Administrator and Governor, and Air Commodore Zafar Masud, Air Officer Commanding of Dacca's only airbase, were the only officers to object to the plans. When it became obvious that military action in East Pakistan was inevitable, Admiral Ahsan resigned from his position as martial law administrator in protest and immediately flew back to Karachi, West Pakistan. Disheartened and isolated, Admiral Ahsan took early retirement from the Navy and quietly settled in Karachi. Once Operation Searchlight and Operation Barisal commenced, Air Commodore Masud flew to West Pakistan and, unlike Admiral Ahsan, tried to stop the violence in East Pakistan. When he failed in his attempts to meet General Yahya Khan, Masud too resigned from his position as AOC of Dacca airbase and retired from the Air Force.

Lieutenant-General Sahabzada Yaqub Khan was sent to East Pakistan in an emergency, following the major blow of the resignation of Vice-Admiral Ahsan. General Yaqub temporarily assumed control of the province, was also made corps commander of the Eastern Corps, and mobilised all the major forces in East Pakistan.

Sheikh Mujibur Rahman made a declaration of independence at Dacca on 26 March 1971. All major Awami League leaders, including elected members of the National Assembly and Provincial Assembly, fled to neighbouring India, and a government in exile was formed, headed by Mujibur Rahman. While Mujib remained imprisoned in Pakistan, Syed Nazrul Islam was the acting president, with Tazuddin Ahmed as prime minister. The exile government took its oath on 17 April 1971 at Mujib Nagar, within the East Pakistan territory of Kushtia district, and formally formed the government. Colonel MOG Osmani was appointed Commander-in-Chief of the Liberation Forces, and the whole of East Pakistan was divided into eleven sectors, each headed by a sector commander; all the sector commanders were Bengali officers who had defected from the Pakistan Army. This started the Bangladesh Liberation War, in which the freedom fighters, joined in December 1971 by 400,000 Indian soldiers, faced the Pakistani Armed Forces of 365,000 plus paramilitary and collaborationist forces. An additional approximately 25,000 ill-equipped civilian volunteers and police also sided with the Pakistan Armed Forces. Bloody guerrilla warfare ensued in East Pakistan. Poorly trained and inexperienced in guerrilla tactics, the Pakistan Armed Forces were unable to counter such threats, and they and their assets were defeated by the Bangladesh Liberation Forces.

In April 1971, Lieutenant-General Tikka Khan succeeded General Yaqub Khan as the corps commander. General Tikka Khan led massive and violent massacre campaigns in the region and is held responsible for killing hundreds of thousands of Bengalis in East Pakistan, mostly unarmed civilians; for his role, he gained the title of "Butcher of Bengal". Facing an international reaction against Pakistan, the government removed General Tikka as commander of the Eastern front, and a civilian administration was installed under Abdul Motaleb Malik on 31 August 1971, which proved to be ineffective.
However, during the meeting, with no senior officers willing to assume command of East Pakistan, Lieutenant-General Amir Abdullah Khan Niazi volunteered for the post. Because Niazi was inexperienced and the assignment was of large magnitude, the government sent Rear-Admiral Mohammad Shariff as Flag Officer Commanding of the Eastern Naval Command (Pakistan). Admiral Shariff served as General Niazi's deputy during joint military operations. General Niazi, however, proved to be an ineffective commander: he and Air Commodore Inamul Haque Khan, AOC of PAF Base Dacca, failed to launch any operation in East Pakistan against India or its allies. Only Admiral Shariff continued to keep pressure on the Indian Navy until the end of the conflict. Admiral Shariff's effective plans made it nearly impossible for the Indian Navy to land its naval forces on the shores of East Pakistan; the Indian Navy was unable to land forces there while the Pakistan Navy was still offering resistance. The Indian Army entered East Pakistan from three directions. The Indian Navy then decided to wait near the Bay of Bengal until the Army reached the shore. The Indian Air Force dismantled the capability of the Pakistan Air Force in East Pakistan; Air Commodore Inamul Haque Khan failed to offer any serious resistance to its actions. For most of the war, the IAF enjoyed complete dominance of the skies over East Pakistan. On 16 December 1971, the Pakistan Armed Forces surrendered to the joint liberation forces of the Mukti Bahini and the Indian Army, headed by Lieutenant-General Jagjit Singh Arora, the General Officer Commanding-in-Chief (GOC-in-C) of the Eastern Command of the Indian Army. Lieutenant-General AAK Niazi, the last corps commander of the Eastern Corps, signed the Instrument of Surrender at about 4:31 pm. Over 93,000 personnel, including Lt. General Niazi and Admiral Shariff, were taken as prisoners of war. With the surrender of 16 December 1971, East Pakistan was separated from West Pakistan and became the newly independent state of Bangladesh. The Eastern Command, civilian institutions, and paramilitary forces were disbanded. Geography In contrast to the desert and rugged mountainous terrain of West Pakistan, East Pakistan featured the world's largest delta, 700 rivers, and tropical hilly jungles. Administrative geography East Pakistan inherited 17 districts from British Bengal. In 1960, Lower Tippera was renamed Comilla. In 1969, two new districts were created, with Tangail separated from Mymensingh and Patuakhali from Bakerganj. Economy At the time of the Partition of British India, East Bengal had a plantation economy. The Chittagong Tea Auction was established in 1949, as the region was home to the world's largest tea plantations. The East Pakistan Stock Exchange Association was established in 1954. Many wealthy Muslim immigrants from India, Burma, and former British colonies settled in East Pakistan. The Ispahani family, the Africawala brothers, and the Adamjee family were pioneers of industrialisation in the region. Many of modern Bangladesh's leading companies were born in the East Pakistan period. An airline founded in British Bengal, Orient Airways, launched the vital air link between East and West Pakistan with DC-3 aircraft on the Dacca-Calcutta-Delhi-Karachi route.
Orient Airways later evolved into Pakistan International Airlines, whose first chairman was the East Pakistan-based industrialist Mirza Ahmad Ispahani. By the 1950s, East Bengal had surpassed West Bengal as home to the largest jute industries in the world. The Adamjee Jute Mills was the largest jute processing plant in history, and its location in Narayanganj was nicknamed the Dundee of the East. The Adamjees were descendants of Sir Haji Adamjee Dawood, who made his fortune in British Burma. Natural gas was discovered in the northeastern part of East Pakistan in 1955 by the Burmah Oil Company. Industrial use of natural gas began in 1959. The Shell Oil Company and Pakistan Petroleum tapped 7 gas fields in the 1960s. The industrial seaport city of Chittagong hosted the headquarters of Burmah Eastern and Pakistan National Oil. Iran, then a leading oil producer, assisted in establishing the Eastern Refinery in Chittagong. The Comilla Model of the Pakistan Academy for Rural Development (present-day Bangladesh Academy for Rural Development) was conceived by Akhtar Hameed Khan and replicated in many developing countries. In 1965, Pakistan implemented the Kaptai Dam hydroelectric project in the southeastern part of East Pakistan with American assistance. It was the sole hydroelectric dam in East Pakistan. The project was controversial for displacing over 40,000 indigenous people from the area. The centrally located metropolis Dacca witnessed significant urban growth. Economic discrimination and disparity Although East Pakistan had a larger population, West Pakistan dominated the divided country politically and received more money from the common budget. According to the World Bank, there was much economic discrimination against East Pakistan, including higher government spending on West Pakistan, financial transfers from East to West, and the use of the East's foreign exchange surpluses to finance the West's imports. The discrimination occurred despite the fact that East Pakistan generated a major share of Pakistan's exports. The annual rate of growth of the gross domestic product per capita was 4.4% in West Pakistan versus 2.6% in East Pakistan from 1960 to 1965. Bengali politicians pushed for more autonomy, arguing that much of Pakistan's export earnings were generated in East Pakistan from the export of Bengali jute and tea. As late as 1960, approximately 70% of Pakistan's export earnings originated in East Pakistan, although this percentage declined as international demand for jute dwindled. By the mid-1960s, East Pakistan was accounting for less than 60% of the nation's export earnings, and by the time Bangladesh gained its independence in 1971, this percentage had dipped below 50%. In 1966, Mujib demanded that separate foreign exchange accounts be kept and that separate trade offices be opened overseas. By the mid-1960s, West Pakistan was benefiting from Ayub's "Decade of Progress", with its successful Green Revolution in wheat and the expansion of markets for West Pakistani textiles, while East Pakistan's standard of living remained at an abysmally low level. Bengalis were also upset that West Pakistan, the seat of the national government, received more foreign aid. Economists in East Pakistan argued for a "Two Economies Theory" within Pakistan itself, modelled on the Two-Nation Theory that had divided British India. The so-called Two Economies Theory suggested that East and West Pakistan had different economic features which should not be regulated by a single federal government in Islamabad.
Demographics and culture East Pakistan was home to 55% of Pakistan's population. The largest ethnic group of the province were the Bengalis, who in turn were the largest ethnic group in Pakistan. Bengali Muslims formed the predominant majority, followed by Bengali Hindus, Bengali Buddhists and Bengali Christians. East Pakistan also had many tribal groups, including the Chakmas, Marmas, Tangchangyas, Garos, Manipuris, Tripuris, Santhals and Bawms. They largely followed the religions of Buddhism, Christianity and Hinduism. East Pakistan was home to immigrant Muslims from across the Indian subcontinent, including West Bengal, Bihar, Sindh, Gujarat, the Northwest Frontier Province, Assam, Orissa, the Punjab and Kerala. Small Armenian and Jewish minorities resided in East Pakistan. The Asiatic Society of Pakistan was founded in Old Dacca by Ahmad Hasan Dani in 1948. The Varendra Research Museum in Rajshahi was an important center of research on the Indus Valley Civilization. The Bangla Academy was established in 1954. Among East Pakistan's newspapers, The Daily Ittefaq was the leading Bengali-language title, while Holiday was a leading English title. At the time of partition, East Bengal had 80 cinemas. The first movie produced in East Pakistan, Mukh O Mukhosh, was released in 1956.
in the 1990 encyclopedic work The Ants. Because much self-sacrificing behavior on the part of individual ants can be explained on the basis of their genetic interest in the survival of their sisters, with whom workers share 75% of their genes (though in species whose queens mate with multiple males, some workers in a colony are only 25% related), Wilson argued for a sociobiological explanation for all social behavior on the model of the behavior of the social insects. Wilson said, in reference to ants, "Karl Marx was right, socialism works, it is just that he had the wrong species". He asserted that individual ants and other eusocial species were able to reach higher Darwinian fitness by putting the needs of the colony above their own needs as individuals because they lack reproductive independence: individual ants cannot reproduce without a queen, so they can only increase their fitness by working to enhance the fitness of the colony as a whole. Humans, however, do possess reproductive independence, and so individual humans enjoy their maximum level of Darwinian fitness by looking after their own survival and having their own offspring. Consilience, 1998 In his 1998 book Consilience: The Unity of Knowledge, Wilson discussed methods that have been used to unite the sciences, and might be able to unite the sciences with the humanities. He argued that knowledge is a single, unified thing, not divided between science and humanistic inquiry. Wilson used the term "consilience" to describe the synthesis of knowledge from different specialized fields of human endeavor. He defined human nature as a collection of epigenetic rules, the genetic patterns of mental development. He argued that culture and rituals are products, not parts, of human nature. He said art is not part of human nature, but our appreciation of art is. He suggested that concepts such as art appreciation, fear of snakes, or the incest taboo (Westermarck effect) could be studied by the scientific methods of the natural sciences and become part of interdisciplinary research. Spiritual and political beliefs Scientific humanism Wilson coined the phrase scientific humanism for "the only worldview compatible with science's growing knowledge of the real world and the laws of nature", arguing that it is best suited to improve the human condition. In 2003, he was one of the signers of the Humanist Manifesto. God and religion On the question of God, Wilson described his position as provisional deism and explicitly denied the label of "atheist", preferring "agnostic". He explained his faith as a trajectory away from traditional beliefs: "I drifted away from the church, not definitively agnostic or atheistic, just Baptist & Christian no more." Wilson argued that belief in God and the rituals of religion are products of evolution. He argued that they should not be rejected or dismissed, but further investigated by science to better understand their significance to human nature. In his book The Creation, Wilson wrote that scientists ought to "offer the hand of friendship" to religious leaders and build an alliance with them, stating that "Science and religion are two of the most potent forces on Earth and they should come together to save the creation." Wilson made such an appeal to the religious community on the lecture circuit, for example at Midland College, Texas; he reported that the appeal received a "massive reply", that a covenant had been written, and that the "partnership will work to a substantial degree as time goes on".
In a New Scientist interview published on January 21, 2015, however, Wilson said that religion "is dragging us down" and must be eliminated for the sake of human progress: "So I would say that for the sake of human progress, the best thing we could possibly do would be to diminish, to the point of eliminating, religious faiths." Ecology Discussing the reinvigoration of his original fields of study since the 1960s, Wilson said that if he could start his life over he would work in microbial ecology. He studied the mass extinctions of the 20th century and their relationship to modern society, and in 1998 argued at the Capitol for an ecological approach. From the late 1970s Wilson was actively involved in the global conservation of biodiversity, contributing to and promoting research. In 1984 he published Biophilia, a work that explored the evolutionary and psychological basis of humanity's attraction to the natural environment. This work introduced the word biophilia, which influenced the shaping of modern conservation ethics. In 1988 Wilson edited the BioDiversity volume, based on the proceedings of the first US national conference on the subject, which also introduced the term biodiversity into the language. This work was very influential in creating the modern field of biodiversity studies. In 2011, Wilson led scientific expeditions to the Gorongosa National Park in Mozambique and the archipelagos of Vanuatu and New Caledonia in the southwest Pacific. Wilson was part of the international conservation movement, as a consultant to Columbia University's Earth Institute and as a director of the American Museum of Natural History, Conservation International, The Nature Conservancy and the World Wildlife Fund. Understanding the scale of the extinction crisis led him to advocate for forest protection, including the "Act to Save America's Forests", which was introduced repeatedly from 1998 to 2008 but never passed. The Forests Now Declaration calls for new markets-based mechanisms to protect tropical forests. Wilson once said destroying a rainforest for economic gain was like burning a Renaissance painting to cook a meal. In 2014, Wilson called for setting aside 50% of the earth's surface for other species to thrive in, as the only possible strategy to solve the extinction crisis. Wilson's influence regarding ecology through popular science was covered by Alan G. Gross in The Scientific Sublime (2018). Wilson was instrumental in launching the Encyclopedia of Life (EOL) initiative, with the goal of creating a global database to include information on the 1.9 million species recognized by science; currently, it includes information on practically all known species. This open and searchable digital repository for organism traits, measurements, interactions and other data has more than 300 international partners and countless scientists providing global user access to knowledge of life on Earth. For his part, Wilson discovered and described more than 400 species of ants. Awards and honors Wilson's scientific and conservation honors include: Member of the American Academy of Arts and Sciences, elected 1959; Member of the National Academy of Sciences, elected 1969; U.S.
National Medal of Science, 1977; Leidy Award, 1979, from the Academy of Natural Sciences of Philadelphia; Pulitzer Prize for On Human Nature, 1979; Tyler Prize for Environmental Achievement, 1984; ECI Prize, International Ecology Institute, terrestrial ecology, 1987; honorary doctorate from the Faculty of Mathematics and Science at Uppsala University, Sweden, 1987; Academy of Achievement Golden Plate Award, 1988; Science Citation Classic awards from the Institute for Scientific Information for his books The Insect Societies and Sociobiology: The New Synthesis; Crafoord Prize, 1990, awarded by the Royal Swedish Academy of Sciences; Pulitzer Prize for The Ants (with Bert Hölldobler), 1991; International Prize for Biology, 1993; Carl Sagan Award for Public Understanding of Science, 1994; the National Audubon Society's Audubon Medal, 1995; Time magazine's 25 Most Influential People in America, 1995; Certificate of Distinction, International Congresses of Entomology, Florence, Italy, 1996; Benjamin Franklin Medal for Distinguished Achievement in the Sciences of the American Philosophical Society, 1998; American Humanist Association's 1999 Humanist of the Year; Lewis Thomas Prize for Writing about Science, 2000; and the Nierenberg Prize, 2001.

In collaboration with mathematician William H. Bossert, Wilson developed a classification of pheromones based on insect communication patterns. In the 1960s, he collaborated with mathematician and ecologist Robert MacArthur in developing the theory of species equilibrium. In the 1970s he and Daniel S. Simberloff tested this theory on tiny mangrove islets in the Florida Keys: they eradicated all insect species and observed the repopulation by new species. Wilson and MacArthur's book The Theory of Island Biogeography became a standard ecology text. In 1971, he published The Insect Societies, which argues that insect behavior and the behavior of other animals are influenced by similar evolutionary pressures. In 1973, Wilson was appointed the curator of entomology at the Harvard Museum of Comparative Zoology. In 1975, he published the book Sociobiology: The New Synthesis, applying his theories of insect behavior to vertebrates and, in the last chapter, humans. He speculated that evolved and inherited tendencies were responsible for hierarchical social organization among humans. In 1978 he published On Human Nature, which dealt with the role of biology in the evolution of human culture and won a Pulitzer Prize for General Nonfiction. Wilson was named the Frank B. Baird, Jr., Professor of Science in 1976 and, after his retirement from Harvard in 1996, became the Pellegrino University Professor Emeritus. In 1981, after collaborating with Charles Lumsden, he published Genes, Mind and Culture, a theory of gene-culture coevolution. In 1990 he published The Ants, co-written with Bert Hölldobler, which won him his second Pulitzer Prize for General Nonfiction. In the 1990s, he published The Diversity of Life (1992), an autobiography, Naturalist (1994), and Consilience: The Unity of Knowledge (1998), about the unity of the natural and social sciences. Retirement and death In 1996, Wilson officially retired from Harvard University, where he continued to hold the positions of Professor Emeritus and Honorary Curator in Entomology. He fully retired from Harvard in 2002 at age 73. After stepping down, he published more than a dozen books, including a digital biology textbook for the iPad. He founded the E.O. Wilson Biodiversity Foundation, which finances the PEN/E. O.
Wilson Literary Science Writing Award and is an "independent foundation" at the Nicholas School of the Environment, Duke University. Wilson became a special lecturer at Duke University as part of the agreement. Wilson and his wife, Irene, resided in Lexington, Massachusetts. He had a daughter, Catherine. He was preceded in death by his wife (on August 7, 2021) and died in nearby Burlington on December 26, 2021, at the age of 92. Work Sociobiology: The New Synthesis, 1975 Wilson used sociobiology and evolutionary principles to explain the behavior of social insects and then to understand the social behavior of other animals, including humans, thus establishing sociobiology as a new scientific field. He argued that all animal behavior, including that of humans, is the product of heredity, environmental stimuli, and past experiences, and that free will is an illusion. He referred to the biological basis of behavior as the "genetic leash". The sociobiological view is that all animal social behavior is governed by epigenetic rules worked out by the laws of evolution. This theory and research proved to be seminal, controversial, and influential. Wilson argued that the unit of selection is a gene, the basic element of heredity. The target of selection is normally the individual who carries an ensemble of genes of certain kinds. With regard to the use of kin selection to explain the behavior of eusocial insects, Wilson later argued that the "new view that I'm proposing is that it was group selection all along, an idea first roughly formulated by Darwin." Sociobiological research was at the time particularly controversial with regard to its application to humans. The theory established a scientific argument for rejecting the common doctrine of tabula rasa, which holds that human beings are born without any innate mental content and that culture functions to increase human knowledge and aid in survival and success. Reception Sociobiology was initially met with substantial criticism. Several of Wilson's colleagues at Harvard, such as Richard Lewontin and Stephen Jay Gould, were strongly opposed to his ideas regarding sociobiology. Gould, Lewontin, and others from the Boston-area Sociobiology Study Group wrote "Against 'Sociobiology'", an open letter criticizing Wilson's "deterministic view of human society and human action". Although attributed to members of the Sociobiology Study Group, it seems that Lewontin was the main author. In a 2011 interview, Wilson said, "I believe Gould was a charlatan. I believe that he was ... seeking reputation and credibility as a scientist and writer, and he did it consistently by distorting what other scientists were saying and devising arguments based upon that distortion." There was also political opposition. Sociobiology re-ignited the nature-versus-nurture debate. Wilson was accused of racism, misogyny, and sympathy for eugenics. In one incident in November 1978, members of the International Committee Against Racism, a front group of the Marxist Progressive Labor Party, disrupted a lecture he gave at an AAAS conference; one member poured a pitcher of water on Wilson's head while the group chanted "Wilson, you're all wet". Wilson later reflected with pride that he was willing to pursue scientific truth despite such attacks: "I believe ... I was the only scientist in modern times to be physically attacked for an idea." Philosopher Mary Midgley encountered Sociobiology in the process of writing Beast and Man (1979) and significantly rewrote the book to offer a critique of Wilson's views.
Midgley praised the book for the study of animal behavior, its clarity, scholarship, and encyclopedic scope, but extensively critiqued Wilson for conceptual confusion, scientism, and anthropomorphism of genetics. Following the publication of Sociobiology, Wilson extensively corresponded with and supported J. Philippe Rushton, a controversial psychologist at the University of Western Ontario who later headed the Pioneer Fund. After Wilson's death, historians of science Mark Borrello and David Sepkoski reassessed how Wilson's thinking on issues of race and evolution was influenced by Rushton. On Human Nature, 1978 Wilson wrote in his 1978 book On Human Nature, "The evolutionary epic is probably the best myth we will ever have." Wilson's fame prompted the use of the morphed phrase epic of evolution. The book won the Pulitzer Prize in 1979. The Ants, 1990 Wilson, along with Bert Hölldobler, carried out a systematic study of ants and ant behavior, culminating in the 1990 encyclopedic work The Ants.
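The 75% and 25% relatedness figures discussed earlier follow directly from haplodiploid inheritance. The worked calculation below is standard kin-selection arithmetic, added here for illustration rather than quoted from Wilson and Hölldobler:

\[
r_{\text{full sisters}} = \underbrace{\tfrac{1}{2}\cdot 1}_{\text{paternal half, always shared}} + \underbrace{\tfrac{1}{2}\cdot \tfrac{1}{2}}_{\text{maternal half, shared half the time}} = \tfrac{3}{4}, \qquad r_{\text{half sisters}} = 0 + \tfrac{1}{2}\cdot \tfrac{1}{2} = \tfrac{1}{4}.
\]

Because an ant father is haploid, every daughter receives his entire genome, so full sisters are identical across the paternal half of theirs; only the maternal half is shared with the usual probability of one half. When a queen has mated with several males, workers fathered by different males share nothing paternally, leaving only the maternal one quarter.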
AT&T, interested in radio development at this time, primarily for point-to-point extensions of its wired telephone exchanges, purchased the US rights to Lévy's patent and contested Armstrong's grant. The subsequent court reviews continued until 1928, when the District of Columbia Court of Appeals disallowed all nine claims of Armstrong's patent, assigning priority for seven of the claims to Lévy, and one each to Ernst Alexanderson of General Electric and Burton W. Kendall of Bell Laboratories. Although most early radio receivers used regeneration, Armstrong approached RCA's David Sarnoff, whom he had known since giving a demonstration of his regeneration receiver in 1913, about the corporation offering superheterodynes to the general public as a superior product. (The ongoing patent dispute was not a hindrance, because extensive cross-licensing agreements signed in 1920 and 1921 between RCA, Westinghouse and AT&T meant that Armstrong could freely use the Lévy patent.) Superheterodyne sets were initially thought to be prohibitively complicated and expensive, as the initial designs required multiple tuning knobs and used nine vacuum tubes. In conjunction with RCA engineers, Armstrong developed a simpler, less costly design. RCA introduced its superheterodyne Radiola sets in the US market in early 1924, and they were an immediate success, dramatically increasing the corporation's profits. These sets were considered so valuable that RCA would not license the superheterodyne to other US companies until 1930. Super-regeneration circuit The regeneration legal battle had one serendipitous outcome for Armstrong. While he was preparing apparatus to counteract a claim made by a patent attorney, he "accidentally ran into the phenomenon of super-regeneration", where, by rapidly "quenching" the vacuum-tube oscillations, he was able to achieve even greater levels of amplification. A year later, in 1922, Armstrong sold his super-regeneration patent to RCA for $200,000 plus 60,000 shares of corporation stock, which was later increased to 80,000 shares in payment for consulting services. This made Armstrong RCA's largest shareholder, and he noted that "The sale of that invention was to net me more than the sale of the regenerative circuit and the superheterodyne combined". RCA envisioned selling a line of super-regenerative receivers until superheterodyne sets could be perfected for general sales, but it turned out the circuit was not selective enough to make it practical for broadcast receivers. Wide-band FM radio "Static" interference – extraneous noises caused by sources such as thunderstorms and electrical equipment – bedeviled early radio communication using amplitude modulation and perplexed numerous inventors attempting to eliminate it. Many ideas for static elimination were investigated, with little success. In the mid-1920s, Armstrong began researching a solution. He initially, and unsuccessfully, attempted to resolve the problem by modifying the characteristics of AM transmissions. Another approach that had been investigated was the use of frequency modulation (FM) transmissions: instead of varying the strength of the carrier wave as with AM, the frequency of the carrier was changed to represent the desired audio signal. In 1922 John Renshaw Carson of AT&T, inventor of single-sideband modulation (SSB), had published a detailed mathematical analysis which showed that FM transmissions did not provide any improvement over AM.
Although the Carson bandwidth rule for FM is important today, this analysis turned out to be incomplete, because it covered only what is now known as "narrow-band" FM. In early 1928 Armstrong began researching the capabilities of FM. Although others were involved in FM research at this time, Armstrong knew of an RCA project investigating whether FM shortwave transmissions were less susceptible to fading than AM. In 1931 the RCA engineers constructed a successful FM shortwave link transmitting the Schmeling–Stribling fight broadcast from California to Hawaii, and noted at the time that the signals seemed to be less affected by static. The project made little further progress. Working in secret in the basement laboratory of Columbia's Philosophy Hall, Armstrong developed "wide-band" FM, in the process discovering significant advantages over the earlier "narrow-band" FM transmissions. In a "wide-band" FM system, the deviations of the carrier frequency are made to be much larger in magnitude than the frequency of the audio signal; this can be shown to provide better noise rejection. He was granted five US patents covering the basic features of the new system on December 26, 1933. Initially, the primary claim was that his FM system was effective at filtering out the noise produced in receivers by vacuum tubes. Armstrong had a standing agreement to give RCA the right of first refusal to his patents. In 1934 he presented his new system to RCA president Sarnoff. Sarnoff was somewhat taken aback by its complexity, as he had hoped it would be possible to eliminate static merely by adding a simple device to existing receivers. From May 1934 until October 1935 Armstrong conducted field tests of his FM technology from an RCA laboratory located on the 85th floor of the Empire State Building in New York City, with an antenna attached to the building's spire transmitting signals over long distances. These tests helped demonstrate FM's static-reduction and high-fidelity capabilities. RCA, which was heavily invested in perfecting TV broadcasting, chose not to invest in FM, and instructed Armstrong to remove his equipment. Denied the marketing and financial clout of RCA, Armstrong decided to finance his own development and form ties with smaller members of the radio industry, including Zenith and General Electric, to promote his invention. Armstrong thought that FM had the potential to replace AM stations within five years, which he promoted as a boost for the radio manufacturing industry, then suffering from the effects of the Great Depression: making existing AM radio transmitters and receivers obsolete would necessitate that stations buy replacement transmitters and listeners purchase FM-capable receivers. In 1936 he published a landmark paper in the Proceedings of the IRE that documented the superior capabilities of wide-band FM. (This paper would be reprinted in the August 1984 issue of the Proceedings of the IEEE.) A year later, a paper by Murray G. Crosby (inventor of the Crosby system for FM stereo) in the same journal provided further analysis of wide-band FM characteristics and introduced the concept of "threshold", demonstrating that FM provides a superior signal-to-noise ratio when the signal is stronger than a certain level.
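The narrow-band versus wide-band distinction can be made precise with standard FM notation; the following is a textbook summary added for illustration, not drawn from Carson's or Armstrong's papers. For a carrier at frequency $f_c$ modulated by a single tone of frequency $f_m$ with peak frequency deviation $\Delta f$,

\[
s(t) = A\cos\!\Big(2\pi f_c t + \beta \sin(2\pi f_m t)\Big), \qquad \beta = \frac{\Delta f}{f_m}, \qquad B_T \approx 2\,(\Delta f + f_m),
\]

where $B_T$ is the occupied bandwidth estimated by Carson's rule. Narrow-band FM ($\beta \ll 1$) occupies roughly the same bandwidth as AM, about $2 f_m$, which is the regime Carson's 1922 analysis covered; Armstrong's wide-band system instead used $\beta \gg 1$, deliberately spending bandwidth (roughly $2\Delta f$) to gain the noise suppression his receivers then exploited.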
In June 1936, Armstrong gave a formal presentation of his new system at the US Federal Communications Commission (FCC) headquarters. For comparison, he played a jazz record using a conventional AM radio, then switched to an FM transmission. A United Press correspondent was present, and recounted in a wire service report that "if the audience of 500 engineers had shut their eyes they would have believed the jazz band was in the same room. There were no extraneous sounds." Moreover, "Several engineers said after the demonstration that they consider Dr. Armstrong's invention one of the most important radio developments since the first earphone crystal sets were introduced." Armstrong was quoted as saying he could "visualize a time not far distant when the use of ultra-high frequency wave bands will play the leading role in all broadcasting", although the article noted that "A switchover to the ultra-high frequency system would mean the junking of present broadcasting equipment and present receivers in homes, eventually causing the expenditure of billions of dollars." In the late 1930s, as technical advances made it possible to transmit on higher frequencies, the FCC investigated options for increasing the number of broadcasting stations, in addition to ideas for better audio quality, known as "high fidelity". In 1937 it introduced what became known as the Apex band, consisting of 75 broadcasting frequencies from 41.02 to 43.98 MHz. As on the standard broadcast band these were AM stations, but with higher quality audio – in one example, a frequency response from 20 Hz to 17,000 Hz +/- 1 dB – because station separations were 40 kHz instead of the 10 kHz spacings used on the original AM band. Armstrong worked to convince the FCC that a band of FM broadcasting stations would be a superior approach. That year he financed the construction of the first FM radio station, W2XMN (later KE2XCC), at Alpine, New Jersey. FCC engineers had believed that transmissions using high frequencies would travel little farther than line-of-sight distances, limited by the horizon. Yet when operating with 40 kilowatts on 42.8 MHz, the station could be heard clearly at distances matching the daytime coverage of a full-power 50-kilowatt AM station. FCC studies comparing the Apex station transmissions with Armstrong's FM system concluded that his approach was superior. In early 1940, the FCC held hearings on whether to establish a commercial FM service. Following this review, the FCC announced the establishment of an FM band effective January 1, 1941, consisting of forty 200 kHz-wide channels on a band from 42–50 MHz, with the first five channels reserved for educational stations. Existing Apex stations were notified that they would not be allowed to operate after January 1, 1941 unless they converted to FM. Although there was interest in the new FM band by station owners, construction restrictions that went into place during WWII limited the growth of the new service. Following the end of WWII, the FCC moved to standardize its frequency allocations. One area of concern was the effects of tropospheric and sporadic E propagation, which at times reflected station signals over great distances, causing mutual interference. A particularly controversial proposal, spearheaded by RCA, was that the FM band needed to be shifted to higher frequencies to avoid this problem. Armstrong fiercely opposed this reassignment as unneeded, but he lost. The FCC made its decision final on June 27, 1945. It allocated 100 FM channels from 88–108 MHz, and reassigned the former FM band to "non government fixed and mobile" services (42–44 MHz) and television channel 1 (44–50 MHz), even though the interference concerns applied equally to the band's new occupants.
A period of allowing existing FM stations to broadcast on both low and high bands ended at midnight on January 8, 1949, at which time any low-band transmitters were shut down, making obsolete 395,000 receivers that had already been purchased by the public for the original band. Although converters allowing low-band FM sets to receive the high band were manufactured, they ultimately proved complicated to install and often as expensive as, or more expensive than, buying a new high-band set outright. Armstrong felt the FM band reassignment had been inspired primarily by a desire to cause a disruption that would limit FM's ability to challenge the existing radio industry, including RCA's AM radio properties, which included the NBC radio network, plus the other major networks including CBS, ABC and Mutual. The change was thought to have been favored by AT&T, as the elimination of FM relaying stations would require radio stations to lease wired links from that company. Particularly galling was the FCC assignment of TV channel 1 to the 44–50 MHz segment of the old FM band; channel 1 was later deleted, since the same periodic long-distance propagation would make local TV signals unviewable. Although the FM band shift was an economic setback, there was reason for optimism: a book published in 1946 by Charles A. Siepmann heralded FM stations as "Radio's Second Chance". In late 1945, Armstrong contracted with John Orr Young, founding member of the public relations firm Young & Rubicam, to conduct a national campaign promoting FM broadcasting, especially by educational institutions. Article placements promoting both Armstrong personally and FM were made in general-circulation publications including The Nation, Fortune, The New York Times, Atlantic Monthly, and The Saturday Evening Post. In 1940, RCA had offered Armstrong $1,000,000 for a non-exclusive, royalty-free license to use his FM patents. He refused this offer because he felt it would be unfair to the other licensed companies, which had to pay 2% royalties on their sales. Over time this impasse with RCA came to dominate Armstrong's life. RCA countered by conducting its own FM research, eventually developing what it claimed was a non-infringing FM system, and encouraged other companies to stop paying royalties to Armstrong. Outraged by this, in 1948 Armstrong filed suit against RCA and the National Broadcasting Company, accusing them of patent infringement and of having "deliberately set out to oppose and impair the value" of his invention, for which he requested treble damages. Although he was confident that this suit would be successful and result in a major monetary award, the protracted legal maneuvering that followed eventually began to impair his finances, especially after his primary patents expired in late 1950. FM radar During World War II, Armstrong turned his attention to investigations of continuous-wave FM radar funded by government contracts, hoping that the interference-fighting characteristics of wide-band FM combined with a narrow receiver bandwidth to reduce noise would increase range. Primary development took place at Armstrong's Alpine, NJ laboratory, and a duplicate set of equipment was sent to the U.S. Army's Evans Signal Laboratory. The results of his investigations were inconclusive, the war ended, and the project was dropped by the Army. Under the name Project Diana, the Evans staff took up the possibility of bouncing radar signals off the moon.
Calculations showed that standard pulsed radar like the stock SCR-271 would not do the job; higher average power, much wider transmitter pulses, and very narrow receiver bandwidth would be required. They realized that the Armstrong equipment could be modified to accomplish the task. The FM modulator of the transmitter was disabled and the transmitter keyed to produce quarter-second CW pulses. The narrow-band (57 Hz) receiver, which tracked the transmitter frequency, was given an incremental tuning control to compensate for the possible 300 Hz Doppler shift on the lunar echoes. They achieved success on 10 January 1946. Death Bitter and overtaxed by years of litigation and mounting financial problems, Armstrong lashed out at his wife one day with a fireplace poker, striking her on the arm. She left their apartment to stay with her sister, Marjorie Tuttle, in Granby, Connecticut. Sometime during the night of January 31–February 1, 1954, Armstrong jumped to his death from a window of his 12-room apartment on the 13th floor of River House in Manhattan, New York City. The New York Times described the contents of his two-page suicide note to his wife: "he was heartbroken at being unable to see her once again, and expressing deep regret at having hurt her, the dearest thing in his life." The note concluded, "God keep you and Lord have mercy on my Soul." David Sarnoff disclaimed any responsibility, telling Carl Dreher directly that "I did not kill Armstrong." After his death, a friend of Armstrong estimated that 90 percent of his time had been spent on litigation against RCA. U.S. Senator Joseph McCarthy (R-Wisconsin) reported that Armstrong had recently met with one of his investigators, and had been "mortally afraid" that secret radar discoveries by him and other scientists "were being fed to the Communists as fast as they could be developed". Armstrong was buried in Locust Grove Cemetery, Merrimac, Massachusetts. Legacy Following her husband's death, Marion Armstrong took charge of pursuing his estate's legal cases. In late December 1954, it was announced that a settlement of "approximately $1,000,000" had been reached with RCA through arbitration. Dana Raymond of Cravath, Swaine & Moore in New York served as counsel in that litigation. Marion Armstrong was able to formally establish Armstrong as the inventor of FM through protracted court proceedings over five of his basic FM patents, a series of successful infringement suits against other companies that lasted until 1967. It was not until the 1960s that FM stations in the United States started to challenge the popularity of the AM band, helped by the development of FM stereo by General Electric, followed by the FCC's FM Non-Duplication Rule, which limited large-city broadcasters holding both AM and FM licenses to simulcasting on the two frequencies for only half of their broadcast hours. Armstrong's FM system was also used for communications between NASA and the Apollo program astronauts. (He is of no known relation to Apollo astronaut Neil Armstrong.) A US postage stamp was released in his honor in 1983 in a series commemorating American inventors. Armstrong has been called "the most prolific and influential inventor in radio history". The superheterodyne process is still extensively used by radio equipment. Eighty years after its invention, FM technology has started to be supplemented, and in some cases replaced, by more efficient digital technologies.
The introduction of digital television eliminated the FM audio channel that had been used by analog television broadcasts.

Edwin Howard Armstrong was an American electrical engineer and inventor, best known for developing FM (frequency modulation) radio and the superheterodyne receiver system. He held 42 patents and received numerous awards, including the first Medal of Honor awarded by the Institute of Radio Engineers (now IEEE), the French Legion of Honor, the 1941 Franklin Medal and the 1942 Edison Medal. He was inducted into the National Inventors Hall of Fame and included in the International Telecommunication Union's roster of great inventors. Early life Armstrong was born in the Chelsea district of New York City, the oldest of John and Emily (née Smith) Armstrong's three children. His father began working at a young age at the American branch of the Oxford University Press, which published bibles and standard classical works, and eventually advanced to the position of vice president. His parents first met at the North Presbyterian Church, located at 31st Street and Ninth Avenue. His mother's family had strong ties to Chelsea and an active role in church functions. When the church moved north, the Smiths and Armstrongs followed, and in 1895 the Armstrong family moved from their brownstone row house at 347 West 29th Street to a similar house at 26 West 97th Street in the Upper West Side. The family was comfortably middle class. At the age of eight, Armstrong contracted Sydenham's chorea (then known as St. Vitus' Dance), an infrequent but serious neurological disorder precipitated by rheumatic fever. For the rest of his life, Armstrong was afflicted with a physical tic exacerbated by excitement or stress. Due to this illness, he withdrew from public school and was home-tutored for two years. To improve his health, the Armstrong family moved to a house overlooking the Hudson River, at 1032 Warburton Avenue in Yonkers; the Smith family subsequently moved next door. Armstrong's tic and the time missed from school led him to become socially withdrawn. From an early age, Armstrong showed an interest in electrical and mechanical devices, particularly trains. He loved heights and constructed a makeshift backyard antenna tower that included a bosun's chair for hoisting himself up and down its length, to the concern of neighbors. Much of his early research was conducted in the attic of his parents' house. In 1909, Armstrong enrolled at Columbia University in New York City, where he became a member of the Epsilon Chapter of the Theta Xi engineering fraternity and studied under Professor Michael Pupin at the Hartley Laboratories, a separate research unit at Columbia. Another of his instructors, Professor John H. Morecroft, later remembered Armstrong as being intensely focused on the topics that interested him but somewhat indifferent to the rest of his studies. Armstrong challenged conventional wisdom and was quick to question the opinions of both professors and peers. In one case, he recounted how he tricked an instructor he disliked into receiving a severe electrical shock. He also stressed the practical over the theoretical, stating that progress was more likely the product of experimentation and reasoning than of mathematical calculation and the formulae of "mathematical physics". Armstrong graduated from Columbia in 1913, earning an electrical engineering degree. During World War I, Armstrong served in the Signal Corps as a captain and later a major.
Following college graduation, he received a $600 one-year appointment as a laboratory assistant at Columbia, after which he nominally worked as a research assistant, for a salary of $1 a year, under Professor Pupin. Unlike most engineers, Armstrong never became a corporate employee. He set up a self-financed independent research and development laboratory at Columbia and owned his patents outright. In 1934, he filled the vacancy left by John H. Morecroft's death, receiving an appointment as a Professor of Electrical Engineering at Columbia, a position he held for the remainder of his life. Early work Regenerative circuit Armstrong began working on his first major invention while still an undergraduate at Columbia. In late 1906, Lee de Forest had invented the three-element (triode) "grid Audion" vacuum tube. How vacuum tubes worked was not understood at the time. De Forest's initial Audions did not have a high vacuum and developed a blue glow at modest plate voltages; de Forest later improved the vacuum while working for Federal Telegraph. By 1912, vacuum-tube operation was understood, and the value of regenerative circuits using high-vacuum tubes was appreciated. While growing up, Armstrong had experimented with the early temperamental, "gassy" Audions. Spurred by the later discoveries, he developed a keen interest in gaining a detailed scientific understanding of how vacuum tubes worked. In conjunction with Professor Morecroft he used an oscillograph to conduct comprehensive studies. His breakthrough discovery was determining that employing positive feedback (also known as "regeneration") produced amplification hundreds of times greater than previously attained, with the amplified signals now strong enough that receivers could use loudspeakers instead of headphones. Further investigation revealed that, when the feedback was increased beyond a certain level, a vacuum tube would go into oscillation and thus could also be used as a continuous-wave radio transmitter; the feedback relation behind both effects is sketched below. Beginning in 1913 Armstrong prepared a series of comprehensive demonstrations and papers that carefully documented his research, and in late 1913 applied for patent protection covering the regenerative circuit. On October 6, 1914, patent 1,113,149 was issued for his discovery. Although Lee de Forest initially discounted Armstrong's findings, beginning in 1915 de Forest filed a series of competing patent applications that largely copied Armstrong's claims, now stating that he had discovered regeneration first, based on an August 6, 1912 notebook entry made while working for the Federal Telegraph company, prior to the January 31, 1913 date recognized for Armstrong. The result was an interference hearing at the patent office to determine priority. De Forest was not the only other inventor involved – the four competing claimants included Armstrong, de Forest, General Electric's Irving Langmuir, and Alexander Meissner, a German national, whose application was seized by the Office of Alien Property Custodian during World War I. Following the end of WWI, Armstrong retained the law firm of Pennie, Davis, Martin and Edmonds to represent him. To finance his legal expenses he began issuing non-transferable licenses for use of the regenerative patents to a select group of small radio equipment firms, and by November 1920, 17 companies had been licensed. These licensees paid 5% royalties on their sales, which were restricted to "amateurs and experimenters" only.
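The regeneration effect referred to above can be summarized with the standard feedback-amplifier relation; this is a modern textbook formulation added for illustration, not Armstrong's own notation. If an amplifier of gain $A$ has a fraction $\beta$ of its output returned in phase to its input, the overall gain becomes

\[
A_{\text{closed}} = \frac{A}{1 - \beta A},
\]

which grows far beyond $A$ as the feedback term $\beta A$ approaches 1. Past that critical point the circuit no longer needs an input at all and sustains its own oscillation, which matches the behavior Armstrong observed: below the threshold, a regenerative receiver of enormous amplification; above it, a continuous-wave transmitter.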
Meanwhile, Armstrong reviewed his options for selling the commercial rights to his work. Although the obvious candidate was the Radio Corporation of America (RCA), on October 5, 1920 the Westinghouse Electric & Manufacturing Company took out an option for $335,000 on the commercial rights to both the regenerative and superheterodyne patents, with an additional $200,000 to be paid if Armstrong prevailed in the regenerative patent dispute. Westinghouse exercised this option on November 4, 1920. Legal proceedings related to the regeneration patent became separated into two groups of court cases. An initial court action was triggered in 1919 when Armstrong sued de Forest's company in district court, alleging infringement of patent 1,113,149. This court ruled in Armstrong's favor on May 17, 1921. A second line of court cases, the result of the patent office interference hearing, had a different outcome. The interference board had also sided with Armstrong, but he was unwilling to settle with de Forest for less than what he considered full compensation. Thus pressured, de Forest continued his legal defense and appealed the interference board decision to the District of Columbia district court. On May 8, 1924, that court ruled that it was de Forest who should be considered regeneration's inventor. Armstrong (along with much of the engineering community) was shocked by these events, and his side appealed this decision. Although the legal proceeding twice went before the US Supreme Court, in 1928 and 1934, he was unsuccessful in overturning the decision. In response to the second Supreme Court decision upholding de Forest as the inventor of regeneration, Armstrong attempted to return his 1917 IRE Medal of Honor, which had been awarded "in recognition of his work and publications dealing with the action of the oscillating and non-oscillating audion". The organization's board refused to allow him, and issued a statement that it "strongly affirms the original award". Superheterodyne circuit The United States entered WWI in April 1917. Later that year Armstrong was commissioned as a Captain in the U.S. Army Signal Corps and assigned to a laboratory in Paris, France, to help develop radio communication for the Allied war effort. He returned to the US in the autumn of 1919, after being promoted to the rank of Major. (During both world wars, Armstrong gave the US military free use of his patents.) During this period Armstrong's most significant accomplishment was the development of a "supersonic heterodyne" – soon shortened to "superheterodyne" – radio receiver circuit. This circuit made radio receivers more sensitive and selective, and it is still extensively used today. The key feature of the superheterodyne approach is the mixing of the incoming radio signal with a locally generated signal of a different frequency inside the radio set; the circuit that performs this mixing is called the mixer. The result is a fixed, unchanging intermediate frequency, or I.F., signal which is easily amplified and detected by the following circuit stages (the underlying identity is sketched below). In 1919, Armstrong filed an application for a US patent on the superheterodyne circuit, which was issued the next year. This patent was subsequently sold to Westinghouse. The patent was challenged, triggering another patent office interference hearing. Armstrong ultimately lost this patent battle, although the outcome was less controversial than that of the regeneration proceedings. The challenger was Lucien Lévy of France, who had worked on the development of Allied radio communication during WWI.
He had been awarded French patents in 1917 and 1918 that covered some of the same basic ideas used in Armstrong's superheterodyne receiver.
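The frequency conversion at the heart of the superheterodyne, described in the preceding section, follows from an elementary trigonometric identity; the sketch below is illustrative and uses generic symbols rather than values from Armstrong's design. Multiplying an incoming signal at frequency $f_s$ by a local oscillator at $f_{LO}$ produces sum and difference frequencies:

\[
\cos(2\pi f_s t)\,\cos(2\pi f_{LO} t) = \tfrac{1}{2}\cos\big(2\pi (f_{LO} - f_s)\,t\big) + \tfrac{1}{2}\cos\big(2\pi (f_{LO} + f_s)\,t\big).
\]

Tuning the set means moving $f_{LO}$ together with the input stage so that the difference $f_{LO} - f_s$ always equals the same fixed intermediate frequency; every later amplification and detection stage can therefore be optimized once, for that single frequency, which is what made the design both sensitive and selective.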
In June 1936, Armstrong gave a formal presentation of his new system at the US Federal Communications Commission (FCC) headquarters. For comparison, he played a jazz record over a conventional AM radio, then switched to an FM transmission. A United Press correspondent was present and recounted in a wire service report: "if the audience of 500 engineers had shut their eyes they would have believed the jazz band was in the same room. There were no extraneous sounds." Moreover, "Several engineers said after the demonstration that they consider Dr. Armstrong's invention one of the most important radio developments since the [...]"
of subscriptions. Numbers continued rising rapidly until mid-2001, when growth slowed. The game initially launched with volunteer "Guides" who acted as basic customer service and support staff via in-game "petitions". Issues could be forwarded to the Game Master assigned to the server or resolved by the volunteer. Other guides served in administrative functions within the program or assisted the Quest Troupe with dynamic and persistent live events throughout the individual servers. Volunteers were compensated with a free subscription and expansions to the game. In 2003 the program was reorganized, moving the volunteer guides away from their customer service focus and into their current roles as roving "persistent characters" who role-play with the players. In anticipation of the PlayStation's launch, Sony Interactive Studios America made the decision to focus primarily on console titles under the banner 989 Studios, while spinning off its sole computer title, EverQuest, which was ready to launch, to a new computer game division named Redeye (later renamed Verant Interactive). Executives initially had very low expectations for EverQuest, but in 2000, following the game's surprising continued success and unparalleled profits, Sony reorganized Verant Interactive into Sony Online Entertainment (SOE), with Smedley retaining control of the company. Many of the original EverQuest team, including Brad McQuaid and Steve Clover, left SOE by 2002. Growth and sequels The first four expansions were released in traditional physical boxes at roughly one-year intervals. These were highly ambitious and offered huge new landmasses, new playable races and new classes. The expansion Shadows of Luclin (2001) gave a significant facelift to player character models, bringing the dated 1999 graphics up to modern standards. However, non-player characters that do not correspond to any playable race-gender-class combination (such as vendors) were not updated, leading to the coexistence of 1999-era and 2001-era graphics in many locations. The expansion Planes of Power (2002) introduced the Plane of Knowledge, a hub zone from which players could quickly teleport to many other destinations. This made the pre-existing roads and ships largely redundant, and long-distance overland travel is now virtually unheard of. EverQuest made a push to enter the European market in 2002 with the New Dawn promotional campaign, which not only established local servers in Germany, France and Great Britain but also offered localized versions of the game in German and French to accommodate players who prefer those languages to English. In the following year the game also moved beyond the PC market with a Mac OS X version. In 2003 experiments began with digital distribution of expansions, starting with the Legacy of Ykesha. From this point on expansions were less ambitious in scope than the original four, but the production rate increased to two expansions a year instead of one. The same year the franchise also ventured into the console market with EverQuest Online Adventures, released for Sony's internet-capable PlayStation 2. It was the second MMORPG for that console, after Final Fantasy XI. Story-wise it was a prequel, with events taking place 500 years before the original EverQuest. Other spin-off projects were the PC strategy game Lords of EverQuest (2003) and the co-op Champions of Norrath (2004) for the PlayStation 2.
After these side projects, the first proper sequel was released in late 2004, titled simply EverQuest II. The game is set 500 years after the original. EverQuest II faced severe competition from Blizzard's World of Warcraft, which was released at virtually the same time and quickly grew to dominate the MMORPG genre. Decline Since the release of World of Warcraft and other modern MMORPGs, there have been a number of signs that the EverQuest population is shrinking. The national New Dawn servers were discontinued in 2005 and merged into a general (English-language) European server. The 2006 expansion The Serpent's Spine introduced the "adventure-friendly" city of Crescent Reach, in which all races and classes are able (and encouraged) to start. Crescent Reach is meant to provide a more pedagogical starting environment than the original 1999 cities, where players were given almost no guidance on what to do. The common starting city also concentrates the dwindling number of new players in a single location, making grouping easier. 2008's Seeds of Destruction expansion introduced computer-controlled companions called "mercenaries" that can join groups in place of human players, a response to the increasing difficulty of finding other players of appropriate level for group activities. With Seeds of Destruction the production rate also returned to one expansion a year instead of two. In March 2012 EverQuest departed from the traditional monthly subscription business model by introducing three tiers of commitment: a completely free-to-play Bronze Level, a one-time-fee Silver Level, and a subscription Gold Level. The same month saw the closure of EverQuest Online Adventures. Just a few months earlier EverQuest II had gone free-to-play, and SOE flagship Star Wars Galaxies also closed. In June of the same year SOE removed the ability to buy game subscription time with Station Cash without any warning to players. SOE apologized for the abrupt change in policy and reinstated the option for an additional week, after which it was removed permanently. November 18, 2013 saw the closure of the sole Mac OS server, Al'Kabor. In February 2015 Sony sold its online entertainment division to private equity group Columbus Nova, with Sony Online Entertainment subsequently renamed Daybreak Game Company (DBG). An initial period of uncertainty followed, with projects such as expansions and sequels put on hold and staff laid off. The situation stabilized around the game's 16th anniversary celebrations, and a new expansion was released in November 2015. Expansions There have been twenty-eight expansions to the original game since release. Expansions are purchased separately and provide additional content to the game (for example: raising the maximum character level; adding new races, classes, zones, continents, quests, equipment, and game features). Purchasing the latest expansion also grants all previous expansions the player may not yet own. Additionally, the game is updated through downloaded patches. Servers The game runs on multiple game servers, each with a unique name for identification. These names were originally those of deities of the world of Norrath. In technical terms, each game server is actually a cluster of server machines. Once a character is created, it can be played only on that server unless the character is transferred to a new server by the customer service staff, generally for a fee.
Each server has its own community, and players often include the server name when identifying their characters outside of the game. There is an official EverQuest server list, as well as unofficial third-party servers. For example, the Project 1999 EverQuest servers are intended to recreate EverQuest as it existed in the year it launched and through the two subsequent expansions, a period referred to as the "Classic Trilogy". OS X SOE devoted one server (Al'Kabor) to an OS X version of the game, which opened for beta testing in early 2003 and was officially released on June 24, 2003. The game was never developed beyond the Planes of Power expansion, and contained multiple features and bugs not seen on PC servers, a side effect of the codebase having been split off early in the Planes of Power era and never resynchronized with the PC codebase. In January 2012, SOE announced plans to shut down the server but, based on the passionate response of the player base, rescinded the decision and changed Al'Kabor to a free-to-play subscription model. At about the same time, SOE revised the Macintosh client software to run natively on Intel processors. Players running on older, PowerPC-based systems lost access to the game at that point. Finally, in November 2013, SOE closed Al'Kabor. European Two SOE servers were set up to better support players in and around Europe: Antonius Bayle and Kane Bayle. Kane Bayle was merged into Antonius Bayle. With the advent of the New Dawn promotion, three additional servers were set up and maintained by Ubisoft: Venril Sathir (British), Sebilis (French) and Kael Drakkal (German). The downside of these servers was that while it was possible to transfer to them, it was impossible to transfer off. The servers were subsequently acquired by SOE, and all three were merged into the Antonius Bayle server. Reception Reviews of EverQuest were mostly positive upon release in 1999, earning an 85 out of 100 score from review aggregator Metacritic. Comparing it to other online role-playing titles at the time, critics called it "the best game in its class," and the "most immersive and most addictive online RPG to date."
Dan Amrich of GamePro magazine declared that "the bar for online gaming has not so much been raised as obliterated," and that the game's developers had "created the first true online killer app." The reviewer found fault with its repetitive gameplay in the early levels and its lack of sufficient documentation to help new players, urging them to turn to fansites for help instead. Greg Kasavin of GameSpot similarly felt that the game's combat was "uninteresting" but noted that, unlike earlier games in the genre, EverQuest offered the opportunity to play on servers that wouldn't allow players to fight each other unless they chose to, and that it heavily promoted cooperation. Ultimately, he declared that "the combat may be a little boring, the manual may be horrible, the quest system half-baked, and the game not without its small share of miscellaneous bugs. But all you need is to find a like-minded adventurer or two, and all of a sudden EverQuest stands to become one of the most memorable gaming experiences you've ever had." Baldric of Game Revolution likewise stated that the game was more co-operative than Ultima Online, but that there was less interaction with the environment, calling it more "player oriented" instead of "'world' oriented." Despite server issues during the initial launch, reviewers felt that the game played well even on lower-end connections, with Tal Blevins of IGN remarking that it "rarely suffered from major lag issues, even on a 28.8k modem."
The reviewer did feel that the title suffered from a lack of player customization aside from different face types, meaning all characters of the same race looked mostly the same, but its visual quality on the whole was "excellent", with "particularly impressive" spell, lighting, and particle effects. Next Generation stated that "EverQuest is one of the rare games that gives back increasingly as you play it, and it is the newest high watermark by which all future persistent online worlds will be judged." Computer Games Magazine also commended the game's three-dimensional graphics and environments, remarking that "With its 3D graphics, first-person perspective, and elegantly simple combat system, EverQuest has finally given us the first step towards a true virtual world. Internet gaming will never be the same." Accolades EverQuest was named GameSpot's 1999 Game of the Year in its Best & Worst of 1999 awards, the site remarking that "Following EverQuest's release in March, the whole gaming industry effectively ground to a halt [...] At least one prominent game developer blamed EverQuest for product delays, and for several weeks GameSpot's editors were spending more time exploring Norrath than they were doing their jobs." The website also included the game in its list of the Greatest Games of All Time in 2004. GameSpot UK ranked the title 14th on its list of the 100 Best Computer Games of the Millennium in 2000, calling it "a technological tour de force" and "the first online RPG to bring the production values of single-player games to the online masses." The Academy of Interactive Arts and Sciences named EverQuest its Online Game of the Year for 1999, while Game Revolution named it the Best PC RPG of 1999. It was included in Time magazine's Best of 1999 in the "Tech" category, and Entertainment Weekly included the game in its Top Ten Hall of Fame Video Games of the '90s, calling its virtual world "the nearest you could get to being on a Star Trek holodeck." In 2007, Sony Online Entertainment received a Technology & Engineering Emmy Award for EverQuest under the category of "Development of Massively Multiplayer Online Graphical Role Playing Games". During the 2nd annual Game Developers Choice Online Awards in 2011, EverQuest received a Hall of Fame award for its long-term advancement of online gaming, such as being the first MMORPG to feature a guild system and raiding. Editors of Computer Gaming World and GameSpot each nominated EverQuest for their 1999 "Role-Playing Game of the Year" awards, both of which ultimately went to Planescape: Torment. CNET Gamecenter likewise nominated it in this category, but gave the award to Asheron's Call. GameSpot also nominated the title for Best Multiplayer Game of 1999, but gave the award to Quake III Arena. In 2012, 1UP.com ranked EverQuest 57th on its list of the Top 100 Essential Games. Game Informer placed the game 33rd on its list of the top 100 video games of all time in 2009. Sales and subscriptions EverQuest was the most pre-ordered PC title on EBGames.com prior to its release in March 1999. The game had 10,000 active subscribers 24 hours after launch, making it the fastest-selling online role-playing game up until that point. It achieved 60,000 subscribers by April 1999. Six months later, around 225,000 copies of the game had been sold in total, with 150,000 active subscribers. By early 2000, the game's domestic sales alone reached 231,093 copies, which drew revenues of $10.6 million.
NPD Techworld, a firm that tracked sales in the United States, reported 559,948 units of EverQuest sold by December 2002. Subscription numbers rose to over 500,000 active accounts in 2003, four years after release. By the end of 2004 the title's lifetime sales exceeded 3 million copies worldwide, and it reached an active subscriber peak of 550,000. As of September 30, 2020, EverQuest had 66,000 subscribers and 82,000 monthly active players. Controversies Sale of in-game objects/real world economics The sale of in-game objects for real currency is a controversial and lucrative industry, one that raises issues such as the hacking and theft of accounts for profit. Critics often cite how it affects the virtual economy inside the game. In 2001, the sale of in-game items for real-life currency was banned on eBay. One practice in the real-world trade economy involves companies creating characters, powerleveling them to make them powerful, and then reselling the characters for large sums of money or for in-game items of other games. Sony discourages the payment of real-world money for online goods, except on certain "Station Exchange" servers in EverQuest II, launched in July 2005. The program facilitates buying in-game items for real money from fellow players for a nominal fee. The system applies only to select EverQuest II servers; none of the pre-Station Exchange EverQuest II servers or any EverQuest servers are affected. In 2012, Sony added an in-game item called a "Krono", which adds 30 days of game membership in EverQuest and EverQuest II. The item could initially be bought starting at US$17.99, with bulk purchases of up to 25 Kronos for US$424.99 (effectively US$17.00 each). Krono can be resold via player trading, which has allowed it to be frequently used in the real-world trade economy due to its inherent value. Intellectual property and role-playing Mystere incident In October 2000, Verant banned a player by the name of Mystere, allegedly for creating controversial fan fiction, causing outrage among some EverQuest players and
divergent mutations to accumulate between two lineages, the approximate date for the split between lineages can be calculated. The gibbons (family Hylobatidae) and then the orangutans (genus Pongo) were the first groups to split from the line leading to the hominins, including humans—followed by gorillas (genus Gorilla) and, ultimately, by the chimpanzees (genus Pan). The splitting date between hominin and chimpanzee lineages is placed by some between , that is, during the Late Miocene. Speciation, however, appears to have been unusually drawn out. Initial divergence occurred sometime between , but ongoing hybridization blurred the separation and delayed complete divergence for several million years. Patterson (2006) dated the final divergence at . Genetic evidence has also been employed to resolve the question of whether there was any gene flow between early modern humans and Neanderthals, and to improve understanding of early human migration patterns and splitting dates. By comparing the parts of the genome that are not under natural selection and which therefore accumulate mutations at a fairly steady rate, it is possible to reconstruct a genetic tree incorporating the entire human species since the last shared ancestor. Each time a certain mutation (single-nucleotide polymorphism) appears in an individual and is passed on to his or her descendants, a haplogroup is formed comprising all of the descendants of that individual, who will also carry the mutation. By comparing mitochondrial DNA, which is inherited only from the mother, geneticists have concluded that the last female common ancestor whose genetic marker is found in all modern humans, the so-called mitochondrial Eve, must have lived around 200,000 years ago. Genetics Human evolutionary genetics studies how one human genome differs from another, the evolutionary past that gave rise to it, and its current effects. Differences between genomes have anthropological, medical and forensic implications and applications. Genetic data can provide important insight into human evolution. Evidence from the fossil record There is little fossil evidence for the divergence of the gorilla, chimpanzee and hominin lineages. The earliest fossils that have been proposed as members of the hominin lineage are Sahelanthropus tchadensis dating from , Orrorin tugenensis dating from , and Ardipithecus kadabba dating to . Each of these has been argued to be a bipedal ancestor of later hominins but, in each case, the claims have been contested. It is also possible that one or more of these species are ancestors of another branch of African apes, or that they represent a shared ancestor between hominins and other apes. The question of the relationship between these early fossil species and the hominin lineage is thus still to be resolved. From these early species, the australopithecines arose around and diverged into robust (also called Paranthropus) and gracile branches, one of which (possibly A. garhi) probably went on to become ancestral to the genus Homo. The australopithecine species that is best represented in the fossil record is Australopithecus afarensis, with more than 100 fossil individuals represented, found from northern Ethiopia (such as the famous "Lucy") to Kenya and South Africa. Fossils of robust australopithecines such as Au. robustus (or alternatively Paranthropus robustus) and Au./P. boisei are particularly abundant in South Africa at sites such as Kromdraai and Swartkrans, and around Lake Turkana in Kenya.
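The molecular-clock logic described at the start of this section can be made concrete with a short sketch. The inputs below are assumed round numbers of the right order of magnitude, not measured values: under neutral evolution both lineages accumulate substitutions independently, so an observed divergence D implies a split roughly D / (2μ) years ago.

```python
def divergence_time_years(neutral_divergence: float,
                          rate_per_site_per_year: float) -> float:
    """Estimate the time since two lineages split.

    Under neutral evolution both lineages mutate independently, so the
    observed divergence D grows as 2 * mu * T, giving T = D / (2 * mu).
    """
    return neutral_divergence / (2 * rate_per_site_per_year)

# Assumed illustrative inputs: ~1.2% neutral human-chimpanzee divergence
# and a rate of 1e-9 substitutions per site per year yield a split
# roughly 6 million years ago, consistent with the dates discussed above.
print(divergence_time_years(0.012, 1e-9))  # 6000000.0
```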
The earliest member of the genus Homo is Homo habilis, which evolved around . Homo habilis is the first species for which we have positive evidence of the use of stone tools. They developed the Oldowan lithic technology, named after the Olduvai Gorge in which the first specimens were found. Some scientists consider Homo rudolfensis, a larger-bodied group of fossils with similar morphology to the original H. habilis fossils, to be a separate species, while others consider them to be part of H. habilis—simply representing intraspecies variation, or perhaps even sexual dimorphism. The brains of these early hominins were about the same size as that of a chimpanzee, and their main adaptation was bipedalism as an adaptation to terrestrial living. During the next million years, a process of encephalization began and, by the arrival (about ) of Homo erectus in the fossil record, cranial capacity had doubled. Homo erectus was the first of the hominins to emigrate from Africa, and, from , this species spread through Africa, Asia, and Europe. One population of H. erectus, also sometimes classified as a separate species Homo ergaster, remained in Africa and evolved into Homo sapiens. It is believed that these species, H. erectus and H. ergaster, were the first to use fire and complex tools. In Eurasia H. erectus evolved into species such as H. antecessor, H. heidelbergensis and H. neanderthalensis. The earliest fossils of anatomically modern humans are from the Middle Paleolithic, about 300,000–200,000 years ago, such as the Herto and Omo remains of Ethiopia, the Jebel Irhoud remains of Morocco, and the Florisbad remains of South Africa; later fossils from Es Skhul cave in Israel and from Southern Europe begin around 90,000 years ago. As modern humans spread out from Africa, they encountered other hominins such as Homo neanderthalensis and the Denisovans, who may have evolved from populations of Homo erectus that had left Africa around . The nature of interaction between early humans and these sister species has been a long-standing source of controversy, the question being whether humans replaced these earlier species or whether they were in fact similar enough to interbreed, in which case these earlier populations may have contributed genetic material to modern humans. This migration out of Africa is estimated to have begun about 70,000–50,000 years BP, and modern humans subsequently spread globally, replacing earlier hominins either through competition or hybridization. They inhabited Eurasia and Oceania by 40,000 years BP, and the Americas by at least 14,500 years BP. Inter-species breeding The hypothesis of interbreeding, also known as hybridization, admixture or hybrid-origin theory, has been discussed ever since the discovery of Neanderthal remains in the 19th century. The linear view of human evolution began to be abandoned in the 1970s, as different species of humans were discovered that made the linear concept increasingly unlikely. In the 21st century, with the advent of molecular biology techniques and computerization, whole-genome sequencing of the Neanderthal and human genomes was performed, confirming recent admixture between different human species. In 2010, evidence based on molecular biology was published, revealing unambiguous examples of interbreeding between archaic and modern humans during the Middle Paleolithic and early Upper Paleolithic. It has been demonstrated that interbreeding happened in several independent events that included Neanderthals and Denisovans, as well as several unidentified hominins.
Today, approximately 2% of the DNA of all non-African populations (including Europeans, Asians, and Oceanians) is Neanderthal, with traces of Denisovan heritage. Additionally, 4–6% of the genome of modern Melanesians is Denisovan. Comparisons of the human genome to the genomes of Neanderthals, Denisovans and apes can help identify features that set modern humans apart from other hominin species. In a 2016 comparative genomics study, a Harvard Medical School/UCLA research team produced a world map of the distribution of archaic ancestry and made predictions about where Denisovan and Neanderthal genes may be affecting modern human biology. For example, comparative studies in the mid-2010s found several traits related to neurological, immunological, developmental, and metabolic phenotypes that archaic humans had developed in adaptation to European and Asian environments, and that were passed to modern humans through admixture with local hominins. Although the narratives of human evolution are often contentious, several discoveries since 2010 show that human evolution should not be seen as a simple linear or branched progression, but as a mix of related species. In fact, genomic research has shown that hybridization between substantially diverged lineages has been the rule, not the exception, in human evolution. Furthermore, it is argued that hybridization was an essential creative force in the emergence of modern humans. Before Homo Early evolution of primates The evolutionary history of the primates can be traced back 65 million years. One of the oldest known primate-like mammal species, Plesiadapis, came from North America; another, Archicebus, came from China. Other similar basal primates were widespread in Eurasia and Africa during the tropical conditions of the Paleocene and Eocene. David R. Begun concluded that early primates flourished in Eurasia and that a lineage leading to the African apes and humans, including Dryopithecus, migrated south from Europe or Western Asia into Africa. The surviving tropical population of primates—which is seen most completely in the Upper Eocene and lowermost Oligocene fossil beds of the Faiyum depression southwest of Cairo—gave rise to all extant primate species, including the lemurs of Madagascar, lorises of Southeast Asia, galagos or "bush babies" of Africa, and the anthropoids, which are the platyrrhines or New World monkeys, the catarrhines or Old World monkeys, and the great apes, including humans and other hominids. The earliest known catarrhine is Kamoyapithecus from the uppermost Oligocene at Eragaleit in the northern Great Rift Valley in Kenya, dated to 24 million years ago. Its ancestry is thought to lie among species related to Aegyptopithecus, Propliopithecus, and Parapithecus from the Faiyum, at around 35 million years ago. In 2010, Saadanius was described as a close relative of the last common ancestor of the crown catarrhines, and tentatively dated to 29–28 million years ago, helping to fill an 11-million-year gap in the fossil record. In the Early Miocene, about 22 million years ago, the many kinds of arboreally adapted primitive catarrhines from East Africa suggest a long history of prior diversification. Fossils at 20 million years ago include fragments attributed to Victoriapithecus, the earliest Old World monkey. Among the genera thought to be in the ape lineage leading up to 13 million years ago are Proconsul, Rangwapithecus, Dendropithecus, Limnopithecus, Nacholapithecus, Equatorius, Nyanzapithecus, Afropithecus, Heliopithecus, and Kenyapithecus, all from East Africa.
The presence of other generalized non-cercopithecids of the Middle Miocene from sites far distant—Otavipithecus from cave deposits in Namibia, and Pierolapithecus and Dryopithecus from France, Spain and Austria—is evidence of a wide diversity of forms across Africa and the Mediterranean basin during the relatively warm and equable climatic regimes of the Early and Middle Miocene. The youngest of the Miocene hominoids, Oreopithecus, is from coal beds in Italy that have been dated to 9 million years ago. Molecular evidence indicates that the lineage of gibbons (family Hylobatidae) diverged from the line of great apes some 18–12 million years ago, and that of orangutans (subfamily Ponginae) diverged from the other great apes at about 12 million years ago; there are no fossils that clearly document the ancestry of gibbons, which may have originated in a so-far-unknown Southeast Asian hominoid population, but fossil proto-orangutans may be represented by Sivapithecus from India and Griphopithecus from Turkey, dated to around 10 million years ago. Divergence of the human clade from other great apes Species close to the last common ancestor of gorillas, chimpanzees and humans may be represented by Nakalipithecus fossils found in Kenya and Ouranopithecus found in Greece. Molecular evidence suggests that between 8 and 4 million years ago, first the gorillas, and then the chimpanzees (genus Pan), split off from the line leading to the humans. Human DNA is approximately 98.4% identical to that of chimpanzees when comparing single-nucleotide polymorphisms (see human evolutionary genetics). The fossil record of gorillas and chimpanzees, however, is limited; both poor preservation – rain forest soils tend to be acidic and dissolve bone – and sampling bias probably contribute to this problem. Other hominins probably adapted to the drier environments outside the equatorial belt, and there they encountered antelope, hyenas, dogs, pigs, elephants, horses, and others. The equatorial belt contracted after about 8 million years ago, and there is very little fossil evidence for the split—thought to have occurred around that time—of the hominin lineage from the lineages of gorillas and chimpanzees. The earliest fossils argued by some to belong to the human lineage are Sahelanthropus tchadensis (7 Ma) and Orrorin tugenensis (6 Ma), followed by Ardipithecus (5.5–4.4 Ma), with species Ar. kadabba and Ar. ramidus. It has been argued in a study of the life history of Ar. ramidus that the species provides evidence for a suite of anatomical and behavioral adaptations in very early hominins unlike those of any species of extant great ape. This study demonstrated affinities between the skull morphology of Ar. ramidus and that of infant and juvenile chimpanzees, suggesting the species evolved a juvenilised or paedomorphic craniofacial morphology via heterochronic dissociation of growth trajectories. It was also argued that the species provides support for the notion that very early hominins, akin to bonobos (Pan paniscus), the less aggressive species of the genus Pan, may have evolved via the process of self-domestication. Consequently, arguing against the so-called "chimpanzee referential model", the authors suggest it is no longer tenable to use chimpanzee (Pan troglodytes) social and mating behaviors in models of early hominin social evolution. When commenting on the absence of aggressive canine morphology in Ar.
ramidus and the implications this has for the evolution of hominin social psychology, the authors argue that many of the basic human adaptations evolved in the ancient forest and woodland ecosystems of late Miocene and early Pliocene Africa. Consequently, they argue that humans may not represent evolution from a chimpanzee-like ancestor as has traditionally been supposed. This suggests many modern human adaptations represent phylogenetically deep traits, and that the behavior and morphology of chimpanzees may have evolved subsequent to the split with the common ancestor they share with humans. Genus Australopithecus The genus Australopithecus evolved in eastern Africa around 4 million years ago before spreading throughout the continent and eventually becoming extinct 2 million years ago. During this time period various forms of australopiths existed, including Australopithecus anamensis, Au. afarensis, Au. sediba, and Au. africanus. There is still some debate among academics whether certain African hominid species of this time, such as Au. robustus and Au. boisei, constitute members of the same genus; if so, they would be considered robust australopiths, whilst the others would be considered gracile australopiths. However, if these species do indeed constitute their own genus, then they may be given their own name, Paranthropus. The genera and species in question are Australopithecus (4–1.8 Ma), with species Au. anamensis, Au. afarensis, Au. africanus, Au. bahrelghazali, Au. garhi, and Au. sediba; Kenyanthropus (3–2.7 Ma), with species K. platyops; and Paranthropus (3–1.2 Ma), with species P. aethiopicus, P. boisei, and P. robustus. A newly proposed species, Australopithecus deyiremeda, is claimed to have been discovered living at the same time period as Au. afarensis. There is debate over whether Au. deyiremeda is a new species or is Au. afarensis. Australopithecus prometheus, otherwise known as Little Foot, has recently been dated at 3.67 million years old through a new dating technique, making it as old as Au. afarensis. Given the opposable big toe found on Little Foot, it seems that the individual was a good climber, and it is thought that, given the nocturnal predators of the region, it probably built a nesting platform at night in the trees, like gorillas and chimpanzees. Evolution of genus Homo The earliest documented representative of the genus Homo is Homo habilis, which evolved around , and is arguably the earliest species for which there is positive evidence of the use of stone tools. The brains of these early hominins were about the same size as that of a chimpanzee, although it has been suggested that this was the time in which the human SRGAP2 gene doubled, producing a more rapid wiring of the frontal cortex. During the next million years a process of rapid encephalization occurred, and with the arrival of Homo erectus and Homo ergaster in the fossil record, cranial capacity had doubled to 850 cm3. (Such an increase in human brain size is equivalent to each generation having 125,000 more neurons than their parents.) It is believed that Homo erectus and Homo ergaster were the first to use fire and complex tools, and were the first of the hominin line to leave Africa, spreading throughout Africa, Asia, and Europe between .
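The parenthetical neuron figure above can be reproduced with a back-of-the-envelope calculation. The inputs below are assumptions chosen to show how such a figure arises, not values from any underlying study: a gain of roughly 35 billion neurons spread over about 7 million years, with 25-year generations.

```python
# Hedged back-of-the-envelope check of the "125,000 neurons per
# generation" figure; every input here is an assumed round number.
neurons_gained = 35e9      # assumed total increase in neuron count
span_years = 7e6           # assumed elapsed time since the chimpanzee split
generation_years = 25      # assumed generation length

generations = span_years / generation_years   # 280,000 generations
print(neurons_gained / generations)           # 125000.0 per generation
```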
According to the recent African origin of modern humans theory, modern humans evolved in Africa, possibly from Homo heidelbergensis, Homo rhodesiensis or Homo antecessor, and migrated out of the continent some 50,000 to 100,000 years ago, gradually replacing local populations of Homo erectus, Denisova hominins, Homo floresiensis, Homo luzonensis and Homo neanderthalensis. Archaic Homo sapiens, the forerunner of anatomically modern humans, evolved in the Middle Paleolithic between 400,000 and 250,000 years ago. Recent DNA evidence suggests that several haplotypes of Neanderthal origin are present among all non-African populations, and Neanderthals and other hominins, such as Denisovans, may have contributed up to 6% of their genome to present-day humans, suggestive of limited interbreeding between these species. The transition to behavioral modernity, with the development of symbolic culture, language, and specialized lithic technology, happened around 50,000 years ago according to some anthropologists, although others point to evidence of a gradual change in behavior over a longer time span. Homo sapiens is the only extant species of its genus, Homo. While some (extinct) Homo species might have been ancestors of Homo sapiens, many, perhaps most, were likely "cousins", having speciated away from the ancestral hominin line. There is as yet no consensus as to which of these groups should be considered separate species and which subspecies; this may be due to the dearth of fossils or to the slight differences used to classify species in the genus Homo. The Sahara pump theory (describing an occasionally passable "wet" Sahara desert) provides one possible explanation of the early variation in the genus Homo. Based on archaeological and paleontological evidence, it has been possible to infer, to some extent, the ancient dietary practices of various Homo species and to study the role of diet in physical and behavioral evolution within Homo. Some anthropologists and archaeologists subscribe to the Toba catastrophe theory, which posits that the supereruption of Lake Toba on the island of Sumatra in Indonesia some 70,000 years ago caused global consequences, killing the majority of humans and creating a population bottleneck that affected the genetic inheritance of all humans today. The genetic and archaeological evidence for this remains in question, however. H. habilis and H. gautengensis Homo habilis lived from about 2.8 to 1.4 Ma. The species evolved in South and East Africa in the Late Pliocene or Early Pleistocene, 2.5–2 Ma, when it diverged from the australopithecines. Homo habilis had smaller molars and larger brains than the australopithecines, and made tools from stone and perhaps animal bones. One of the first known hominins, it was nicknamed "handy man" by discoverer Louis Leakey due to its association with stone tools. Some scientists have proposed moving this species out of Homo and into Australopithecus, since the morphology of its skeleton is more adapted to living in trees than to moving on two legs like Homo sapiens. In May 2010, a new species, Homo gautengensis, was discovered in South Africa. H. rudolfensis and H. georgicus These are proposed species names for fossils from about 1.9–1.6 Ma, whose relation to Homo habilis is not yet clear. Homo rudolfensis refers to a single, incomplete skull from Kenya. Scientists have suggested that this was simply another Homo habilis, but this has not been confirmed.
Homo georgicus, from Georgia, may be an intermediate form between Homo habilis and Homo erectus, or a subspecies of Homo erectus. H. ergaster and H. erectus The first fossils of Homo erectus were discovered by Dutch physician Eugène Dubois in 1891 on the Indonesian island of Java. He originally named the material Anthropopithecus erectus (1892–1893, considered at that point to be a chimpanzee-like fossil primate) and Pithecanthropus erectus (1893–1894, changing his mind as of , based on its morphology, which he considered intermediate between that of humans and apes). Years later, in the 20th century, the German physician and paleoanthropologist Franz Weidenreich (1873–1948) compared in detail the characters of Dubois' Java Man, then named Pithecanthropus erectus, with those of the Peking Man, then named Sinanthropus pekinensis. Weidenreich concluded in 1940 that, because of their anatomical similarity with modern humans, it was necessary to gather all these specimens from Java and China into a single species of the genus Homo, the species Homo erectus. Homo erectus lived from about 1.8 Ma to about 70,000 years ago – which would indicate that they were probably wiped out by the Toba catastrophe; however, nearby Homo floresiensis survived it. The early phase of Homo erectus, from 1.8 to 1.25 Ma, is considered by some to be a separate species, Homo ergaster, or as Homo erectus ergaster, a subspecies of Homo erectus. In Africa in the Early Pleistocene, 1.5–1 Ma, some populations of Homo habilis are thought to have evolved larger brains and to have made more elaborate stone tools; these differences and others are sufficient for anthropologists to classify them as a new species, Homo erectus. The evolution of locking knees and the movement of the foramen magnum are thought to be likely drivers of these larger population changes. This species also may have used fire to cook meat. Richard Wrangham notes that Homo seems to have been ground-dwelling, with reduced intestinal length and smaller dentition, and to have "swelled our brains to their current, horrendously fuel-inefficient size", suggesting that control of fire and the increased nutritional value released through cooking was the key adaptation that separated Homo from tree-sleeping australopithecines. A famous example of Homo erectus is Peking Man; others were found in Asia (notably in Indonesia), Africa, and Europe. Many paleoanthropologists now use the term Homo ergaster for the non-Asian forms of this group, and reserve Homo erectus only for those fossils that are found in Asia and meet certain skeletal and dental requirements which differ slightly from those of H. ergaster. H. cepranensis and H. antecessor These are proposed as species that may be intermediate between H. erectus and H. heidelbergensis. H. antecessor is known from fossils from Spain and England that are dated 1.2 Ma–500 ka. H. cepranensis refers to a single skull cap from Italy, estimated to be about 800,000 years old. H. heidelbergensis H. heidelbergensis ("Heidelberg Man") lived from about 800,000 to about 300,000 years ago; it has also been proposed as Homo sapiens heidelbergensis or Homo sapiens paleohungaricus. H. rhodesiensis, and the Gawis cranium H. rhodesiensis is estimated to be 300,000–125,000 years old. Most current researchers place Rhodesian Man within the group of Homo heidelbergensis, though other designations such as archaic Homo sapiens and Homo sapiens rhodesiensis have been proposed.
In February 2006 a fossil, the Gawis cranium, was found which may represent a species intermediate between H. erectus and H. sapiens, or one of many evolutionary dead ends. The skull from Gawis, Ethiopia, is believed to be 500,000–250,000 years old. Only summary details are known, and the finders have not yet released a peer-reviewed study. Gawis man's facial features suggest it was either an intermediate species or an example of a "Bodo man" female. Neanderthal and Denisovan Homo neanderthalensis, alternatively designated as Homo sapiens neanderthalensis, lived in Europe and Asia from 400,000 to about 28,000 years ago. There are a number of clear anatomical differences between anatomically modern humans (AMH) and Neanderthal populations. Many of these relate to the superior adaptation to cold environments possessed by the Neanderthal populations. Their surface-to-volume ratio is an extreme version of that found amongst Inuit populations, indicating that they were less inclined to lose body heat than were AMH. Brain endocasts show that Neanderthals also had significantly larger brains, which would seem to indicate that the intellectual superiority of AMH populations is questionable. More recent research by Eiluned Pearce, Chris Stringer and R.I.M. Dunbar, however, has shown important differences in brain architecture. For example, the larger size of both the orbital chamber and the occipital lobe suggests that Neanderthals had better visual acuity than modern humans, which would have given them superior vision in the dim light conditions of Glacial Europe. It also seems that the higher body mass of Neanderthals required a correspondingly larger brain mass for body care and control. The Neanderthal populations seem to have been physically superior to AMH populations, and these differences may have been sufficient to give Neanderthals an environmental advantage over AMH populations from 75,000 to 45,000 years BP. With these differences, however, Neanderthal brains had a smaller area available for social functioning. Plotting possible group size from endocranial volume (minus occipital lobe size) suggests that AMH populations had a Dunbar's number of 144 possible relationships, while Neanderthal populations seem to have been limited to about 120 individuals. This would show up as a larger number of possible mates for AMH, with increased risks of inbreeding amongst Neanderthal populations. It also suggests that humans had larger trade catchment areas than Neanderthals (confirmed in the distribution of stone tools). With larger populations, social and technological innovations were easier to fix in human populations, all of which may have contributed to the fact that modern Homo sapiens replaced the Neanderthal populations by 28,000 BP. Earlier evidence from sequencing mitochondrial DNA suggested that no significant gene flow occurred between H. neanderthalensis and H. sapiens, and that the two were separate species that shared a common ancestor about 660,000 years ago. However, a sequencing of the Neanderthal genome in 2010 indicated that Neanderthals did indeed interbreed with anatomically modern humans circa 45,000 to 80,000 years ago (at the approximate time that modern humans migrated out from Africa, but before they dispersed into Europe, Asia and elsewhere).
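The group-size estimates above come from regressing primate group size on relative neocortex size. A hedged sketch of that calculation follows, using the regression published by Dunbar (1992); the neocortex-ratio inputs are assumed placeholders chosen to reproduce the two figures quoted, not measurements from the Pearce, Stringer and Dunbar study.

```python
import math

def dunbar_group_size(neocortex_ratio: float) -> float:
    """Predicted mean group size from Dunbar's (1992) primate regression:
    log10(N) = 0.093 + 3.389 * log10(neocortex ratio)."""
    return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

# Assumed illustrative ratios: ~4.1 for modern humans reproduces the
# familiar ~150 figure (here ~148, close to the 144 cited); a slightly
# lower effective ratio, as if discounting the enlarged Neanderthal
# visual areas, yields the ~120 estimate.
print(round(dunbar_group_size(4.10)))  # 148
print(round(dunbar_group_size(3.86)))  # ~120
```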
The genetic sequencing of a 40,000-year-old human skeleton from Romania showed that 11% of its genome was Neanderthal, and it was estimated that the individual had a Neanderthal ancestor 4–6 generations previously, in addition to a contribution from earlier interbreeding in the Middle East. Though this interbred Romanian population seems not to have been ancestral to modern humans, the finding indicates that interbreeding happened repeatedly. All modern non-African humans have about 1% to 4% or, according to more recent data, about 1.5% to 2.6% of their DNA derived from Neanderthal DNA, and this finding is consistent with recent studies indicating that the divergence of some human alleles dates to one Ma, although the interpretation of these studies has been questioned. Neanderthals and Homo sapiens could have co-existed in Europe for as long as 10,000 years, during which populations of anatomically modern humans exploded, vastly outnumbering Neanderthals and possibly outcompeting them by sheer numerical strength. In 2008, archaeologists working at the site of Denisova Cave in the Altai Mountains of Siberia uncovered a small bone fragment from the fifth finger of a juvenile member of the Denisovans. Artifacts, including a bracelet, excavated in the cave at the same level were carbon-dated to around 40,000 BP. As DNA had survived in the fossil fragment due to the cool climate of the Denisova Cave, both mtDNA and nuclear DNA were sequenced. While the divergence point of the mtDNA was unexpectedly deep in time, the full genomic sequence suggested the Denisovans belonged to the same lineage as Neanderthals, with the two diverging shortly after their line split from the lineage that gave rise to modern humans. Modern humans are known to have overlapped with Neanderthals in Europe and the Near East for possibly more than 40,000 years, and the discovery raises the possibility that Neanderthals, Denisovans, and modern humans may have co-existed and interbred. The existence of this distant branch creates a much more complex picture of humankind during the Late Pleistocene than previously thought. Evidence has also been found that as much as 6% of the DNA of some modern Melanesians derives from Denisovans, indicating limited interbreeding in Southeast Asia. Alleles thought to have originated in Neanderthals and Denisovans have been identified at several genetic loci in the genomes of modern humans outside Africa. HLA haplotypes from Denisovans and Neanderthals represent more than half the HLA alleles of modern Eurasians, indicating strong positive selection for these introgressed alleles. Corinne Simoneti at Vanderbilt University in Nashville and her team have found, from the medical records of 28,000 people of European descent, that the presence of Neanderthal DNA segments may be associated with a greater likelihood of suffering from depression. The flow of genes from Neanderthal populations to modern humans was not all one way. Sergi Castellano of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, reported in 2016 that, while Denisovan and Neanderthal genomes are more related to each other than they are to us, Siberian Neanderthal genomes show more similarity to the modern human gene pool than do European Neanderthal genomes. The evidence suggests that the Neanderthal populations interbred with modern humans possibly 100,000 years ago, probably somewhere in the Near East.
Studies of a Neanderthal child at Gibraltar show, from brain development and tooth eruption, that Neanderthal children may have matured more rapidly than Homo sapiens children do. H. floresiensis H. floresiensis, which lived from approximately 190,000 to 50,000 years before present (BP), has been nicknamed the hobbit for its small size, possibly a result of insular dwarfism. H. floresiensis is intriguing both for its size and its age, being an example of a recent species of the genus Homo that exhibits derived traits not shared with modern humans. In other words, H. floresiensis shares a common ancestor with modern humans, but split from the modern human lineage and followed a distinct evolutionary path. The main find was a skeleton believed to be a woman of about 30 years of age. Found in 2003, it has been dated to approximately 18,000 years old. The living woman was estimated to be one meter in height, with a brain volume of just 380 cm3 (considered small even for a chimpanzee and less than a third of the H. sapiens average of 1400 cm3). However, there is an ongoing debate over whether H. floresiensis is indeed a separate species. Some scientists hold that H. floresiensis was a modern H. sapiens with pathological dwarfism. This hypothesis is supported in part because some modern humans who live on Flores, the Indonesian island where the skeleton was found, are pygmies. This, coupled with pathological dwarfism, could have resulted in a significantly diminutive human. The other major attack on H. floresiensis as a separate species is that it was found with tools only associated with H. sapiens. The hypothesis of pathological dwarfism, however, fails to explain additional anatomical features that are unlike those of modern humans (diseased or not) but much like those of ancient members of our genus. Aside from cranial features, these features include the form of bones in the wrist, forearm, shoulder,
The earliest fossils of anatomically modern humans are from the Middle Paleolithic, about 300,000–200,000 years ago, and include the Herto and Omo remains of Ethiopia, the Jebel Irhoud remains of Morocco, and the Florisbad remains of South Africa; later fossils, from the Es Skhul cave in Israel and from Southern Europe, begin around 90,000 years ago. As modern humans spread out from Africa, they encountered other hominins such as Homo neanderthalensis and the Denisovans, who may have evolved from populations of Homo erectus that had left Africa much earlier. The nature of interaction between early humans and these sister species has been a long-standing source of controversy, the question being whether humans replaced these earlier species or whether they were in fact similar enough to interbreed, in which case these earlier populations may have contributed genetic material to modern humans. This migration out of Africa is estimated to have begun about 70,000–50,000 years BP, and modern humans subsequently spread globally, replacing earlier hominins either through competition or hybridization. They inhabited Eurasia and Oceania by 40,000 years BP, and the Americas by at least 14,500 years BP. Inter-species breeding The hypothesis of interbreeding, also known as hybridization, admixture or hybrid-origin theory, has been discussed ever since the discovery of Neanderthal remains in the 19th century. The linear view of human evolution began to be abandoned in the 1970s as newly discovered species of humans made the linear concept increasingly unlikely. In the 21st century, with the advent of molecular biology techniques and computerization, whole-genome sequencing of the Neanderthal and human genomes was performed, confirming recent admixture between different human species. In 2010, evidence based on molecular biology was published, revealing unambiguous examples of interbreeding between archaic and modern humans during the Middle Paleolithic and early Upper Paleolithic. It has been demonstrated that interbreeding happened in several independent events that included Neanderthals and Denisovans, as well as several unidentified hominins. Today, approximately 2% of DNA from all non-African populations (including Europeans, Asians, and Oceanians) is Neanderthal, with traces of Denisovan heritage. Also, 4–6% of the modern Melanesian genome is Denisovan. Comparisons of the human genome to the genomes of Neanderthals, Denisovans and apes can help identify features that set modern humans apart from other hominin species. In a 2016 comparative genomics study, a Harvard Medical School/UCLA research team made a world map of the distribution of Denisovan and Neanderthal genes and predicted where they may be impacting modern human biology. For example, comparative studies in the mid-2010s found several traits related to neurological, immunological, developmental, and metabolic phenotypes that archaic humans had developed in European and Asian environments and that were passed on to modern humans through admixture with local hominins. Although the narratives of human evolution are often contentious, several discoveries since 2010 show that human evolution should not be seen as a simple linear or branched progression, but as a mix of related species. In fact, genomic research has shown that hybridization between substantially diverged lineages is the rule, not the exception, in human evolution. Furthermore, it is argued that hybridization was an essential creative force in the emergence of modern humans.
Before Homo Early evolution of primates The evolutionary history of the primates can be traced back 65 million years. One of the oldest known primate-like mammal species, Plesiadapis, came from North America; another, Archicebus, came from China. Other similar basal primates were widespread in Eurasia and Africa during the tropical conditions of the Paleocene and Eocene. David R. Begun concluded that early primates flourished in Eurasia and that a lineage leading to the African apes and humans, including Dryopithecus, migrated south from Europe or Western Asia into Africa. The surviving tropical population of primates—which is seen most completely in the Upper Eocene and lowermost Oligocene fossil beds of the Faiyum depression southwest of Cairo—gave rise to all extant primate species, including the lemurs of Madagascar, the lorises of Southeast Asia, the galagos or "bush babies" of Africa, and the anthropoids, which are the Platyrrhines or New World monkeys, the Catarrhines or Old World monkeys, and the great apes, including humans and other hominids. The earliest known catarrhine is Kamoyapithecus from the uppermost Oligocene at Eragaleit in the northern Great Rift Valley in Kenya, dated to 24 million years ago. It is thought to descend from species related to Aegyptopithecus, Propliopithecus, and Parapithecus from the Faiyum, dated at around 35 million years ago. In 2010, Saadanius was described as a close relative of the last common ancestor of the crown catarrhines, and tentatively dated to 29–28 million years ago, helping to fill an 11-million-year gap in the fossil record. In the Early Miocene, about 22 million years ago, the many kinds of arboreally adapted primitive catarrhines from East Africa suggest a long history of prior diversification. Fossils at 20 million years ago include fragments attributed to Victoriapithecus, the earliest Old World monkey. Among the genera thought to be in the ape lineage leading up to 13 million years ago are Proconsul, Rangwapithecus, Dendropithecus, Limnopithecus, Nacholapithecus, Equatorius, Nyanzapithecus, Afropithecus, Heliopithecus, and Kenyapithecus, all from East Africa. The presence of other generalized non-cercopithecids of the Middle Miocene from sites far distant—Otavipithecus from cave deposits in Namibia, and Pierolapithecus and Dryopithecus from France, Spain and Austria—is evidence of a wide diversity of forms across Africa and the Mediterranean basin during the relatively warm and equable climatic regimes of the Early and Middle Miocene. The youngest of the Miocene hominoids, Oreopithecus, is from coal beds in Italy that have been dated to 9 million years ago. Molecular evidence indicates that the lineage of gibbons (family Hylobatidae) diverged from the line of great apes some 18–12 million years ago, and that of orangutans (subfamily Ponginae) diverged from the other great apes at about 12 million years; there are no fossils that clearly document the ancestry of gibbons, which may have originated in a so-far-unknown Southeast Asian hominoid population, but fossil proto-orangutans may be represented by Sivapithecus from India and Griphopithecus from Turkey, dated to around 10 million years ago. Divergence of the human clade from other great apes Species close to the last common ancestor of gorillas, chimpanzees and humans may be represented by Nakalipithecus fossils found in Kenya and Ouranopithecus found in Greece.
Molecular evidence suggests that between 8 and 4 million years ago, first the gorillas, and then the chimpanzees (genus Pan), split off from the line leading to the humans. Human DNA is approximately 98.4% identical to that of chimpanzees when comparing single nucleotide polymorphisms (see human evolutionary genetics). The fossil record of gorillas and chimpanzees, however, is limited; both poor preservation – rain forest soils tend to be acidic and dissolve bone – and sampling bias probably contribute to this problem. Other hominins probably adapted to the drier environments outside the equatorial belt, and there they encountered antelope, hyenas, dogs, pigs, elephants, horses, and others. The equatorial belt contracted after about 8 million years ago, and there is very little fossil evidence for the split—thought to have occurred around that time—of the hominin lineage from the lineages of gorillas and chimpanzees. The earliest fossils argued by some to belong to the human lineage are Sahelanthropus tchadensis (7 Ma) and Orrorin tugenensis (6 Ma), followed by Ardipithecus (5.5–4.4 Ma), with species Ar. kadabba and Ar. ramidus. It has been argued in a study of the life history of Ar. ramidus that the species provides evidence for a suite of anatomical and behavioral adaptations in very early hominins unlike those of any extant great ape. This study demonstrated affinities between the skull morphology of Ar. ramidus and that of infant and juvenile chimpanzees, suggesting the species evolved a juvenilised or paedomorphic craniofacial morphology via heterochronic dissociation of growth trajectories. It was also argued that the species provides support for the notion that very early hominins, akin to bonobos (Pan paniscus), the less aggressive species of the genus Pan, may have evolved via the process of self-domestication. Arguing against the so-called "chimpanzee referential model", the authors suggest it is no longer tenable to use chimpanzee (Pan troglodytes) social and mating behaviors in models of early hominin social evolution. Commenting on the absence of aggressive canine morphology in Ar. ramidus and its implications for the evolution of hominin social psychology, the authors argue that many of the basic human adaptations evolved in the ancient forest and woodland ecosystems of late Miocene and early Pliocene Africa. Consequently, they argue that humans may not represent evolution from a chimpanzee-like ancestor as has traditionally been supposed. This suggests many modern human adaptations represent phylogenetically deep traits and that the behavior and morphology of chimpanzees may have evolved subsequent to the split with the common ancestor they share with humans. Genus Australopithecus The genus Australopithecus evolved in eastern Africa around 4 million years ago before spreading throughout the continent and eventually becoming extinct 2 million years ago. During this time period various forms of australopiths existed, including Australopithecus anamensis, Au. afarensis, Au. sediba, and Au. africanus. There is still some debate among academics whether certain African hominid species of this time, such as Au. robustus and Au. boisei, constitute members of the same genus; if so, they would be considered robust australopiths, whilst the others would be considered gracile australopiths. However, if these species do indeed constitute their own genus, then they may be given their own name, Paranthropus.
Australopithecus (4–1.8 Ma), with species Au. anamensis, Au. afarensis, Au. africanus, Au. bahrelghazali, Au. garhi, and Au. sediba; Kenyanthropus (3–2.7 Ma), with species K. platyops; and Paranthropus (3–1.2 Ma), with species P. aethiopicus, P. boisei, and P. robustus. A newly proposed species, Australopithecus deyiremeda, is claimed to have been discovered living at the same time as Au. afarensis. There is debate over whether Au. deyiremeda is a new species or is Au. afarensis. Australopithecus prometheus, otherwise known as Little Foot, has recently been dated at 3.67 million years old through a new dating technique, making the genus Australopithecus as old as Au. afarensis. Given the opposable big toe found on Little Foot, it seems that it was a good climber, and it is thought that, given the nocturnal predators of the region, it probably built a nesting platform at night in the trees, as gorillas and chimpanzees do. Evolution of genus Homo The earliest documented representative of the genus Homo is Homo habilis, which evolved around 2.8 million years ago, and is arguably the earliest species for which there is positive evidence of the use of stone tools. The brains of these early hominins were about the same size as that of a chimpanzee, although it has been suggested that this was the time in which the human SRGAP2 gene doubled, producing a more rapid wiring of the frontal cortex. During the next million years a process of rapid encephalization occurred, and with the arrival of Homo erectus and Homo ergaster in the fossil record, cranial capacity had doubled to 850 cm3. (Such an increase in human brain size is equivalent to each generation having 125,000 more neurons than their parents; a rough arithmetic check of this figure is sketched after this passage.) It is believed that Homo erectus and Homo ergaster were the first to use fire and complex tools, and were the first of the hominin line to leave Africa, spreading throughout Africa, Asia, and Europe. According to the recent African origin of modern humans theory, modern humans evolved in Africa possibly from Homo heidelbergensis, Homo rhodesiensis or Homo antecessor and migrated out of the continent some 50,000 to 100,000 years ago, gradually replacing local populations of Homo erectus, Denisova hominins, Homo floresiensis, Homo luzonensis and Homo neanderthalensis. Archaic Homo sapiens, the forerunner of anatomically modern humans, evolved in the Middle Paleolithic between 400,000 and 250,000 years ago. Recent DNA evidence suggests that several haplotypes of Neanderthal origin are present among all non-African populations, and Neanderthals and other hominins, such as Denisovans, may have contributed up to 6% of their genome to present-day humans, suggestive of limited interbreeding between these species. The transition to behavioral modernity with the development of symbolic culture, language, and specialized lithic technology happened around 50,000 years ago, according to some anthropologists, although others point to evidence that suggests that a gradual change in behavior took place over a longer time span. Homo sapiens is the only extant species of its genus, Homo. While some (extinct) Homo species might have been ancestors of Homo sapiens, many, perhaps most, were likely "cousins", having speciated away from the ancestral hominin line. There is as yet no consensus as to which of these groups should be considered a separate species and which should be a subspecies; this may be due to the dearth of fossils or to the slight differences used to classify species in the genus Homo.
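The parenthetical neuron figure above can be sanity-checked with back-of-the-envelope arithmetic. Below is a minimal sketch; the average neuron density (about 12 million neurons per cm3, a crude whole-brain figure for modern humans) and the 25-year generation time are illustrative assumptions, not values given in the text.

```python
# Rough check of the "125,000 more neurons per generation" figure.
# Neuron density and generation time are illustrative assumptions.
volume_increase_cm3 = 850 - 425    # cranial capacity roughly doubled to 850 cm3
neurons_per_cm3 = 12_000_000       # assumed crude whole-brain average
generation_years = 25              # assumed generation time
span_years = 1_000_000             # "during the next million years"

total_new_neurons = volume_increase_cm3 * neurons_per_cm3  # ~5.1 billion
generations = span_years / generation_years                # 40,000 generations
print(total_new_neurons / generations)                     # ~127,500 per generation
```

Under these assumptions the estimate lands within a few percent of the quoted 125,000, which suggests that a calculation of this general form lies behind the figure.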
The Sahara pump theory (describing an occasionally passable "wet" Sahara desert) provides one possible explanation of the early variation in the genus Homo. Based on archaeological and paleontological evidence, it has been possible to infer, to some extent, the ancient dietary practices of various Homo species and to study the role of diet in physical and behavioral evolution within Homo. Some anthropologists and archaeologists subscribe to the Toba catastrophe theory, which posits that the supereruption of Lake Toba on the island of Sumatra in Indonesia some 70,000 years ago had global consequences, killing the majority of humans and creating a population bottleneck that affected the genetic inheritance of all humans today. The genetic and archaeological evidence for this remains in question, however. H. habilis and H. gautengensis Homo habilis lived from about 2.8 to 1.4 Ma. The species evolved in South and East Africa in the Late Pliocene or Early Pleistocene, 2.5–2 Ma, when it diverged from the australopithecines. Homo habilis had smaller molars and larger brains than the australopithecines, and made tools from stone and perhaps animal bones. One of the first known hominins, it was nicknamed "handy man" by its discoverer Louis Leakey due to its association with stone tools. Some scientists have proposed moving this species out of Homo and into Australopithecus due to the morphology of its skeleton being more adapted to living in trees than to walking on two legs like Homo sapiens. In May 2010, a new species, Homo gautengensis, was discovered in South Africa. H. rudolfensis and H. georgicus These are proposed species names for fossils from about 1.9–1.6 Ma, whose relation to Homo habilis is not yet clear. Homo rudolfensis refers to a single, incomplete skull from Kenya. Scientists have suggested that this was another Homo habilis, but this has not been confirmed. Homo georgicus, from Georgia, may be an intermediate form between Homo habilis and Homo erectus, or a subspecies of Homo erectus. H. ergaster and H. erectus The first fossils of Homo erectus were discovered by Dutch physician Eugene Dubois in 1891 on the Indonesian island of Java. He originally named the material Anthropopithecus erectus (1892–1893, considered at this point as a chimpanzee-like fossil primate) and Pithecanthropus erectus (1893–1894, changing his mind based on its morphology, which he considered to be intermediate between that of humans and that of apes). Years later, in the 20th century, the German physician and paleoanthropologist Franz Weidenreich (1873–1948) compared in detail the characters of Dubois' Java Man, then named Pithecanthropus erectus, with the characters of the Peking Man, then named Sinanthropus pekinensis. Weidenreich concluded in 1940 that because of their anatomical similarity with modern humans it was necessary to gather all these specimens of Java and China in a single species of the genus Homo, the species Homo erectus. Homo erectus lived from about 1.8 Ma to about 70,000 years ago – which would indicate that they were probably wiped out by the Toba catastrophe; however, nearby Homo floresiensis survived it. The early phase of Homo erectus, from 1.8 to 1.25 Ma, is considered by some to be a separate species, Homo ergaster, or as Homo erectus ergaster, a subspecies of Homo erectus.
In Africa in the Early Pleistocene, 1.5–1 Ma, some populations of Homo habilis are thought to have evolved larger brains and to have made more elaborate stone tools; these differences and others are sufficient for anthropologists to classify them as a new species, Homo erectus, in Africa. The evolution of locking knees and the movement of the foramen magnum are thought to be likely drivers of the larger population changes. This species also may have used fire to cook meat. Richard Wrangham notes that Homo seems to have been ground dwelling, with reduced intestinal length and smaller dentition, and that it "swelled our brains to their current, horrendously fuel-inefficient size"; he suggests that control of fire and the increased nutritional value released through cooking were the key adaptations that separated Homo from tree-sleeping Australopithecines. A famous example of Homo erectus is Peking Man; others were found in Asia (notably in Indonesia), Africa, and Europe. Many paleoanthropologists now use the term Homo ergaster for the non-Asian forms of this group, and reserve Homo erectus only for those fossils that are found in Asia and meet certain skeletal and dental requirements which differ slightly from H. ergaster. H. cepranensis and H. antecessor These are proposed as species that may be intermediate between H. erectus and H. heidelbergensis. H. antecessor is known from fossils from Spain and England that are dated 1.2 Ma–500 ka. H. cepranensis refers to a single skull cap from Italy, estimated to be about 800,000 years old. H. heidelbergensis H. heidelbergensis ("Heidelberg Man") lived from about 800,000 to about 300,000 years ago. It has also been proposed as Homo sapiens heidelbergensis or Homo sapiens paleohungaricus. H. rhodesiensis and the Gawis cranium H. rhodesiensis is estimated to be 300,000–125,000 years old. Most current researchers place Rhodesian Man within the group of Homo heidelbergensis, though other designations, such as archaic Homo sapiens and Homo sapiens rhodesiensis, have been proposed. In February 2006 a fossil, the Gawis cranium, was found which might possibly be a species intermediate between H. erectus and H. sapiens or one of many evolutionary dead ends. The skull from Gawis, Ethiopia, is believed to be 500,000–250,000 years old. Only summary details are known, and the finders have not yet released a peer-reviewed study. Gawis man's facial features suggest it is either an intermediate species or an example of a "Bodo man" female. Neanderthal and Denisovan Homo neanderthalensis, alternatively designated as Homo sapiens neanderthalensis, lived in Europe and Asia from 400,000 to about 28,000 years ago. There are a number of clear anatomical differences between anatomically modern humans (AMH) and Neanderthal populations. Many of these relate to the superior adaptation to cold environments possessed by the Neanderthal populations. Their surface-to-volume ratio is an extreme version of that found amongst Inuit populations, indicating that they lost body heat less readily than AMH did. Brain endocasts show that Neanderthals also had significantly larger brains, which would seem to call the intellectual superiority of AMH populations into question. More recent research by Eiluned Pearce, Chris Stringer, and R.I.M. Dunbar, however, has shown important differences in brain architecture. For example, the larger size of both the orbital chamber and the occipital lobe suggests that Neanderthals had better visual acuity than modern humans.
This would have given them superior vision in the low light conditions of Glacial Europe. It also seems that the higher body mass of Neanderthals required a correspondingly larger brain mass for body care and control. The Neanderthal populations seem to have been physically superior to AMH populations. These differences may have been sufficient to give Neanderthal populations an environmental superiority to AMH populations from 75,000 to 45,000 years BP. Given these differences, a smaller area of the Neanderthal brain was available for social functioning. Plotting the group size possible from endocranial volume (minus the occipital lobe) suggests that AMH populations had a Dunbar's number of 144 possible relationships, while Neanderthal populations seem to have been limited to about 120 individuals. This would show up in a larger number of possible mates for AMH humans, with increased risks of inbreeding amongst Neanderthal populations. It also suggests that humans had larger trade catchment areas than Neanderthals (confirmed in the distribution of stone tools). With larger populations, social and technological innovations were easier to fix in human populations, all of which may have contributed to the fact that modern Homo sapiens replaced the Neanderthal populations by 28,000 BP. Earlier evidence from sequencing mitochondrial DNA suggested that no significant gene flow occurred between H. neanderthalensis and H. sapiens, and that the two were separate species that shared a common ancestor about 660,000 years ago. However, a sequencing of the Neanderthal genome in 2010 indicated that Neanderthals did indeed interbreed with anatomically modern humans circa 45,000 to 80,000 years ago (at the approximate time that modern humans migrated out from Africa, but before they dispersed into Europe, Asia and elsewhere). The genetic sequencing of a 40,000-year-old human skeleton from Romania showed that 11% of its genome was Neanderthal, and it was estimated that the individual had a Neanderthal ancestor 4–6 generations previously, in addition to a contribution from earlier interbreeding in the Middle East. Though this interbred Romanian population seems not to have been ancestral to modern humans, the finding indicates that interbreeding happened repeatedly. All modern non-African humans have about 1% to 4% (or, according to more recent data, about 1.5% to 2.6%) of their DNA derived from Neanderthals, a finding consistent with recent studies indicating that the divergence of some human alleles dates to one Ma, although the interpretation of these studies has been questioned. Neanderthals and Homo sapiens could have co-existed in Europe for as long as 10,000 years, during which populations of anatomically modern humans exploded, vastly outnumbering Neanderthals and possibly outcompeting them by sheer numerical strength. In 2008, archaeologists working at the site of Denisova Cave in the Altai Mountains of Siberia uncovered a small bone fragment from the fifth finger of a juvenile Denisovan. Artifacts, including a bracelet, excavated in the cave at the same level were carbon-dated to around 40,000 BP. As DNA had survived in the fossil fragment due to the cool climate of the Denisova Cave, both mtDNA and nuclear DNA were sequenced.
While the divergence point of the mtDNA was unexpectedly deep in time, the full genomic sequence suggested the Denisovans belonged to the same lineage as Neanderthals, with the two diverging shortly after their line split from the lineage that gave rise to modern humans. Modern humans are known to have overlapped with Neanderthals in Europe and the Near East for possibly more than 40,000 years, and the discovery raises the possibility that Neanderthals, Denisovans, and modern humans may have co-existed and interbred. The existence of this distant branch creates a much more complex picture of humankind during the Late Pleistocene than previously thought. Evidence has also been found that as much as 6% of the DNA of some modern Melanesians derives from Denisovans, indicating limited interbreeding in Southeast Asia. Alleles thought to have originated in Neanderthals and Denisovans have been identified at several genetic loci in the genomes of modern humans outside of Africa. HLA haplotypes from Denisovans and Neanderthals represent more than half the HLA alleles of modern Eurasians, indicating strong positive selection for these introgressed alleles. Corinne Simoneti at Vanderbilt University in Nashville and her team found, from the medical records of 28,000 people of European descent, that the presence of Neanderthal DNA segments may be associated with a higher likelihood of suffering from depression. The flow of genes from Neanderthal populations to modern humans was not all one way. Sergi Castellano of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, reported in 2016 that, while Denisovan and Neanderthal genomes are more related to each other than they are to modern humans, Siberian Neanderthal genomes show greater similarity to the modern human gene pool than European Neanderthal genomes do. The evidence suggests that Neanderthal populations interbred with modern humans possibly 100,000 years ago, probably somewhere in the Near East. |
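The admixture findings discussed in this section rest on allele-sharing statistics computed over aligned genomes. The sketch below illustrates the general idea behind one such test, the D ("ABBA–BABA") statistic used in the 2010 Neanderthal genome analysis; the toy allele strings, the function name, and the simplifications (biallelic sites, no significance testing) are invented for illustration and are not the published pipeline.

```python
# Minimal sketch of the "ABBA-BABA" D-statistic for detecting archaic admixture.
# h1, h2: alleles from two modern human populations; archaic: Neanderthal alleles;
# outgroup: chimpanzee alleles used to define the ancestral state.
def d_statistic(h1, h2, archaic, outgroup):
    abba = baba = 0
    for a1, a2, ar, out in zip(h1, h2, archaic, outgroup):
        if ar == out:                 # archaic allele looks ancestral: uninformative
            continue
        if a2 == ar and a1 == out:    # h2 shares the derived allele with the archaic
            abba += 1
        elif a1 == ar and a2 == out:  # h1 shares the derived allele with the archaic
            baba += 1
    return (abba - baba) / (abba + baba) if (abba + baba) else 0.0

# Toy data only: with no gene flow, ABBA and BABA sites are expected in equal
# numbers and D is near zero; D > 0 suggests gene flow into the h2 lineage.
print(d_statistic("AAGATT", "AGGGTT", "GGGGTA", "AAGATA"))
```

An excess of archaic allele sharing with non-Africans relative to Africans is the kind of signal behind the interbreeding conclusions described above; real analyses also estimate uncertainty, for example with a block jackknife.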
Life Evliya Çelebi was born in Constantinople in 1611 to a wealthy family from Kütahya. Both his parents were attached to the Ottoman court, his father, Derviş Mehmed Zilli, as a jeweller, and his mother as an Abkhazian relation of the grand vizier Melek Ahmed Pasha. In his book, Evliya Çelebi traces his paternal genealogy back to Ahmad Yasawi, an early Sufi mystic. Evliya Çelebi received a court education from the Imperial ulama (scholars). He may have joined the Gulshani Sufi order, as he shows an intimate knowledge of their khanqah in Cairo, and a graffito exists in which he referred to himself as Evliya-yı Gülşenî ("Evliya of the Gülşenî"). A devout Muslim opposed to fanaticism, Evliya could recite the Quran from memory and joked freely about Islam. Though employed as a cleric and entertainer at the Imperial Court of Sultan Murad IV, Evliya refused employment that would keep him from travelling. Çelebi had studied vocal and instrumental music as a pupil of a renowned Khalwati dervish by the name of 'Umar Gulshani, and his musical gifts earned him much favor at the Imperial Palace, impressing even the chief musician Amir Guna. He was also trained in the theory of music. His journal writing began in Constantinople, taking notes on buildings, markets, customs and culture, and in 1640 it was extended with accounts of his travels beyond the confines of the city. The collected notes of his travels form a ten-volume work called the Seyahatname ("Travelogue"). Departing from the Ottoman literary convention of the time, he wrote in a mixture of vernacular and high Turkish, with the effect that the Seyahatname has remained a popular and accessible reference work about life in the Ottoman Empire in the 17th century, including two chapters on musical instruments. Evliya Çelebi died in 1684; it is unclear whether he was in Istanbul or Cairo at the time. Travels Croatia During his travels in the South Slavic regions of the Ottoman Empire, Çelebi visited various regions of modern-day Croatia, including northern Dalmatia, parts of Slavonia, Međimurje and Banija. He recorded a variety of historiographic and ethnographic sources, including descriptions of first-hand encounters, accounts from third-party witnesses, and invented elements. Mostar Evliya Çelebi visited the town of Mostar, then in Ottoman Bosnia and Herzegovina. He wrote that the name Mostar means "bridge-keeper", in reference to the town's celebrated bridge, 28 meters long and 20 meters high. Çelebi wrote that it "is like a rainbow arch soaring up to the skies, extending from one cliff to the other. ...I, a poor and miserable slave of Allah, have passed through 16 countries, but I have never seen such a high bridge. It is thrown from rock to rock as high as the sky." Kosovo In 1660 Çelebi went to Kosovo, a toponym in the Serbian language, and referred to the central part of the region as Arnavud (آرناوود); he noted that in Vučitrn its inhabitants were speakers of Albanian or Turkish and that few spoke Serbian. Çelebi considered the highlands around the Tetovo, Peć and Prizren areas to be the "mountains of Arnavudluk". Çelebi referred to the "mountains of | Çelebi claimed to have encountered Native Americans as a guest in Rotterdam during his visit of 1663.
He wrote: "[they] cursed those priests, saying, 'Our world used to be peaceful, but it has been filled by greedy people, who make war every year and shorten our lives.'" While visiting Vienna in 1665–66, Çelebi noted some similarities between words in German and Persian, an early observation of the relationship between what would later be known as two Indo-European languages. Çelebi visited Crete and in book II describes the fall of Chania to the Sultan; in book VIII he recounts the Candia campaign. Azerbaijan Of oil merchants in Baku Çelebi wrote: "By Allah's decree oil bubbles up out of the ground, but in the manner of hot springs, pools of water are formed with oil congealed on the surface like cream. Merchants wade into these pools and collect the oil in ladles and fill goatskins with it, these oil merchants then sell them in different regions. Revenues from this oil trade are delivered annually directly to the Safavid Shah." Crimean Khanate Evliya Çelebi remarked on the impact of Cossack raids from Azak upon the territories of the Crimean Khanate, which destroyed trade routes and severely depopulated the region. By the time of Çelebi's arrival, many of the towns he visited had been affected by the Cossacks, and the only place he reported as safe was the Ottoman fortress at Arabat. Writing of the slave trade in the Crimea, Çelebi estimated that there were about 400,000 slaves in the Crimea but only 187,000 free Muslims. Parthenon In 1667 Çelebi marveled at the Parthenon's sculptures and described the building as "like some impregnable fortress not made by human agency." He composed a poetic supplication that the Parthenon, as "a work less of human hands than of Heaven itself, should remain standing for all time." Syria and Palestine In contrast to many European and some Jewish travelogues of Syria and Palestine in the 17th century, Çelebi wrote one of the few detailed travelogues from an Islamic point of view. Çelebi visited Palestine twice, once in 1649 and once in 1670–71. An English translation of the first part, with some passages from the second, was published in 1935–1940 by the self-taught Palestinian scholar Stephan Hanna Stephan, who worked for the Palestine Department of Antiquities. The many references to Palestine, or "Land of Palestine", are significant; Evliya notes, "All chronicles call this country Palestine." Circassia Çelebi traveled to Circassia and described it in great detail. The Seyâhatnâme Although many of the descriptions in the Seyâhatnâme were written in an exaggerated manner or were plainly inventive fiction or third-source misinterpretation, his notes remain a useful guide to the culture and lifestyles of the 17th-century Ottoman Empire. The first volume deals exclusively with Istanbul, the final volume with Egypt. Currently there is no English translation of the entire Seyahatname, although there are translations of various parts. The longest single English translation was published in 1834 by Joseph von Hammer-Purgstall, an Austrian orientalist; it may be found under the name "Evliya Efendi." Von Hammer-Purgstall's work covers the first two volumes (Istanbul and Anatolia), but its language is antiquated.
Other translations include Erich Prokosch's nearly complete translation into German of the tenth volume, the 2004 introductory work entitled The World of Evliya Çelebi: An Ottoman Mentality written by University of Chicago professor Robert Dankoff, and Dankoff and Sooyong Kim's 2010 translation of select excerpts of the ten volumes, An Ottoman Traveller: Selections from the Book of Travels of Evliya Çelebi. Evliya is noted for having collected specimens of the languages in each region he traveled in. There are some 30 Turkic dialects and languages cataloged in the Seyâhatnâme. Çelebi notes the similarities between several words in German and Persian, though he denies any common Indo-European heritage. The Seyâhatnâme also contains the first transcriptions of |
The pharaoh was key to upholding Ma'at, both by maintaining justice and harmony in human society and by sustaining the gods with temples and offerings. For these reasons, he oversaw all state religious activity. However, the pharaoh's real-life influence and prestige could differ from his portrayal in official writings and depictions, and beginning in the late New Kingdom his religious importance declined drastically. The king was also associated with many specific deities. He was identified directly with Horus, who represented kingship itself, and he was seen as the son of Ra, who ruled and regulated nature as the pharaoh ruled and regulated society. By the New Kingdom he was also associated with Amun, the supreme force in the cosmos. Upon his death, the king became fully deified. In this state, he was directly identified with Ra, and was also associated with Osiris, god of death and rebirth and the mythological father of Horus. Many mortuary temples were dedicated to the worship of deceased pharaohs as gods. Afterlife The Egyptians had elaborate beliefs about death and the afterlife. They believed that humans possessed a ka, or life-force, which left the body at the point of death. In life, the ka received its sustenance from food and drink, so it was believed that, to endure after death, the ka must continue to receive offerings of food, whose spiritual essence it could still consume. Each person also had a ba, the set of spiritual characteristics unique to each individual. Unlike the ka, the ba remained attached to the body after death. Egyptian funeral rituals were intended to release the ba from the body so that it could move freely, and to rejoin it with the ka so that it could live on as an akh. However, it was also important that the body of the deceased be preserved, as the Egyptians believed that the ba returned to its body each night to receive new life, before emerging in the morning as an akh. In early times the deceased pharaoh was believed to ascend to the sky and dwell among the stars. Over the course of the Old Kingdom (c. 2686–2181 BC), however, he came to be more closely associated with the daily rebirth of the sun god Ra and with the underworld ruler Osiris as those deities grew more important. In the fully developed afterlife beliefs of the New Kingdom, the soul had to avoid a variety of supernatural dangers in the Duat, before undergoing a final judgement, known as the "Weighing of the Heart", carried out by Osiris and by the Assessors of Maat. In this judgement, the gods compared the actions of the deceased while alive (symbolized by the heart) to the feather of Maat, to determine whether he or she had behaved in accordance with Maat. If the deceased was judged worthy, his or her ka and ba were united into an akh. Several beliefs coexisted about the akh's destination. Often the dead were said to dwell in the realm of Osiris, a lush and pleasant land in the underworld. The solar vision of the afterlife, in which the deceased soul traveled with Ra on his daily journey, was still primarily associated with royalty, but could extend to other people as well. Over the course of the Middle and New Kingdoms, the notion that the akh could also travel in the world of the living, and to some degree magically affect events there, became increasingly prevalent. Atenism During the New Kingdom the pharaoh Akhenaten abolished the official worship of other gods in favor of the sun-disk Aten.
This is often seen as the first instance of true monotheism in history, although the details of Atenist theology are still unclear and the suggestion that it was monotheistic is disputed. The exclusion of all but one god from worship was a radical departure from Egyptian tradition and some see Akhenaten as a practitioner of monolatry rather than monotheism, as he did not actively deny the existence of other gods; he simply refrained from worshipping any but the Aten. Under Akhenaten's successors Egypt reverted to its traditional religion, and Akhenaten himself came to be reviled as a heretic. Writings While the Egyptians had no unified religious scripture, they produced many religious writings of various types. Together the disparate texts provide an extensive, but still incomplete, understanding of Egyptian religious practices and beliefs. Mythology Egyptian myths were metaphorical stories intended to illustrate and explain the gods' actions and roles in nature. The details of the events they recounted could change to convey different symbolic perspectives on the mysterious divine events they described, so many myths exist in different and conflicting versions. Mythical narratives were rarely written in full, and more often texts only contain episodes from or allusions to a larger myth. Knowledge of Egyptian mythology, therefore, is derived mostly from hymns that detail the roles of specific deities, from ritual and magical texts which describe actions related to mythic events, and from funerary texts which mention the roles of many deities in the afterlife. Some information is also provided by allusions in secular texts. Finally, Greeks and Romans such as Plutarch recorded some of the extant myths late in Egyptian history. Among the significant Egyptian myths were the creation myths. According to these stories, the world emerged as a dry space in the primordial ocean of chaos. Because the sun is essential to life on earth, the first rising of Ra marked the moment of this emergence. Different forms of the myth describe the process of creation in various ways: as a transformation of the primordial god Atum into the elements that form the world, as the creative speech of the intellectual god Ptah, and as an act of the hidden power of Amun. Regardless of these variations, the act of creation represented the initial establishment of Ma'at and the pattern for the subsequent cycles of time. The most important of all Egyptian myths was the Osiris myth. It tells of the divine ruler Osiris, who was murdered by his jealous brother Set, a god often associated with chaos. Osiris's sister and wife Isis resurrected him so that he could conceive an heir, Horus. Osiris then entered the underworld and became the ruler of the dead. Once grown, Horus fought and defeated Set to become king himself. Set's association with chaos, and the identification of Osiris and Horus as the rightful rulers, provided a rationale for pharaonic succession and portrayed the pharaohs as the upholders of order. At the same time, Osiris's death and rebirth were related to the Egyptian agricultural cycle, in which crops grew in the wake of the Nile inundation, and provided a template for the resurrection of human souls after death. Another important mythic motif was the journey of Ra through the Duat each night. In the course of this journey, Ra met with Osiris, who again acted as an agent of regeneration, so that his life was renewed. He also fought each night with Apep, a serpentine god representing chaos.
The defeat of Apep and the meeting with Osiris ensured the rising of the sun the next morning, an event that represented rebirth and the victory of order over chaos. Ritual and magical texts The procedures for religious rituals were frequently written on papyri, which were used as instructions for those performing the ritual. These ritual texts were kept mainly in the temple libraries. Temples themselves are also inscribed with such texts, often accompanied by illustrations. Unlike the ritual papyri, these inscriptions were not intended as instructions, but were meant to symbolically perpetuate the rituals even if, in reality, people ceased to perform them. Magical texts likewise describe rituals, although these rituals were part of the spells used for specific goals in everyday life. Despite their mundane purpose, many of these texts also originated in temple libraries and later became disseminated among the general populace. Hymns and prayers The Egyptians produced numerous prayers and hymns, written in the form of poetry. Hymns and prayers follow a similar structure and are distinguished mainly by the purposes they serve. Hymns were written to praise particular deities. Like ritual texts, they were written on papyri and on temple walls, and they were probably recited as part of the rituals they accompany in temple inscriptions. Most are structured according to a set literary formula, designed to expound on the nature, aspects, and mythological functions of a given deity. They tend to speak more explicitly about fundamental theology than other Egyptian religious writings, and became particularly important in the New Kingdom, a period of especially active theological discourse. Prayers follow the same general pattern as hymns, but address the relevant god in a more personal way, asking for blessings, help, or forgiveness for wrongdoing. Such prayers are rare before the New Kingdom, indicating that in earlier periods such direct personal interaction with a deity was not believed possible, or at least was less likely to be expressed in writing. They are known mainly from inscriptions on statues and stelae left in sacred sites as votive offerings. Funerary texts Among the most significant and extensively preserved Egyptian writings are funerary texts designed to ensure that deceased souls reached a pleasant afterlife. The earliest of these are the Pyramid Texts. They are a loose collection of hundreds of spells inscribed on the walls of royal pyramids during the Old Kingdom, intended to magically provide pharaohs with the means to join the company of the gods in the afterlife. The spells appear in differing arrangements and combinations, and few of them appear in all of the pyramids. At the end of the Old Kingdom a new body of funerary spells, which included material from the Pyramid Texts, began appearing in tombs, inscribed primarily on coffins. This collection of writings is known as the Coffin Texts, and was not reserved for royalty, but appeared in the tombs of non-royal officials. In the New Kingdom, several new funerary texts emerged, of which the best-known is the Book of the Dead. Unlike the earlier books, it often contains extensive illustrations, or vignettes. The book was copied on papyrus and sold to commoners to be placed in their tombs. The Coffin Texts included sections with detailed descriptions of the underworld and instructions on how to overcome its hazards.
In the New Kingdom, this material gave rise to several "books of the netherworld", including the Book of Gates, the Book of Caverns, and the Amduat. Unlike the loose collections of spells, these netherworld books are structured depictions of Ra's passage through the Duat, and by analogy, the journey of the deceased person's soul through the realm of the dead. They were originally restricted to pharaonic tombs, but in the Third Intermediate Period they came to be used more widely. Practices Temples Temples existed from the beginning of Egyptian history, and at the height of the civilization they were present in most of its towns. They included both mortuary temples to serve the spirits of deceased pharaohs and temples dedicated to patron gods, although the distinction was blurred because divinity and kingship were so closely intertwined. The temples were not primarily intended as places for worship by the general populace, and the common people had a complex set of religious practices of their own. Instead, the state-run temples served as houses for the gods, in which physical images which served as their intermediaries were cared for and provided with offerings. This service was believed to be necessary to sustain the gods, so that they could in turn maintain the universe itself. Thus, temples were central to Egyptian society, and vast resources were devoted to their upkeep, including both donations from the monarchy and large estates of their own. Pharaohs often expanded them as part of their obligation to honor the gods, so that many temples grew to enormous size. However, not all gods had temples dedicated to them, as many gods who were important in official theology received only minimal worship, and many household gods were the focus of popular veneration rather than temple ritual. The earliest Egyptian temples were small, impermanent structures, but through the Old and Middle Kingdoms their designs grew more elaborate, and they were increasingly built out of stone. In the New Kingdom, a basic temple layout emerged, which had evolved from common elements in Old and Middle Kingdom temples. With variations, this plan was used for most of the temples built from then on, and most of those that survive today adhere to it. In this standard plan, the temple was built along a central processional way that led through a series of courts and halls to the sanctuary, which held a statue of the temple's god. Access to this most sacred part of the temple was restricted to the pharaoh and the highest-ranking priests. The journey from the temple entrance to the sanctuary was seen as a journey from the human world to the divine realm, a point emphasized by the complex mythological symbolism present in temple architecture. Well beyond the temple building proper was the outermost wall. Between the two lay many subsidiary buildings, including workshops and storage areas to supply the temple's needs, and the library where the temple's sacred writings and mundane records were kept, and which also served as a center of learning on a multitude of subjects. Theoretically it was the duty of the pharaoh to carry out temple rituals, as he was Egypt's official representative to the gods. In reality, ritual duties were almost always carried out by priests. During the Old and Middle Kingdoms, there was no separate class of priests; instead, many government officials served in this capacity for several months out of the year before returning to their secular duties. 
Only in the New Kingdom did professional priesthood become widespread, although most lower-ranking priests were still part-time. All were still employed by the state, and the pharaoh had final say in their appointments. However, as the wealth of the temples grew, the influence of their priesthoods increased, until it rivaled that of the pharaoh. In the political fragmentation of the Third Intermediate Period (c. 1070–664 BC), the high priests of Amun at Karnak even became the effective rulers of Upper Egypt. The temple staff also included many people other than priests, such as musicians and chanters in temple ceremonies. Outside the temple were artisans and other laborers who helped supply the temple's needs, as well as farmers who worked on temple estates. All were paid with portions of the temple's income. Large temples were therefore very important centers of economic activity, sometimes employing thousands of people. Official rituals and festivals State religious practice included both temple rituals involved in the cult of a deity, and ceremonies related to divine kingship. Among the latter were coronation ceremonies and the Sed festival, a ritual renewal of the pharaoh's strength that took place periodically during his reign. There were numerous temple rituals, including rites that took place across the country and rites limited to single temples or to the temples of a single god. Some were performed daily, while others took place annually or on rare occasions. The most common temple ritual was the morning offering ceremony, performed daily in temples across Egypt. In it, a high-ranking priest, or occasionally the pharaoh, washed, anointed, and elaborately dressed the god's statue before presenting it with offerings. Afterward, when the god had consumed the spiritual essence of the offerings, the items themselves were taken to be distributed among the priests. The less frequent temple rituals, or festivals, were still numerous, with dozens occurring every year. These festivals often entailed actions beyond simple offerings to the gods, such as reenactments of particular myths or the symbolic destruction of the forces of disorder. Most of these events were probably celebrated only by the priests and took place only inside the temple. However, the most important temple festivals, like the Opet Festival celebrated at Karnak, usually involved a procession carrying the god's image out of the sanctuary in a model barque to visit other significant sites, such as the temple of a related deity. Commoners gathered to watch the procession and sometimes received portions of the unusually large offerings given to the gods on these occasions. Animal cults At many sacred sites, the Egyptians worshipped individual animals which they believed to be manifestations of particular deities. These animals were selected based on specific sacred markings which were believed to indicate their fitness for the role. Some of these cult animals retained their positions for the rest of their lives, as with the Apis bull worshipped in Memphis as a manifestation of Ptah. Other animals were selected for much shorter periods. These cults grew more popular in later times, and many temples began raising stocks of such animals from which to choose a new divine manifestation. A separate practice developed in the Twenty-sixth Dynasty, when people began mummifying any member of a particular animal species as an offering to the god whom the species represented. 
Millions of mummified cats, birds, and other creatures were buried at temples honoring Egyptian deities. Worshippers paid the priests of a particular deity to obtain and mummify an animal associated with that deity, and the mummy was placed in a cemetery near the god's cult center. Oracles The Egyptians used oracles to ask the gods for knowledge or guidance. Egyptian oracles are known mainly from the New Kingdom and afterward, though they probably appeared much earlier. People of all classes, including the king, asked questions of oracles. The most common means of consulting an oracle was to pose a question to the divine image while it was being carried in a festival procession, and to interpret an answer from the barque's movements. Other methods included interpreting the behavior of cult animals, drawing lots, or consulting statues through which a priest apparently spoke. These means of discerning the gods' will gave great influence to the priests who spoke for the gods and interpreted their messages. Popular religion While the state cults were meant to preserve the stability of the Egyptian world, lay individuals had their own religious practices that related more directly to daily life. This popular religion left less evidence than the official cults, and because this evidence was mostly produced by the wealthiest portion of the Egyptian population, it is uncertain to what degree it reflects the practices of the populace as a whole. Popular religious practice included ceremonies marking important transitions in life. These included birth, because of the danger involved in the process, and naming, because the name was held to be a crucial part of a person's identity. The most important of these ceremonies were those surrounding death, because they ensured the soul's survival beyond it. Other religious practices sought to discern the gods' will or seek their knowledge. These included the interpretation of dreams, which could be seen as messages from the divine realm, and the consultation of oracles. People also sought to affect the gods' behavior to their own benefit through magical rituals. Individual Egyptians also prayed to gods and gave them private offerings. Evidence of this type of personal piety is sparse before the New Kingdom. This is probably due to cultural restrictions on depiction of nonroyal religious activity, which relaxed during the Middle and New Kingdoms. Personal piety became still more prominent in the late New Kingdom, when it was believed that the gods intervened directly in individual lives, punishing wrongdoers and saving the pious from disaster. Official temples were important venues for private prayer and offering, even though their central activities were closed to laypeople. Egyptians frequently donated goods to be offered to the temple deity and objects inscribed with prayers to be placed in temple courts. Often they prayed in person before temple statues or in shrines set aside for their use. Yet in addition to temples, the populace also used separate local chapels, smaller but more accessible than the formal temples. These chapels were very numerous and probably staffed by members of the community. Households, | The gods rose and declined in importance, and their intricate relationships shifted. At various times, certain gods became preeminent over the others, including the sun god Ra, the creator god Amun, and the mother goddess Isis. For a brief period, in the theology promulgated by the pharaoh Akhenaten, a single god, the Aten, replaced the traditional pantheon.
Ancient Egyptian religion and mythology left behind many writings and monuments, along with significant influences on ancient and modern cultures. Beliefs The beliefs and rituals now referred to as "ancient Egyptian religion" were integral to every aspect of Egyptian culture. The Egyptian language possessed no single term corresponding to the modern European concept of religion. Ancient Egyptian religion consisted of a vast and varying set of beliefs and practices, linked by their common focus on the interaction between the world of humans and the world of the divine. The characteristics of the gods who populated the divine realm were inextricably linked to the Egyptians' understanding of the properties of the world in which they lived. Deities The Egyptians believed that the phenomena of nature were divine forces in and of themselves. These deified forces included the elements, animal characteristics, and abstract forces. The Egyptians believed in a pantheon of gods, which were involved in all aspects of nature and human society. Their religious practices were efforts to sustain and placate these phenomena and turn them to human advantage. This polytheistic system was very complex, as some deities were believed to exist in many different manifestations, and some had multiple mythological roles. Conversely, many natural forces, such as the sun, were associated with multiple deities. The diverse pantheon ranged from gods with vital roles in the universe to minor deities or "demons" with very limited or localized functions. It could include gods adopted from foreign cultures, and sometimes humans: deceased pharaohs were believed to be divine, and occasionally, distinguished commoners such as Imhotep also became deified. The depictions of the gods in art were not meant as literal representations of how the gods might appear if they were visible, as the gods' true natures were believed to be mysterious. Instead, these depictions gave recognizable forms to the abstract deities by using symbolic imagery to indicate each god's role in nature. This iconography was not fixed, and many of the gods could be depicted in more than one form. Many gods were associated with particular regions in Egypt where their cults were most important. However, these associations changed over time, and they did not mean that the god associated with a place had originated there. For instance, the god Montu was the original patron of the city of Thebes. Over the course of the Middle Kingdom, however, he was displaced in that role by Amun, who may have arisen elsewhere. The national popularity and importance of individual gods fluctuated in a similar way. Deities had complex interrelationships, which partly reflected the interaction of the forces they represented. The Egyptians often grouped gods together to reflect these relationships. One of the more common combinations was a family triad consisting of a father, mother, and child, who were worshipped together. Some groups had wide-ranging importance. One such group, the Ennead, assembled nine deities into a theological system that was involved in the mythological areas of creation, kingship, and the afterlife. The relationships between deities could also be expressed in the process of syncretism, in which two or more different gods were linked to form a composite deity. This process was a recognition of the presence of one god "in" another when the second god took on a role belonging to the first.
These links between deities were fluid, and did not represent the permanent merging of two gods into one; therefore, some gods could develop multiple syncretic connections. Sometimes, syncretism combined deities with very similar characteristics. At other times it joined gods with very different natures, as when Amun, the god of hidden power, was linked with Ra, the god of the sun. The resulting god, Amun-Ra, thus united the power that lay behind all things with the greatest and most visible force in nature. Many deities could be given epithets that seem to indicate that they were greater than any other god, suggesting some kind of unity beyond the multitude of natural forces. This is particularly true of a few gods who, at various points, rose to supreme importance in Egyptian religion. These included the royal patron Horus, the sun god Ra, and the mother goddess Isis. During the New Kingdom (c. 1550–1070 BC) Amun held this position. The theology of the period described in particular detail Amun's presence in and rule over all things, so that he, more than any other deity, embodied the all-encompassing power of the divine. Cosmology The Egyptian conception of the universe centered on Ma'at, a word that encompasses several concepts in English, including "truth," "justice," and "order." It was the fixed, eternal order of the universe, both in the cosmos and in human society, and was often personified as a goddess. It had existed since the creation of the world, and without it the world would lose its cohesion. In Egyptian belief, Ma'at was constantly under threat from the forces of disorder, so all of society was required to maintain it. On the human level this meant that all members of society should cooperate and coexist; on the cosmic level it meant that all of the forces of nature—the gods—should continue to function in balance. This latter goal was central to Egyptian religion. The Egyptians sought to maintain Ma'at in the cosmos by sustaining the gods through offerings and by performing rituals which staved off disorder and perpetuated the cycles of nature. The most important part of the Egyptian view of the cosmos was the conception of time, which was greatly concerned with the maintenance of Ma'at. Throughout the linear passage of time, a cyclical pattern recurred, in which Ma'at was renewed by periodic events which echoed the original creation. Among these events were the annual Nile flood and the succession from one king to another, but the most important was the daily journey of the sun god Ra. When thinking of the shape of the cosmos, the Egyptians saw the earth as a flat expanse of land, personified by the god Geb, over which arched the sky goddess Nut. The two were separated by Shu, the god of air. Beneath the earth lay a parallel underworld and undersky, and beyond the skies lay the infinite expanse of Nu, the chaos that had existed before creation. The Egyptians also believed in a place called the Duat, a mysterious region associated with death and rebirth, that may have lain in the underworld or in the sky. Each day, Ra traveled over the earth across the underside of the sky, and at night he passed through the Duat to be reborn at dawn. 
In Egyptian belief, this cosmos was inhabited by three types of sentient beings: one was the gods; another was the spirits of deceased humans, who existed in the divine realm and possessed many of the gods' abilities; living humans were the third category, and the most important among them was the pharaoh, who bridged the human and divine realms. Kingship Egyptologists have long debated the degree to which the pharaoh was considered a god. It seems most likely that the Egyptians viewed royal authority itself as a divine force. Therefore, although the Egyptians recognized that the pharaoh was human and subject to human weakness, they simultaneously viewed him as a god, because the divine power of kingship was incarnated in him. He therefore acted as intermediary between Egypt's people and the gods. He was key to upholding Ma'at, both by maintaining justice and harmony in human society and by sustaining the gods with temples and offerings. For these reasons, he oversaw all state religious activity. However, the pharaoh's real-life influence and prestige could differ from his portrayal in official writings and depictions, and beginning in the late New Kingdom his religious importance declined drastically. The king was also associated with many specific deities. He was identified directly with Horus, who represented kingship itself, and he was seen as the son of Ra, who ruled and regulated nature as the pharaoh ruled and regulated society. By the New Kingdom he was also associated with Amun, the supreme force in the cosmos. Upon his death, the king became fully deified. In this state, he was directly identified with Ra, and was also associated with Osiris, god of death and rebirth and the mythological father of Horus. Many mortuary temples were dedicated to the worship of deceased pharaohs as gods. Afterlife The Egyptians had elaborate beliefs about death and the afterlife. They believed that humans possessed a ka, or life-force, which left the body at the point of death. In life, the ka received its sustenance from food and drink, so it was believed that, to endure after death, the ka must continue to receive offerings of food, whose spiritual essence it could still consume. Each person also had a ba, the set of spiritual characteristics unique to each individual. Unlike the ka, the ba remained attached to the body after death. Egyptian funeral rituals were intended to release the ba from the body so that it could move freely, and to rejoin it with the ka so that it could live on as an akh. However, it was also important that the body of the deceased be preserved, as the Egyptians believed that the ba returned to its body each night to receive new life, before emerging in the morning as an akh. In early times the deceased pharaoh was believed to ascend to the sky and dwell among the stars. Over the course of the Old Kingdom (c. 2686–2181 BC), however, he came to be more closely associated with the daily rebirth of the sun god Ra and with the underworld ruler Osiris as those deities grew more important. In the fully developed afterlife beliefs of the New Kingdom, the soul had to avoid a variety of supernatural dangers in the Duat, before undergoing a final judgement, known as the "Weighing of the Heart", carried out by Osiris and by the Assessors of Ma'at. In this judgement, the gods compared the actions of the deceased while alive (symbolized by the heart) to the feather of Ma'at, to determine whether he or she had behaved in accordance with Ma'at.
If the deceased was judged worthy, his or her ka and ba were united into an akh. Several beliefs coexisted about the akh's destination. Often the dead were said to dwell in the realm of Osiris, a lush and pleasant land in the underworld. The solar vision of the afterlife, in which the deceased soul traveled with Ra on his daily journey, was still primarily associated with royalty, but could extend to other people as well. Over the course of the Middle and New Kingdoms, the notion that the akh could also travel in the world of the living, and to some degree magically affect events there, became increasingly prevalent. Atenism During the New Kingdom the pharaoh Akhenaten abolished the official worship of other gods in favor of the sun-disk Aten. This is often seen as the first instance of true monotheism in history, although the details of Atenist theology are still unclear and the suggestion that it was monotheistic is disputed. The exclusion of all but one god from worship was a radical departure from Egyptian tradition and some see Akhenaten as a practitioner of monolatry rather than monotheism, as he did not actively deny the existence of other gods; he simply refrained from worshipping any but the Aten. Under Akhenaten's successors Egypt reverted to its traditional religion, and Akhenaten himself came to be reviled as a heretic. Writings While the Egyptians had no unified religious scripture, they produced many religious writings of various types. Together the disparate texts provide an extensive, but still incomplete, understanding of Egyptian religious practices and beliefs. Mythology Egyptian myths were metaphorical stories intended to illustrate and explain the gods' actions and roles in nature. The details of the events they recounted could change to convey different symbolic perspectives on the mysterious divine events they described, so many myths exist in different and conflicting versions. Mythical narratives were rarely written in full, and more often texts only contain episodes from or allusions to a larger myth. Knowledge of Egyptian mythology, therefore, is derived mostly from hymns that detail the roles of specific deities, from ritual and magical texts which describe actions related to mythic events, and from funerary texts which mention the roles of many deities in the afterlife. Some information is also provided by allusions in secular texts. Finally, Greeks and Romans such as Plutarch recorded some of the extant myths late in Egyptian history. Among the significant Egyptian myths were the creation myths. According to these stories, the world emerged as a dry space in the primordial ocean of chaos. Because the sun is essential to life on earth, the first rising of Ra marked the moment of this emergence. Different forms of the myth describe the process of creation in various ways: as a transformation of the primordial god Atum into the elements that form the world, as the creative speech of the intellectual god Ptah, and as an act of the hidden power of Amun. Regardless of these variations, the act of creation represented the initial establishment of Ma'at and the pattern for the subsequent cycles of time. The most important of all Egyptian myths was the Osiris myth. It tells of the divine ruler Osiris, who was murdered by his jealous brother Set, a god often associated with chaos. Osiris's sister and wife Isis resurrected him so that he could conceive an heir, Horus. Osiris then entered the underworld and became the ruler of the dead.
Once grown, Horus fought and defeated Set to become king himself. Set's association with chaos, and the identification of Osiris and Horus as the rightful rulers, provided a rationale for pharaonic succession and portrayed the pharaohs as the upholders of order. At the same time, Osiris's death and rebirth were related to the Egyptian agricultural cycle, in which crops grew in the wake of the Nile inundation, and provided a template for the resurrection of human souls after death. Another important mythic motif was the journey of Ra through the Duat each night. In the course of this journey, Ra met with Osiris, who again acted as an agent of regeneration, so that his life was renewed. He also fought each night with Apep, a serpentine god representing chaos. The defeat of Apep and the meeting with Osiris ensured the rising of the sun the next morning, an event that represented rebirth and the victory of order over chaos. Ritual and magical texts The procedures for religious rituals were frequently written on papyri, which were used as instructions for those performing the ritual. These ritual texts were kept mainly in the temple libraries. Temples themselves are also inscribed with such texts, often accompanied by illustrations. Unlike the ritual papyri, these inscriptions were not intended as instructions, but were meant to symbolically perpetuate the rituals even if, in reality, people ceased to perform them. Magical texts likewise describe rituals, although these rituals were part of the spells used for specific goals in everyday life. Despite their mundane purpose, |
of educational psychology relies heavily on quantitative methods, including testing and measurement, to enhance educational activities related to instructional design, classroom management, and assessment, which serve to facilitate learning processes in various educational settings across the lifespan. Educational psychology can in part be understood through its relationship with other disciplines. It is informed primarily by psychology, bearing a relationship to that discipline analogous to the relationship between medicine and biology. It is also informed by neuroscience. Educational psychology in turn informs a wide range of specialities within educational studies, including instructional design, educational technology, curriculum development, organizational learning, special education, classroom management, and student motivation. Educational psychology both draws from and contributes to cognitive science and the learning sciences. In universities, departments of educational psychology are usually housed within faculties of education, possibly accounting for the lack of representation of educational psychology content in introductory psychology textbooks. The field of educational psychology involves the study of memory, conceptual processes, and individual differences (via cognitive psychology) in conceptualizing new strategies for learning processes in humans. Educational psychology has been built upon theories of operant conditioning, functionalism, structuralism, constructivism, humanistic psychology, Gestalt psychology, and information processing. Educational psychology has seen rapid growth and development as a profession in the last twenty years. School psychology began in the early part of the 20th century with the concept of intelligence testing, which led to provisions for special education students who could not follow the regular classroom curriculum. However, "school psychology" itself is a fairly new profession, built upon the practices and theories of psychologists from many different fields. Educational psychologists work side by side with psychiatrists, social workers, teachers, speech and language therapists, and counselors in an attempt to understand the questions raised when combining behavioral, cognitive, and social psychology in the classroom setting. History Early years Educational psychology is a fairly new and growing field of study. Although its roots can be traced back as early as the days of Plato and Aristotle, educational psychology was long not recognized as a distinct practice. Everyday teaching and learning already required attention to individual differences, assessment, development, the nature of a subject being taught, problem-solving, and transfer of learning, but these concerns were not yet understood as the beginnings of the field of educational psychology. These topics are important to education and, as a result, they are important in understanding human cognition, learning, and social perception. Plato and Aristotle Educational psychology dates back to the time of Aristotle and Plato. Plato and Aristotle researched individual differences in the field of education, the training of the body and the cultivation of psycho-motor skills, the formation of good character, and the possibilities and limits of moral education. Some other educational topics they spoke about were the effects of music, poetry, and the other arts on the development of the individual, the role of the teacher, and the relations between teacher and student.
Plato saw knowledge acquisition as an innate ability, which evolves through experience and understanding of the world. This conception of human cognition has evolved into the continuing nature-versus-nurture argument in understanding conditioning and learning today. Aristotle observed the phenomenon of "association." His four laws of association included succession, contiguity, similarity, and contrast. His studies examined recall and facilitated learning processes. John Locke John Locke is considered one of the most influential philosophers in post-renaissance Europe, a time period that began around the mid-1600s. Locke is considered the "Father of English Psychology". One of Locke's most important works, An Essay Concerning Human Understanding, was published in 1690. In this essay, he introduced the term "tabula rasa," meaning "blank slate." Locke explained that learning was attained through experience only and that we are all born without knowledge, contrasting his view with Plato's theory of innate learning processes. Locke believed the mind was formed by experiences, not innate ideas. Locke introduced this idea as "empiricism," the understanding that knowledge is built only on experience. In the late 1600s, John Locke advanced the hypothesis that people learn primarily from external forces. He believed that the mind was like a blank tablet (tabula rasa), and that successions of simple impressions give rise to complex ideas through association and reflection. Locke is credited with establishing "empiricism" as a criterion for testing the validity of knowledge, thus providing a conceptual framework for the later development of experimental methodology in the natural and social sciences. Before 1890 Philosophers of education such as Juan Vives, Johann Pestalozzi, Friedrich Fröbel, and Johann Herbart had examined, classified and judged the methods of education centuries before the beginnings of psychology in the late 1800s. Juan Vives Juan Vives (1493–1540) proposed induction as the method of study and believed in the direct observation and investigation of the study of nature. His studies focused on humanistic learning, which opposed scholasticism and was influenced by a variety of sources including philosophy, psychology, politics, religion, and history. He was one of the first prominent thinkers to emphasize that the location of a school is important to learning. He suggested that a school should be located away from disturbing noises; the air quality should be good and there should be plenty of food for the students and teachers. Vives emphasized the importance of understanding the individual differences of students and suggested practice as an important tool for learning. Vives introduced his educational ideas in his writing, "De anima et vita," in 1538. In this publication, Vives explores moral philosophy as a setting for his educational ideals; with this, he explains that the different parts of the soul (similar to Aristotle's ideas) are each responsible for different operations, which function distinctively. The first book covers the different "souls": the vegetative soul, which is the soul of nutrition, growth, and reproduction; the sensitive soul, which involves the five external senses; and the cogitative soul, which includes internal senses and cognitive faculties. The second book involves functions of the rational soul: mind, will, and memory. Lastly, the third book explains the analysis of emotions.
Johann Pestalozzi Johann Pestalozzi (1746–1827), a Swiss educational reformer, emphasized the child rather than the content of the school. Pestalozzi fostered an educational reform backed by the idea that early education was crucial for children and could be manageable for mothers. Eventually, this experience with early education would lead to a "wholesome person characterized by morality." Pestalozzi has been acknowledged for opening institutions for education, writing books for mothers teaching home education, and elementary books for students, mostly focusing on the kindergarten level. In his later years, he published teaching manuals and methods of teaching. During the time of the Enlightenment, Pestalozzi's ideals introduced "educationalization," creating a bridge between social issues and education by introducing the idea that social issues could be solved through education. Horlacher describes the most prominent example of this during the Enlightenment as "improving agricultural production methods." Johann Herbart Johann Herbart (1776–1841) is considered the father of educational psychology. He believed that learning was influenced by interest in the subject and the teacher. He thought that teachers should consider the students' existing mental sets—what they already know—when presenting new information or material. Herbart came up with what are now known as the formal steps. The 5 steps that teachers should use are:
1. Review material that has already been learned by the student.
2. Prepare the student for new material by giving them an overview of what they are learning next.
3. Present the new material.
4. Relate the new material to the old material that has already been learned.
5. Show how the student can apply the new material, and show the material they will learn next.
1890–1920 There were three major figures in educational psychology in this period: William James, G. Stanley Hall, and John Dewey. These three men distinguished themselves in general psychology and educational psychology, which overlapped significantly at the end of the 19th century. William James (1842–1910) The period of 1890–1920 is considered the golden era of educational psychology, when aspirations of the new discipline rested on the application of the scientific methods of observation and experimentation to educational problems. From 1840 to 1920, 37 million people immigrated to the United States. This created an expansion of elementary schools and secondary schools. The increase in immigration also provided educational psychologists the opportunity to use intelligence testing to screen immigrants at Ellis Island. Darwinism influenced the beliefs of the prominent educational psychologists. Even in the earliest years of the discipline, educational psychologists recognized the limitations of this new approach; the pioneering American psychologist William James was among those who commented on them. James is the father of psychology in America, but he also made contributions to educational psychology. In his famous series of lectures Talks to Teachers on Psychology, published in 1899, James defines education as "the organization of acquired habits of conduct and tendencies to behavior". He states that teachers should "train the pupil to behavior" so that he fits into the social and physical world. Teachers should also realize the importance of habit and instinct. They should present information that is clear and interesting and relate this new information and material to things the student already knows about.
He also addresses important issues such as attention, memory, and association of ideas. Alfred Binet Alfred Binet published Mental Fatigue in 1898, in which he attempted to apply the experimental method to educational psychology. In this experimental method he advocated for two types of experiments: experiments done in the lab and experiments done in the classroom. In 1904 the French Ministry of Public Education appointed him to a commission charged with finding a way to distinguish children with developmental disabilities. Binet strongly supported special education programs because he believed that "abnormality" could be cured. The Binet-Simon test was the first intelligence test and was the first to distinguish between "normal children" and those with developmental disabilities. Binet believed that it was important to study individual differences between age groups and children of the same age. He also believed that it was important for teachers to take into account individual students' strengths as well as the needs of the classroom as a whole when teaching and creating a good learning environment. He also believed that it was important to train teachers in observation so that they would be able to see individual differences among children and adjust the curriculum to the students. Binet also emphasized that practice of material was important. In 1916 Lewis Terman revised the Binet-Simon so that the average score was always 100. The test became known as the Stanford-Binet and was one of the most widely used tests of intelligence. Terman, unlike Binet, was interested in using intelligence tests to identify gifted children who had high intelligence. In his longitudinal study of gifted children, who became known as the Termites, Terman found that gifted children become gifted adults. Edward Thorndike Edward Thorndike (1874–1949) supported the scientific movement in education. He based teaching practices on empirical evidence and measurement. Thorndike developed the theory of instrumental conditioning, or the law of effect. The law of effect states that associations are strengthened when followed by something pleasing and weakened when followed by something displeasing. He also found that learning occurs a little at a time, in increments, that learning is an automatic process, and that its principles apply to all mammals. Thorndike's research with Robert Woodworth on the theory of transfer found that learning one subject will only influence the ability to learn another subject if the subjects are similar. This discovery led to less emphasis on learning the classics, because they found that studying the classics does not contribute to overall general intelligence. Thorndike was one of the first to say that individual differences in cognitive tasks were due to how many stimulus-response patterns a person had rather than general intellectual ability. He contributed word dictionaries that were scientifically based to determine the words and definitions used. The dictionaries were the first to take into consideration the users' maturity level. He also integrated pictures and an easier pronunciation guide into each of the definitions. Thorndike contributed arithmetic books based on learning theory. He made all the problems more realistic and relevant to what was being studied, not just to improve general intelligence. He developed tests that were standardized to measure performance in school-related subjects.
His biggest contribution to testing was the CAVD intelligence test, which used a multidimensional approach to intelligence and was the first to use a ratio scale. His later work was on programmed instruction, mastery learning, and computer-based learning. John Dewey John Dewey (1859–1952) had a major influence on the development of progressive education in the United States. He believed that the classroom should prepare children to be good citizens and facilitate creative intelligence. He pushed for the creation of practical classes that could be applied outside of a school setting. He also thought that education should be student-oriented, not subject-oriented. For Dewey, education was a social experience that helped bring together generations of people. He stated that students learn by doing. He believed in an active mind that was able to be educated through observation, problem-solving, and inquiry. In his 1910 book How We Think, he emphasizes that material should be provided in a way that is stimulating and interesting to the student, since it encourages original thought and problem-solving. He also stated that material should be related to the student's own experience. Jean Piaget Jean Piaget (1896–1980) was one of the most influential researchers in the area of developmental psychology during the 20th century. He developed the theory of cognitive development, which states that intelligence develops in four different stages: the sensorimotor stage from birth to 2 years old, the preoperational stage from 2 to 7 years old, the concrete operational stage from 7 to 11 years old, and the formal operational stage from 12 years old and up. He also believed that learning was constrained by the child's cognitive development. Piaget influenced educational psychology because he was the first to believe that cognitive development was important and something that should be paid attention to in education. Most of the research on Piagetian theory was carried out by American educational psychologists. 1920–present The number of people receiving a high school and college education increased dramatically from 1920 to 1960. Because very few jobs were available to teens coming out of eighth grade, there was an increase in high school attendance in the 1930s. The progressive movement in the United States took off at this time and led to the idea of progressive education. John Flanagan, an educational psychologist, developed tests for combat trainees and instructions in combat training. In 1954 the work of Kenneth Clark and his wife Mamie on the effects of segregation on black and white children was influential in the Supreme Court case Brown v. Board of Education. From the 1960s to the present day, educational psychology has switched from a behaviorist perspective to a more cognitive-based perspective because of the influence and development of cognitive psychology at this time. Jerome Bruner Jerome Bruner is notable for integrating Piaget's cognitive approaches into educational psychology. He advocated for discovery learning, where teachers create a problem-solving environment that allows the student to question, explore, and experiment. In his book The Process of Education, Bruner stated that the structure of the material and the cognitive abilities of the person are important in learning. He emphasized the importance of the subject matter.
He also believed that how the subject was structured was important for the student's understanding of the subject and that it was the goal of the teacher to structure the subject in a way that was easy for the student to understand.
In the early 1960s, Bruner went to Africa to teach math and science to school children, which influenced his view of schooling as a cultural institution. Bruner was also influential in the development of MACOS (Man: A Course of Study), an educational program that combined anthropology and science. The program explored human evolution and social behavior. He also helped with the development of the Head Start program. He was interested in the influence of culture on education and looked at the impact of poverty on educational development. Benjamin Bloom Benjamin Bloom (1913–1999) spent over 50 years at the University of Chicago, where he worked in the department of education. He believed that all students can learn. He developed the taxonomy of educational objectives. The objectives were divided into three domains: cognitive, affective, and psychomotor. The cognitive domain deals with how we think. It is divided into categories that are on a continuum from easiest to most complex. The categories are knowledge or recall, comprehension, application, analysis, synthesis, and evaluation. The affective domain deals with emotions and has 5 categories. The categories are receiving phenomenon, responding to that phenomenon, valuing, organization, and internalizing values. The psychomotor domain deals with the development of motor skills, movement, and coordination and has 7 categories that also go from simplest to most complex. The 7 categories of the psychomotor domain are perception, set, guided response, mechanism, complex overt response, adaptation, and origination.
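Because each domain is described as an ordered continuum, the taxonomy lends itself to a simple ordered representation. The following sketch (Python; the dictionary contents come from the categories listed above, while the helper names are hypothetical conveniences, not anything from Bloom's published work) records each domain's categories in order, so a category's position doubles as its rank on the simple-to-complex continuum.

# Illustrative sketch: Bloom's taxonomy domains encoded as ordered tuples.
# Category names are taken from the text above; the helper functions are
# hypothetical, added only for illustration.

BLOOM_TAXONOMY = {
    "cognitive": (
        "knowledge", "comprehension", "application",
        "analysis", "synthesis", "evaluation",
    ),
    "affective": (
        "receiving phenomenon", "responding to phenomenon",
        "valuing", "organization", "internalizing values",
    ),
    "psychomotor": (
        "perception", "set", "guided response", "mechanism",
        "complex overt response", "adaptation", "origination",
    ),
}

def complexity_rank(domain: str, category: str) -> int:
    """Return the 1-based position of a category on its domain's
    easiest-to-most-complex continuum."""
    return BLOOM_TAXONOMY[domain].index(category) + 1

def is_more_complex(domain: str, a: str, b: str) -> bool:
    """True if category a sits later on the continuum (is more complex) than b."""
    return complexity_rank(domain, a) > complexity_rank(domain, b)

if __name__ == "__main__":
    print(complexity_rank("cognitive", "analysis"))                 # 4
    print(is_more_complex("cognitive", "evaluation", "knowledge"))  # True

Encoding the categories as ordered tuples captures only the text's claim that each domain runs from simplest to most complex; nothing else about the taxonomy is implied.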
The taxonomy provided broad educational objectives that could be used to help expand the curriculum to match the ideas in the taxonomy. The taxonomy is considered to have a greater influence internationally than in the United States. Internationally, the taxonomy is used in every aspect of education, from the training of teachers to the development of testing material. Bloom believed in communicating clear learning goals and promoting an active student. He thought that teachers should provide feedback to the students on their strengths and weaknesses. Bloom also did research on college students and their problem-solving processes. He found that students differ in their understanding of the basis of a problem and of the ideas in it, and that they also differ in their problem-solving processes, in both their approach to and their attitude toward the problem. Nathaniel Gage Nathaniel Gage (1917–2008) is an important figure in educational psychology, as his research focused on improving teaching and understanding the processes involved in teaching. He edited the book Handbook of Research on Teaching (1963), which helped develop early research in teaching and educational psychology. Gage founded the Stanford Center for Research and Development in Teaching, which contributed research on teaching as well as influencing the education of important educational psychologists. Perspectives Behavioral Applied behavior analysis, a research-based science utilizing behavioral principles of operant conditioning, is effective in a range of educational settings. For example, teachers can alter student behavior by systematically rewarding students who follow classroom rules with praise, stars, or tokens exchangeable for sundry items. Despite the demonstrated efficacy of rewards in changing behavior, their use in education has been criticized by proponents of self-determination theory, who claim that praise and other rewards undermine intrinsic motivation. There is evidence that tangible rewards decrease intrinsic motivation in specific situations, such as when the student already has a high level of intrinsic motivation to perform the goal behavior. But the results showing detrimental effects are counterbalanced by evidence that, in other situations, such as when rewards are given for attaining a gradually increasing standard of performance, rewards enhance intrinsic motivation. Many effective therapies have been based on the principles of applied behavior analysis, including pivotal response therapy, which is used to treat autism spectrum disorders. Cognitive Among current educational psychologists, the cognitive perspective is more widely held than the behavioral perspective, perhaps because it admits causally related mental constructs such as traits, beliefs, memories, motivations, and emotions. Cognitive theories claim that memory structures determine how information is perceived, processed, stored, retrieved and forgotten. Among the memory structures theorized by cognitive psychologists are the separate but linked visual and verbal systems described by Allan Paivio's dual coding theory. Educational psychologists have used dual coding theory and cognitive load theory to explain how people learn from multimedia presentations. The spaced learning effect, a cognitive phenomenon strongly supported by psychological research, has broad applicability within education.
For example, students have been found to perform better on a test of knowledge about a text passage when a second reading of the passage is delayed rather than immediate. Educational psychology research has confirmed the applicability to education of other findings from cognitive psychology, such as the benefits of using mnemonics for immediate and delayed retention of information. Problem solving, according to prominent cognitive psychologists, is fundamental to learning, and it remains an important research topic in educational psychology. A student is thought to interpret a problem by assigning it to a schema retrieved from long-term memory. A problem students run into while reading is called "activation": the student's representations of the text are present in working memory, but the student reads through the material without absorbing the information or being able to retain it. When the reader's representations are no longer held in working memory, they experience what is called "deactivation": the student has an understanding of the material and is able to retain the information. If deactivation occurs during the first reading, the reader does not need to undergo it in the second reading, and only needs to reread the text to get its "gist" and spark their memory. When a problem is assigned to the wrong schema, the student's attention is subsequently directed away from features of the problem that are inconsistent with the assigned schema. The critical step of finding a mapping between the problem and a pre-existing schema is often cited as supporting the centrality of analogical thinking to problem-solving. Cognitive view of intelligence Each person has an individual profile of characteristics, abilities, and challenges that result from predisposition, learning, and development. These manifest as individual differences in intelligence, creativity, cognitive style, motivation, and the capacity to process information, communicate, and relate to others. The most prevalent disabilities found among school-age children are attention deficit hyperactivity disorder (ADHD), learning disability, dyslexia, and speech disorder. Less common disabilities include intellectual disability, hearing impairment, cerebral palsy, epilepsy, and blindness. Although theories of intelligence have been discussed by philosophers since Plato, intelligence testing is an invention of educational psychology, and is coincident with the development of that discipline. Continuing debates about the nature of intelligence revolve around whether it can be characterized by a single factor known as general intelligence, by multiple factors (e.g., Gardner's theory of multiple intelligences), or whether it can be measured at all. In practice, standardized instruments such as the Stanford-Binet IQ test and the WISC are widely used in economically developed countries to identify children in need of individualized educational treatment. Children classified as gifted are often provided with accelerated or enriched programs. Children with identified deficits may be provided with enhanced education in specific skills such as phonological awareness. In addition to basic abilities, the individual's personality traits are also important, with people higher in conscientiousness and hope attaining superior academic achievements, even after controlling for intelligence and past performance.
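The phrase "controlling for intelligence and past performance" refers to estimating the association between a trait and achievement while the other predictors are held fixed, typically with multiple regression. The sketch below uses randomly generated data and hypothetical variable names (not data or names from any actual study) to show the idea: the trait's coefficient is estimated jointly with the covariates rather than on its own.

# Minimal sketch of "controlling for" covariates via multiple regression.
# The data are randomly generated for illustration; the variable names
# (iq, past_gpa, conscientiousness) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 500
iq = rng.normal(100, 15, n)
past_gpa = rng.normal(3.0, 0.4, n)
conscientiousness = rng.normal(0, 1, n)
# Simulated achievement depends on all three predictors plus noise.
achievement = (0.02 * iq + 0.5 * past_gpa
               + 0.3 * conscientiousness + rng.normal(0, 0.3, n))

# Design matrix with an intercept column; ordinary least squares fit.
X = np.column_stack([np.ones(n), iq, past_gpa, conscientiousness])
coef, *_ = np.linalg.lstsq(X, achievement, rcond=None)

# coef[3] is the association between conscientiousness and achievement
# with intelligence and past performance held fixed.
print(f"effect of conscientiousness, controlling for covariates: {coef[3]:.3f}")

The point of the design is that coef[3] is a partial association: it reflects the trait's relationship to achievement with the covariates held fixed, which is what "controlling for" means in the finding cited above.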
Developmental Developmental psychology, and especially the psychology of cognitive development, opens a special perspective for educational psychology. This is so because education and the psychology of cognitive development converge on a number of crucial assumptions. First, the psychology of cognitive development defines human cognitive competence at successive phases of development. Education aims to help students acquire knowledge and develop skills that are compatible with their understanding and problem-solving capabilities at different ages. Thus, knowing the students' level on a developmental sequence provides information on the kind and level of knowledge they can assimilate, which, in turn, can be used as a frame for organizing the subject matter to be taught at different school grades. This is the reason why Piaget's theory of cognitive development was so influential for education, especially mathematics and science education. In the same direction, the neo-Piagetian theories of cognitive development suggest that in addition to the concerns above, sequencing of concepts and skills in teaching must take account of the processing and working memory capacities that characterize successive age levels. Second, the psychology of cognitive development involves understanding how cognitive change takes place and recognizing the factors and processes which enable cognitive competence to develop. Education also capitalizes on cognitive change, because the construction of knowledge presupposes effective teaching methods that would move the student from a lower to a higher level of understanding. Mechanisms such as reflection on actual or mental actions vis-à-vis alternative solutions to problems, |
turn of the century. In a short time, other countries adopted the EFTPOS technology, but these systems too were limited to national borders. Each country adopted its own interbank co-operative models. In Australia, in 1984 Westpac was the first major Australian bank to implement an EFTPOS system, at BP petrol stations. The other major banks implemented EFTPOS systems during 1984, initially with petrol stations. The banks' existing debit and credit cards (but only allowed to access debit accounts) were used in the EFTPOS systems. In 1985, the State Bank of Victoria developed the capacity to host-connect individual ATMs and helped create the ATM (Financial) Network. Banks started to link their EFTPOS systems to provide access for all customers across all EFTPOS devices. Cards issued by all banks could then be used at all EFTPOS terminals nationally, but debit cards issued in other countries could not. Prior to 1986, the Australian banks organised a widespread uniform credit card, called Bankcard, which had been in existence since 1974. There was a dispute between the banks over whether Bankcard (or credit cards in general) should be permitted into the proposed EFTPOS system. At that time several banks were actively promoting MasterCard and Visa credit cards. Store cards and proprietary cards were shut out of the new system. In New Zealand, Bank of New Zealand started issuing EFTPOS debit cards in 1985, with the first merchant terminals being installed in petrol stations. First Mobile EFTPOS In 1996, mobile EFTPOS arrived, with hotels in Singapore installing systems in 1997. The first example of a pizza delivery in Singapore accepting a Visa card via cellular payment followed in 1998, in a collaboration between Signet, Visa, Citi Bank, and Dynamic Data Systems, beginning the rollout of mobile systems in Asia. By 2004, cellular-based EFTPOS infrastructure had expanded rapidly, and by 2010 cellular EFTPOS had become the standard for the global market. Since 2002, the use of EFTPOS has grown significantly, and it has become the standard payment method, displacing the use of cash. Networks facilitating money transfer and payment settlement between the consumer and the merchant grew from a small number of nationwide systems to handle the majority of payment processing transactions. US-based EFTPOS systems allow the use of either debit or credit cards. Australia In Australia, debit and credit cards are the most common non-cash payment methods at “points of sale” (POS) or via ATMs. Not all merchants provide EFTPOS facilities, but those who wish to accept EFTPOS payments must enter an agreement with one of the many (originally seven) merchant service providers, which rent an EFTPOS terminal to the merchant. The EFTPOS system in Australia is managed by Eftpos Payments Australia Ltd, which also sets the EFTPOS interchange fee. For credit cards to be accepted by a merchant, a separate agreement must be entered into with each credit card company, each of which has its own flexible merchant fee rate. Eftpos machines for merchants are provided by larger banks and specialists such as Live eftpos. The clearing arrangements for EFTPOS are managed by the Australian Payments Clearing Association (APCA). The system for ATM and EFTPOS interchanges is called the Issuers and Acquirers Community (formerly the Consumer Electronic Clearing System; CECS), also called CS3.
CECS required authorisation from the Australian Competition & Consumer Commission (ACCC), which was obtained in 2001 and reaffirmed in 2009. ATM and EFTPOS clearances are made under individual bilateral arrangements between the institutions involved. Debit cards Australian financial institutions provide their customers with a plastic card, which can be used as a debit card or as an ATM card, and sometimes as a credit card. The card merely provides the means by which a customer's linked bank or other accounts can be accessed using an EFTPOS terminal or ATM. These cards can also be used on some vending machines and other automatic payment mechanisms, such as ticket vending machines. Each Australian bank has given a different name to its debit cards, such as:
Commonwealth Bank: Keycard
Westpac: Handycard
National Australia Bank: FlexiCard
ANZ Bank: Access card
Bendigo Bank: Easy Money card
St George/Bank of Melbourne/BankSA: FreedomCard
Qudos Bank, Queensland Police Credit Union, Dnister and Indue sponsored financial institutions: Cue Card
CUA, People’s Choice Credit Union, Bank Australia, Credit Union SA, Beyond Bank, Teachers Mutual Bank, Nexus Mutual, and Cuscal sponsored financial institutions: rediCARD
Suncorp Bank: eftpos Card
Regional Australia Bank: Access card
Some banks offer alternative debit card facilities to their customers using the Visa or MasterCard clearance system. For example, St George Bank offers a Visa Debit Card, as does the National Australia Bank. The main difference from regular debit cards is that these cards can be used outside Australia wherever the respective credit card is accepted. Those merchants that enter the EFTPOS payment system must accept debit cards issued by any Australian bank, and some also accept various credit cards and other cards. Some merchants set minimum transaction amounts for EFTPOS transactions, which can be different for debit and credit card transactions. Some merchants impose a surcharge on the use of EFTPOS. These can vary between merchants and with the type of card being used; surcharges generally are not imposed on debit card transactions, and often not on MasterCard and Visa credit card transactions. A feature of a debit card is that an EFTPOS transaction will only be accepted if there is a sufficient available balance in the cheque or savings account linked to the card. Australian debit cards normally cannot be used outside Australia; they can only be used outside Australia if they carry the MasterCard/Maestro/Cirrus or Visa/Plus or other similar logos, in which case the non-Australian transaction will be processed through those transaction systems. Similarly, non-Australian debit and credit cards can only be used at Australian EFTPOS terminals or ATMs if they carry these logos or the MasterCard or Visa logos. Diners Club and/or American Express cards will be accepted only if the merchant has an agreement with those card companies, or increasingly if the merchant has modern alternative payment options available for those cards, such as through PayPal. The Discover Card is accepted in Australia as a Diners Club card. In addition, credit card companies issue prepaid cards which act like generic gift cards and are anonymous and not linked to any bank account. These cards are accepted by merchants who accept credit cards and are processed through the EFTPOS terminal in the same way as credit cards.
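The acceptance rules just described (a sufficient available balance in the linked account, optional merchant minimums, and optional surcharges) can be summarized in a short sketch. The model below is a simplified toy illustration only: the class and function names are hypothetical, and a real authorisation flow involves the card scheme, issuer, and acquirer, none of which is modeled here.

# Toy model of the EFTPOS debit acceptance rules described above:
# a transaction succeeds only if the linked account holds sufficient
# available funds for the purchase plus any surcharge. The merchant
# minimum and surcharge are hypothetical parameters, not values from
# any real scheme.
from dataclasses import dataclass

@dataclass
class Merchant:
    minimum_purchase: float = 0.0   # some merchants set a minimum
    debit_surcharge: float = 0.0    # generally zero for debit cards

@dataclass
class LinkedAccount:
    available_balance: float        # cheque or savings account balance

def authorise_debit(account: LinkedAccount, merchant: Merchant,
                    purchase: float) -> bool:
    """Return True if the debit transaction would be accepted."""
    if purchase < merchant.minimum_purchase:
        return False                        # below the merchant's minimum
    total = purchase + merchant.debit_surcharge
    if account.available_balance < total:
        return False                        # insufficient available funds
    account.available_balance -= total      # debit the linked account
    return True

acct = LinkedAccount(available_balance=50.00)
shop = Merchant(minimum_purchase=10.00)
print(authorise_debit(acct, shop, 45.00))   # True: funds cover the purchase
print(authorise_debit(acct, shop, 45.00))   # False: only 5.00 remains

Note that, as described above, the check is against the available balance of the linked account, so in this model a debit transaction can never overdraw the account.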
In Australia, this facility (known as debit card cashback in many other countries) is known as "cash out". For the merchant, cash out is a way of reducing their net cash takings, saving on the banking of cash.
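The fee mechanics behind this, elaborated in the next paragraph, can be made concrete with a small worked sketch in Python. The figures are purely hypothetical placeholders (a flat per-transaction debit fee and a percentage credit commission), not actual Australian merchant rates.

```python
# Illustrative sketch of why cash out costs the merchant nothing extra on
# debit EFTPOS but would on credit. The fee figures below are hypothetical
# placeholders, not actual Australian interchange or merchant rates.

DEBIT_FLAT_FEE = 0.30      # assumed flat fee per debit EFTPOS transaction
CREDIT_COMMISSION = 0.015  # assumed 1.5% commission on credit transaction value

def merchant_fee(amount: float, card_type: str) -> float:
    """Return the merchant's fee for a transaction of the given value."""
    if card_type == "debit":
        return DEBIT_FLAT_FEE              # charged per transaction, not per dollar
    elif card_type == "credit":
        return amount * CREDIT_COMMISSION  # fee scales with transaction value
    raise ValueError(f"unknown card type: {card_type}")

# A $20 purchase versus the same purchase with $50 cash out added:
print(merchant_fee(20.00, "debit"), merchant_fee(70.00, "debit"))    # same flat fee either way
print(merchant_fee(20.00, "credit"), merchant_fee(70.00, "credit"))  # commission grows with the total
```

Because the assumed debit fee is charged per transaction rather than on the transaction value, adding cash out to a debit purchase leaves the merchant's fee unchanged, whereas a value-based commission would rise with the higher total.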
There is no additional cost to the merchant in providing cash out, because banks charge the merchant a debit card transaction fee per EFTPOS transaction, not on the transaction value. Cash out is a facility provided by the merchant, and not the bank, so the merchant can limit or vary how much cash can be withdrawn at a time, or suspend the facility at any time. When available, cash out is convenient for the customer, who can bypass a visit to a bank branch or ATM. Cash out is also cheaper for the customer, since only one bank transaction is involved. For people in some remote areas, cash out may be the only way they can withdraw cash from their personal accounts. However, most merchants who provide the facility set a relatively low limit on cash out, generally $50, and some also charge for the service. Some merchants in Australia only allow cash out with the purchase of goods; other merchants allow cash out whether or not customers buy any goods. Cash out is not available on credit card sales, because on credit card transactions the merchant is charged a percentage commission based on the transaction value, and also because cash withdrawals are treated differently from purchase transactions by the credit card company. (However, though inconsistent with the merchant's agreement with each credit card company, a merchant may treat a cash withdrawal as part of an ordinary credit card sale.) Cardholder verification EFTPOS transactions involving a debit, credit or prepaid card are primarily authenticated via the entry of a personal identification number (PIN) at the point of sale. Historically, these transactions were authenticated by the merchant using the cardholder's signature, as signed on the receipt. However, merchants had become increasingly lax in enforcing this verification, resulting in an increase in fraud. Australian banks have since deployed chip-and-PIN technology using the global EMV card standard; as of 1 August 2014, Australian merchants no longer accept signatures on transactions by domestic customers at point of sale terminals. As a further security measure, if a user enters an incorrect PIN three times, the card may be locked out of EFTPOS and require reactivation over the phone or at a bank branch. In the case of an ATM, the card will not be returned, and the cardholder will need to visit the branch to retrieve the card, or request a new card to be issued. All debit cards now have a magnetic stripe on which the card's service codes, consisting of three-digit values, are encoded. These codes are used to convey instructions to merchant terminals on how
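The PIN verification and three-strike lockout behaviour described under "Cardholder verification" can be summarised in a minimal sketch. This is an illustrative model only, assuming a simple counter-based lockout; real card schemes enforce the retry counter on the chip or at the issuer host with additional safeguards.

```python
# Minimal sketch of the three-strike PIN lockout described above.
# Hypothetical logic for illustration only, not the actual mechanism
# used by any bank, terminal, or card scheme.

MAX_ATTEMPTS = 3

class Card:
    def __init__(self, pin: str):
        self._pin = pin
        self.failed_attempts = 0
        self.locked = False   # once locked, reactivation is needed (phone or branch)

    def verify_pin(self, entered: str) -> bool:
        if self.locked:
            return False      # locked out of EFTPOS until reactivated
        if entered == self._pin:
            self.failed_attempts = 0   # a correct PIN resets the counter
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_ATTEMPTS:
            self.locked = True
        return False

card = Card(pin="4921")
for guess in ("1111", "2222", "3333"):
    card.verify_pin(guess)
print(card.locked)  # True: three incorrect entries lock the card
```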
4:16. Mention in Colossians 4:16 Paul, the earliest known Christian author, wrote several letters (or epistles) in Greek to various churches. Paul apparently dictated all his epistles through a secretary (or amanuensis), but wrote the final few paragraphs of each letter by his own hand. Many survived and are included in the New Testament, but others are known to have been lost. The Epistle to the Colossians states: "After this letter has been read to you, see that it is also read in the church of the Laodiceans and that you in turn read the letter from Laodicea." The last words can be interpreted as "letter written to the Laodiceans", but also as "letter written from Laodicea". The New American Standard Bible (NASB) translates this verse in the latter manner, and translations in other languages, such as the Dutch Statenvertaling, do likewise: "When this letter is read among you, have it also read in the church of the Laodiceans; and you, for your part read my letter (that is coming) from Laodicea." Those who read here "letter written to the Laodiceans" presume that, at the time the Epistle to the Colossians was written, Paul had also written an epistle to the community of believers in Laodicea. Another possibility exists: that no such epistle to the Laodiceans was ever created, despite the verse in Colossians. Colossians is considered a deutero-Pauline work by many scholars: a number of differences in writing style and assumed situation distinguish it from Paul's earlier letters. While this is generally explained by Christians as the result of increasing use of a secretary (amanuensis) later in Paul's life, a more skeptical approach is to suggest that Colossians was not written by Paul at all. If Colossians was forged in Paul's name, then the reference to the other letter to the Laodiceans could merely be a verisimilitude: a small detail to make the letter seem real. The letter would never have been sent to Laodicea at all. Marcion's Epistle to the Laodiceans According to the Muratorian fragment, Marcion's canon contained an epistle called the Epistle to the Laodiceans, commonly thought to have been a forgery written to conform to his own point of view. This is not at all clear, however, since none of the text survives. It is not known what this letter might have contained. Most scholars believe it was explicitly Marcionist in its outlook, hence its condemnation. Others believe it to be the Epistle to the Ephesians; the proto-Orthodox author Tertullian accuses Marcion's group of using an edited version of Ephesians which was referred to as the Epistle to the Laodiceans. Latin Vulgate Epistle to the Laodiceans A claimed Epistle to the Laodiceans from Paul exists in Latin. It is quite short, at only 20 verses. It is mentioned by various writers from the fourth century onwards, notably by Pope Gregory the Great; the oldest known copy of this epistle is in the Fulda manuscript written for Victor of Capua in 546. Possibly due to Gregory's endorsement of it, many Western Latin Bibles contained this epistle for centuries afterward. It also featured in early English Bibles: John Wycliffe included Paul's letter to the Laodiceans in his Bible translation from Latin into English. However, the epistle is essentially unknown in Eastern Christianity, where it was never used or published; additionally, there is no evidence of a Greek text, the language Paul wrote in. The text was almost unanimously considered pseudepigraphal when the Christian Biblical canon was decided upon, and it does not appear in any Greek copies of the Bible at all, nor is it known in Syriac or other versions. Jerome, who wrote the Latin Vulgate translation, wrote in the 4th century that "it is rejected by everyone".
Scholars are unanimous in concurring with Jerome and believing this epistle forged long after Paul's death. Additionally, the epistle is derided for having no theological content. It includes Pauline greetings and farewells, but does not appear to have any substantive content: it does not address any problem or advocate for any position. Professors Rudolf Knopf (1874-1920) and Gustav Kruger (1862-1940) wrote that the epistle is "nothing other than a worthless patching together of [canonical] Pauline passages and phrases, mainly from the Epistle to the Philippians." M. R. James wrote that "It is not easy to imagine a more feebly constructed cento of Pauline phrases." Wilhelm Schneemelcher was "amazed that it ever found a place in Bible manuscripts." However, it evidently gained a certain degree of respect, having appeared in over 100 surviving early Latin copies of the Bible. According to |
the greatest Jewish population in Nazi-controlled Europe ("The evacuation of Jews to Poland", Jewish Virtual Library, retrieved 28 July 2009). On top of that, the new death camps outside Germany's prewar borders could be kept secret from the German civil populace. Pure extermination camps During the initial phase of the Final Solution, gas vans producing poisonous exhaust fumes were developed in the occupied Soviet Union (USSR) and at the Chełmno extermination camp in occupied Poland, before being used elsewhere. The killing method was based on experience gained by the SS during the secretive Aktion T4 programme of involuntary euthanasia. There were two types of death chambers operating during the Holocaust. Unlike at Auschwitz, where the cyanide-based Zyklon B was used to exterminate trainloads of prisoners under the guise of "relocation", the camps at Treblinka, Bełżec, and Sobibór, built during Operation Reinhard (October 1941 to November 1943), used lethal exhaust fumes produced by large internal combustion engines. The three killing centres of Einsatz Reinhard were constructed predominantly for the extermination of Poland's Jews trapped in the Nazi ghettos. At first, the victims' bodies were buried with the use of crawler excavators, but they were later exhumed and incinerated in open-air pyres to hide the evidence of genocide, in what became known as Sonderaktion 1005. The six camps considered to be purely for extermination were the Chełmno, Bełżec, Sobibór, Treblinka, Majdanek, and Auschwitz (also called Auschwitz-Birkenau) extermination camps. Whereas the Auschwitz II (Auschwitz-Birkenau) and Majdanek camps were parts of a labor camp complex, the Chełmno and Operation Reinhard death camps (that is, Bełżec, Sobibór, and Treblinka) were built exclusively for the rapid extermination of entire communities of people (primarily Jews) within hours of their arrival. All were constructed near branch lines that linked to the Polish railway system, with staff members transferring between locations. These camps had an almost identical design: they were several hundred metres in length and width, and were equipped with only minimal staff housing and support installations, none of it meant for the victims crammed into the railway transports. The Nazis deceived the victims upon their arrival, telling them that they were at a temporary transit stop and would soon continue to German Arbeitslagers (work camps) farther to the east. Selected able-bodied prisoners delivered to the death camps were not immediately killed, but instead were pressed into labor units called Sonderkommandos to help with the extermination process by removing corpses from the gas chambers and burning them. Concentration and extermination camps At the camps of Operation Reinhard, including Bełżec, Sobibór, and Treblinka, trainloads of prisoners were murdered immediately after arrival in gas chambers designed exclusively for that purpose. The mass killing facilities were developed at about the same time inside the Auschwitz II-Birkenau subcamp of a forced labour complex and at the Majdanek concentration camp. In most other camps prisoners were selected for slave labor first; they were kept alive on starvation rations and made available to work as required. Auschwitz, Majdanek, and Jasenovac were retrofitted with Zyklon B gas chambers and crematoria buildings as time went on, remaining operational until war's end in 1945.
Extermination procedure Heinrich Himmler visited the outskirts of Minsk in 1941 to witness a mass shooting. He was told by the commanding officer there that the shootings were proving psychologically damaging to those being asked to pull the triggers. Thus Himmler knew another method of mass killing was required. After the war, the diary of the Auschwitz Commandant, Rudolf Höss, revealed that, psychologically "unable to endure wading through blood any longer", many Einsatzkommandos (the killers) either went mad or killed themselves. The Nazis had first used gassing with carbon monoxide cylinders to murder 70,000 disabled people in Germany in what they called a 'euthanasia programme', to disguise that mass murder was taking place. Despite the lethal effects of carbon monoxide, this was seen as unsuitable for use in the East due to the cost of transporting the carbon monoxide in cylinders. Each extermination camp operated differently, yet each had designs for quick and efficient industrialized killing. While Höss was away on an official journey in late August 1941, his deputy, Karl Fritzsch, tested out an idea. At Auschwitz, clothes infested with lice were treated with crystallised prussic acid. The crystals were made to order by the IG Farben chemicals company under the brand name Zyklon B. Once released from their container, Zyklon B crystals released a lethal cyanide gas into the air. Fritzsch tried out the effect of Zyklon B on Soviet POWs, who were locked up in cells in the basement of the bunker for this experiment. On his return, Höss was briefed on and impressed by the results, and this became the camp's strategy for extermination, as it was also to be at Majdanek. Besides gassing, the camp guards continued killing prisoners via mass shooting, starvation, torture, etc. Gassings SS Obersturmführer Kurt Gerstein of the Institute for Hygiene of the Waffen-SS told a Swedish diplomat during the war about life in a death camp. He recounted that on 19 August 1942, he arrived at Belzec extermination camp (which was equipped with carbon monoxide gas chambers) and was shown the unloading of 45 train cars filled with 6,700 Jews, many already dead. The rest were marched naked to the gas chambers. Auschwitz Camp Commandant Rudolf Höss reported that the first time Zyklon B pellets were used on the Jews, many suspected they were to be killed, despite having been deceived into believing they were to be deloused and then returned to the camp. As a result, the Nazis identified and isolated "difficult individuals" who might alert the prisoners, and removed them from the mass, lest they incite revolt among the deceived majority of prisoners en route to the gas chambers. The "difficult" prisoners were led to a site out of view to be killed off discreetly. A prisoner unit, the Sonderkommando (Special Detachment), assisted in the processes of extermination; they encouraged the Jews to undress without a hint of what was about to happen. They accompanied them into the gas chambers, outfitted to appear as shower rooms (with nonworking water nozzles and tile walls), and remained with the victims until just before the chamber door closed. To psychologically maintain the "calming effect" of the delousing deception, an SS man stood at the door until the end. The Sonderkommando talked to the victims about life in the camp to pacify the suspicious ones, and hurried them inside; to that effect, they also assisted the aged and the very young in undressing.
To further persuade the prisoners that nothing harmful was happening, the Sonderkommando deceived them with small talk about friends or relations who had arrived in earlier transports. Many young mothers hid their infants beneath their piled clothes, fearing that the delousing "disinfectant" might harm them. Camp Commandant Höss reported that the "men of the Special Detachment were particularly on the look-out for this", and encouraged the women to take their children into the "shower room". Likewise, the Sonderkommando comforted older children who might cry "because of the strangeness of being undressed in this fashion". Yet not every prisoner was deceived by such psychological tactics; Commandant Höss spoke of Jews "who either guessed, or knew, what awaited them, nevertheless ... [they] found the courage to joke with the children, to encourage them, despite the mortal terror visible in their own eyes". Some women would suddenly "give the most terrible shrieks while undressing, or tear their hair, or scream like maniacs"; the Sonderkommando immediately took them away for execution by shooting. In such circumstances, others, meaning to save themselves at the gas chamber's threshold, betrayed the identities and "revealed the addresses of those members of their race still in hiding". Once the door of the filled gas chamber was sealed, pellets of Zyklon B were dropped through special holes in the roof. Regulations required that the Camp Commandant supervise the preparations, the gassing (through a peephole), and the looting of the corpses afterwards. Commandant Höss reported that the gassed victims "showed no signs of convulsion"; the Auschwitz camp physicians attributed that to the "paralyzing effect on the lungs" of the Zyklon B gas, which killed before the victim began suffering convulsions. As a matter of political training, some high-ranked Nazi Party leaders and SS officers were sent to Auschwitz-Birkenau to witness the gassings. Höss reported that "all were deeply impressed by what they saw ... [yet some] ... who had previously spoken most loudly, about the necessity for this extermination, fell silent once they had actually seen the 'final solution of the Jewish problem'." Auschwitz Camp Commandant Rudolf Höss justified the extermination by explaining the need for "the iron determination with which we must carry out Hitler's orders", yet saw that even "[Adolf] Eichmann, who certainly [was] tough enough, had no wish to change places with me". Corpse disposal After the gassings, the Sonderkommando removed the corpses from the gas chambers, then extracted any gold teeth. Initially, the victims were buried in mass graves, but were later cremated during Sonderaktion 1005 in all camps of Operation Reinhard. The Sonderkommando was responsible for burning the corpses in the pits, stoking the fires, draining surplus body fat and turning over the "mountain of burning corpses ... so that the draft might fan the flames", wrote Commandant Höss in the memoir he composed while in Polish custody. He was impressed by the diligence of prisoners from the so-called Special Detachment who carried out their duties despite being well aware that they, too, would meet exactly the same fate in the end. At the Lazaret killing station they held the sick so they would never see the gun while being shot. They did it "in such a matter-of-course manner that they might, themselves, have been the exterminators", wrote Höss.
He further said that the men ate and smoked "even when engaged in the grisly job of burning corpses which had been lying for some time in mass graves." They occasionally encountered the corpse of a relative, or saw them entering the gas chambers. According to Höss, they were obviously shaken by this, but "it never led to any incident." He mentioned the case of a Sonderkommando who found the body of his wife, yet continued to drag corpses along "as though nothing had happened." At Auschwitz, the corpses were incinerated in crematoria and the ashes either buried, scattered, or dumped in the river. At Sobibór, Treblinka, Bełżec, and Chełmno, the corpses were incinerated on pyres. The efficiency of industrialised murder at Auschwitz-Birkenau led to the construction of three buildings with crematoria designed by specialists from the firm J.A. Topf & Söhne. They burned bodies 24 hours a day, and yet the death rate was at times so high that corpses also needed to be burned in open-air pits. Victims The estimated total number of people who were murdered in the six Nazi extermination camps is 2.7 million, according to the United States Holocaust Memorial Museum. Dismantling and attempted concealment The Nazis attempted to either partially or completely dismantle the extermination camps in order to hide any evidence that people had been murdered there. This was an attempt to conceal not only the extermination process but also the buried remains. As a result of the secretive Sonderaktion 1005, the camps were dismantled by commandos of condemned prisoners, their records were destroyed, and the mass graves were dug up. Some extermination camps that remained uncleared of evidence were liberated by Soviet troops, who followed different standards of documentation and openness than the Western allies did. Nonetheless, Majdanek was captured nearly intact due to the rapid advance of the Soviet Red Army during Operation Bagration. Commemoration In the post-war period the government of the People's Republic of Poland created monuments at the extermination camp sites. These early monuments mentioned no ethnic, religious, or national particulars of the Nazi victims. The extermination camp sites have been accessible to everyone in recent decades. They are popular destinations for visitors from all over the world, especially the most infamous Nazi death camp, Auschwitz, near the town of Oświęcim. In the early 1990s, Jewish Holocaust organisations debated with Polish Catholic groups about "What religious symbols of martyrdom are appropriate as memorials in a Nazi death camp such as Auschwitz?" The Jews opposed the placement of Christian memorials such as the Auschwitz cross near Auschwitz I, where mostly Poles were killed. The Jewish victims of the Holocaust were mostly killed at Auschwitz II-Birkenau. The March of the Living has been organized in Poland annually since 1988. Marchers come from countries as diverse as Estonia, New Zealand, Panama, and Turkey. The camps and Holocaust denial Holocaust deniers or negationists are people and organizations who assert that the Holocaust did not occur, or that it did not occur in the historically recognized manner and extent. Holocaust deniers claim that the extermination camps were actually transit camps from which Jews were deported farther east. However, these theories are disproven by surviving German documents, which show that Jews were sent to the camps to be murdered.
Extermination camp research is difficult because of extensive attempts by the SS and Nazi regime to conceal the existence of the extermination camps. The existence of the extermination camps is |
and enterprises
Enterprise GP Holdings, an energy holding company
Enterprise plc, a UK civil engineering and maintenance company
Enterprise Products, a natural gas and crude oil pipeline company
Enterprise Records, a record label
Enterprise Rent-A-Car, a car rental provider
Enterprise Holdings, the parent company
General
Business, economic activity done by a businessperson
Big business, a larger corporation commonly called "enterprise" in business jargon (excluding small and medium-sized businesses)
Company, a legal entity practicing a business activity
Enterprises in the Soviet Union, the equivalent of "company" in the former socialist state
Enterprise architecture, a strategic management discipline within an organization
Enterprise Capital Fund, a type of venture capital in the UK
Entrepreneurship, the practice of starting new organizations, particularly new businesses
Social enterprise, an organization that applies commercial strategies to improve well-being
Organizations
Enterprize Canada, a student-run entrepreneurial competition and conference
Enterprise for High School Students, a non-profit organization
Computing
Enterprise (computer), a 1980s UK 8-bit home computer, also known as Flan and Elan
Enterprise resource planning (ERP), integrated management of core business processes or the technology supporting such management
Enterprise software, business-oriented computer applications
Enterprise storage, for large businesses
Windows Enterprise, an edition of several versions of Microsoft Windows
Entertainment and media
Television
Star Trek: Enterprise, also Enterprise, a 2001-2005 television series
Enterprise (soundtrack), a 2002 soundtrack album from the first season of the series
Enterprice (British TV series), a 2018 television series
Fictional entities
Star Trek vessels
Starship Enterprise, a list, timeline and brief description of starships in the fictional history of Star Trek
Enterprise (NX-01), the main setting of Star Trek: Enterprise
USS Enterprise (NCC-1701), from the original Star Trek television series and several Star Trek films
USS Enterprise (NCC-1701-A), from the fifth and sixth Star Trek films
USS Enterprise (NCC-1701-B), from the film Star Trek: Generations
USS Enterprise (NCC-1701-C), from the Star Trek: The Next Generation episode "Yesterday's Enterprise"
USS Enterprise (NCC-1701-D), from Star Trek: The Next Generation
USS Enterprise (NCC-1701-E), from the films Star Trek: First Contact, Star Trek: Insurrection, and Star Trek: Nemesis
USS Enterprise (NCC-1701-F), a non-player ship in the Star Trek Online video game
USS Enterprise (NCC-1701-J), from the Star Trek: Enterprise episode "Azati Prime"
Other fictional vessels
Enterprise, an airship in the game Final Fantasy IV
Enterprise, an airship in the game Final Fantasy XIV
Enterprise, the title ship in the 1959–1961 television series Riverboat
Enterprise, a starship in H. Beam Piper's novel Space Viking
Newspapers
Australia
The Enterprise (Katoomba), in Katoomba, New South Wales (1913)
United States
Bastrop Daily Enterprise, in Louisiana
Chico Enterprise-Record, in Chico, California
High Point Enterprise, in North Carolina
The Beaumont Enterprise, in Texas
The Enterprise (Brockton), in Brockton, Massachusetts
Malheur Enterprise, in Malheur County, Oregon
The Press-Enterprise, in Riverside, California (1885–1983)
Geographic locations
Canada
Enterprise, Northwest Territories, a hamlet
Enterprise, a hamlet in the township of
Virginia, an unincorporated community
Enterprise (community), Wisconsin, an unincorporated community
Enterprise, Wisconsin, a town
Enterprise Rancheria in California
Enterprise Township, Michigan
Enterprise Township, Jackson County, Minnesota
Enterprise Township, Valley County, Nebraska
Other places
Enterprise, Guyana, a village
Enterprise, Trinidad and Tobago
Enterprise Rupes, an escarpment on Mercury
Vehicles
Aircraft
Enterprise (balloon), a gas-inflated aerial reconnaissance balloon used by the Union Army during the American Civil War
Enterprise, a US Navy L-class blimp
Enterprise, an Armstrong Whitworth Ensign plane
Spacecraft
IXS Enterprise, a NASA conceptual interstellar ship
Space Shuttle Enterprise
VSS Enterprise, the inaugural vessel of the Virgin Galactic suborbital tourism fleet
Trains
Enterprise (train service), between Belfast and Dublin
Enterprise (Via Rail train), a former service between Montreal and Toronto
Watercraft
United States Navy ships (chronological)
, a Continental Navy sloop captured from the British, burned to prevent recapture in 1777
, a schooner that fired the first shots in the First Barbary War
, a schooner, stationed primarily in South America to patrol and protect commerce
, a steam-powered sloop-of-war used for surveying, patrolling, and training until 1909
, a motorboat (1917–1919) used in World War I as a non-commissioned section patrol craft
(1936), a Yorktown-class aircraft carrier, and the most decorated U.S. Navy ship
(1961), the world's first nuclear-powered aircraft carrier
(2027), a planned Gerald R. Ford-class aircraft carrier
Other ships
, a J-class yacht involved in the America's Cup
, a schooner, previously a privateer, used by the Continental Navy in Chesapeake Bay until 1777
, a steamboat that delivered supplies and troops during the Battle of New Orleans and was the first to ascend the Mississippi and Ohio rivers
, an Australian topsail schooner used for the founding of Melbourne, Australia
, a replica of the 1829 Enterprize
, forced by weather into Bermuda in 1835, resulting in the liberation of most of the slaves on board
, a Canadian 19th-century steamer on the Columbia and Fraser rivers
, a sidewheeler, built in San Francisco, operated on the Fraser River system from 1861 to her loss in 1885
, a Canadian pioneer sternwheeler on the upper Fraser River
PS Enterprise, an 1878 Australian paddle steamer on the Murray, Darling and Murrumbidgee Rivers
, an American steamboat that operated on the Willamette River in Oregon
Enterprise, a sailing ship caught in a storm off St. Ives, Cornwall in 1903
, any of several ships of the British Royal Navy
London Enterprise (1950), an oil tanker built for London & Overseas Freighters, scrapped
London Enterprise (1983), a Panamax oil tanker built for London & Overseas Freighters
(1944–1952), an American cargo ship originally commissioned as the SS Cape Kumukaki (C1-B)
, see Boats of the Mackenzie River watershed
Ship classes
, a class of sailboat
Discoverer Enterprise, the namesake of a class of deepwater drillships
Other uses
Enterprise (apple)
Enterprise (horse), a British Thoroughbred racehorse
Enterprise (ride), an amusement ride
Enterprise Cup, an annual rugby union competition in Kenya, Tanzania and Uganda
Enterprise number, a former type of US business phone number which
most special manner without having the faculty to do so (can. 2338 § 1),
giving aid to vitandus excommunicates in their delict, or, as a cleric, knowingly and freely celebrating the Divine Office together with them (can. 2338 § 2),
taking a bishop, abbot or prelate nullius, or one of the highest superiors of papally recognized orders, to secular court with respect to the exercise of his office (can. 2341),
violating the enclosure of a convent (can. 2342),
taking part in a duel, in any function (can. 2351),
trying to enter a (civil) marriage as a cleric from the rank of subdeacon and above, or as a monk or nun with solemn vows (can. 2388 § 2),
committing simony (can. 2392),
intercepting, destroying, hiding or substantially changing a document directed to the diocesan curia, as a vicar capitular or canon of the chapter (during a vacancy only?) (can. 2405),
reserved to the diocesan bishop:
trying to enter marriage in front of a non-Catholic minister, or in the explicit or implicit understanding that one or more of the children are to be baptized outside the Catholic Church, or knowingly giving one's children to be baptized by non-Catholics (can. 2319),
making false relics or knowingly selling them, distributing them and exposing them to public veneration (can. 2326),
physical violence against a cleric, monk or nun (can. 2343 § 4),
marrying, as a monk or nun in simple vows (can. 2388 § 2),
reserved to no one:
writing, editing or printing, without due permission, editions of the Sacred Scripture or of annotations or commentaries thereon (can. 2318 § 2),
giving an ecclesial burial to the unfaithful, apostates, heretics, schismatics or any excommunicates or interdicted people (can. 2339),
forcing a man to enter the clerical state or a woman to enter religion or to take simple or solemn vows (can. 2352),
for the victim of solicitation, knowing failure to denounce the perpetrator (not to be absolved before the obligation is fulfilled, can. 2368 § 2).
Eastern Orthodox Church In the Eastern Orthodox Church, excommunication is the exclusion of a member from the Eucharist. It is not expulsion from the churches. This can happen for such reasons as not having confessed within that year; excommunication can also be imposed as part of a penitential period. It is generally done with the goal of restoring the member to full communion. Before an excommunication of significant duration is imposed, the bishop is usually consulted. The Eastern Orthodox do have a means of expulsion, by pronouncing anathema, but this is reserved only for acts of serious and unrepentant heresy. As an example of this, the Second Council of Constantinople in 553, in its eleventh capitula, declared: "If anyone does not anathematize Arius, Eunomius, Macedonius, Apollinaris, Nestorius, Eutyches and Origen, as well as their heretical books, and also all other heretics who have already been condemned and anathematized by the holy, catholic and apostolic church and by the four holy synods which have already been mentioned, and also all those who have thought or now think in the same way as the aforesaid heretics and who persist in their error even to death: let him be anathema." Lutheran churches Although Lutheranism technically has an excommunication process, some denominations and congregations do not use it. In the Smalcald Articles, Luther differentiates between the "great" and the "small" excommunication. The "small" excommunication simply bars an individual from the Lord's Supper and "other fellowship in the church", while the "great" excommunication excluded a person from both the church and the political community, a penalty Luther considered to be outside the authority of the church and reserved for civil leaders. A modern Lutheran practice is laid out in the Lutheran Church-Missouri Synod's 1986 explanation to the Small Catechism, defined beginning at Questions No. 277-284, in "The Office of Keys." They endeavor to follow the process that Jesus laid out in the 18th chapter of the Gospel of Matthew. According to the explanation, excommunication requires:
The confrontation between the subject and the individual against whom he has sinned.
If this fails, the confrontation between the subject, the harmed individual, and two or three witnesses to such acts of sin.
The informing of the pastor of the subject's congregation.
A confrontation between the pastor and the subject.
Many Lutheran denominations operate under the premise that the entire congregation (as opposed to the pastor alone) must take appropriate steps for excommunication, and there are not always precise rules, to the point where individual congregations often set out rules for excommunicating laymen (as opposed to clergy). For example, churches may sometimes require that a vote be taken at Sunday services; some congregations require that this vote be unanimous. Membership in the Church of Sweden, the only permitted religious organisation in the country (with a few exceptions, such as the Great Synagogue of Stockholm and the embassies), and attendance at Sunday services were mandatory (Konventikelplakatet) for all Swedes from 1600 to 1858. However, one cannot be excluded from a state institution that is by law mandatory for all. The topic also has some interesting aspects in the excommunication of the parliament of Sweden under Catholic canon law, and in the interdict (a Catholic "church strike"), as background to the Reformation in Sweden. In the Church of Sweden and the Church of Denmark, excommunicated individuals are turned out from their parish in front of their congregation. They are not forbidden, however, to attend church and participate in other acts of devotion, although they are to sit in a place appointed by the priest (which was at a distance from others). The Lutheran process, though rarely used, has created unusual situations in recent years due to its somewhat democratic excommunication process. One example was an effort to get serial killer Dennis Rader excommunicated from his denomination (the Evangelical Lutheran Church in America) by individuals who tried to "lobby" Rader's fellow church members into voting for his excommunication. Anglican Communion Church of England The Church of England does not have any specific canons regarding how or why a member can be excommunicated, although it has a canon according to which ecclesiastical burial may be refused to someone "declared excommunicate for some grievous and notorious crime and no man to testify to his repentance". The punishment of imprisonment for being excommunicated from the Church of England was removed from English law in 1963.
Episcopal Church of the United States of America The ECUSA is in the Anglican Communion, and shares many canons with the Church of England which would determine its policy on excommunication. Reformed churches In the Reformed Churches, excommunication has generally been seen as the culmination of church discipline, which is one of the three marks of the Church. The Westminster Confession of Faith sees it as the third step after "admonition" and "suspension from the sacrament of the Lord's Supper for a season." Yet, John Calvin argues in his Institutes of the Christian Religion that church censures do not "consign those who are excommunicated to perpetual ruin and damnation," but are designed to induce repentance, reconciliation and restoration to communion. Calvin notes, "though ecclesiastical discipline does not allow us to be on familiar and intimate terms with excommunicated persons, still we ought to strive by all possible means to bring them to a better mind, and recover them to the fellowship and unity of the Church." At least one modern Reformed theologian argues that excommunication is not the final step in the disciplinary process. Jay E. Adams argues that in excommunication, the offender is still seen as a brother, but in the final step they become "as the heathen and tax collector" (Matthew 18:17). Adams writes, "Nowhere in the Bible is excommunication (removal from the fellowship of the Lord's Table, according to Adams) equated with what happens in step 5; rather, step 5 is called "removing from the midst, handing over to Satan," and the like." Former Princeton president and theologian, Jonathan Edwards, addresses the notion of excommunication as "removal from the fellowship of the Lord's Table" in his treatise entitled "The Nature and End of Excommunication". Edwards argues that "Particularly, we are forbidden such a degree of associating ourselves with (excommunicants), as there is in making them our guests at our tables, or in being their guests at their tables; as is manifest in the text, where we are commanded to have no company with them, no not to eat". Edwards insists, "That this respects not eating with them at the Lord's supper, but a common eating, is evident by the words, that the eating here forbidden, is one of the lowest degrees of keeping company, which are forbidden. Keep no company with such a one, saith the apostle, no not to eat – as much as to say, no not in so low a degree as to eat with him. But eating with him at the Lord's supper, is the very highest degree of visible Christian communion. Who can suppose that the apostle meant this: Take heed and have no company with a man, no not so much as in the highest degree of communion that you can have? Besides, the apostle mentions this eating as a way of keeping company which, however, they might hold with the heathen. He tells them, not to keep company with fornicators. Then he informs them, he means not with fornicators of this world, that is, the heathens; but, saith he, "if any man that is called a brother be a fornicator, etc. with such a one keep no company, no not to eat." This makes it most apparent, that the apostle doth not mean eating at the Lord's table; for so, they might not keep company with the heathens, any more than with an excommunicated person." Methodism In the Methodist Episcopal Church, individuals were able to be excommunicated following "trial before a jury of his peers, and after having had the privilege of an appeal to a higher court." 
Nevertheless, an excommunication could be lifted after sufficient penance. John Wesley, the founder of the Methodist Churches, excommunicated sixty-four members from the Newcastle Methodist society alone for the following reasons: The Allegheny Wesleyan Methodist Connection, in its 2014 Discipline, includes "homosexuality, lesbianism, bi-sexuality, bestiality, incest, fornication, adultery, and any attempt to alter one's gender by surgery", as well as remarriage after divorce, among its excommunicable offences. The Evangelical Wesleyan Church, in its 2015 Discipline, states that "Any member of our church who is accused of neglect of the means of grace or other duties required by the Word of God, the indulgence of sinful tempers, words or actions, the sowing of dissension, or any other violation of the order and discipline of the church, may, after proper labor and admonition, be censured, placed on probation, or expelled by the official board of the circuit of which he is a member. If he request a trial, however, within thirty days of the final action of the official board, it shall be granted." Anabaptist tradition When believers were baptized and taken into membership of the church by Anabaptists, it was not only done as a symbol of the cleansing of sin but was also a public commitment to identify with Jesus Christ and to conform one's life to the teaching and example of Jesus as understood by the church. Practically, that meant membership in the church entailed a commitment to try to live according to norms of Christian behavior widely held by the Anabaptist tradition. In the ideal, discipline in the Anabaptist tradition requires the church to confront a notoriously erring and unrepentant church member, first directly in a very small circle and, if no resolution is forthcoming, expanding the circle in steps eventually to include the entire church congregation. If the errant member persists without repentance and rejects even the admonition of the congregation, that person is excommunicated, or excluded from church membership. Exclusion from the church is recognition by the congregation that this person has separated himself or herself from the church by way of his or her visible and unrepentant sin. This is done ostensibly as a final resort to protect the integrity of the church. When this occurs, the church is expected to continue to pray for the excluded member and to seek to restore him or her to its fellowship. There was originally no inherent expectation to shun (completely sever all ties with) an excluded member; however, differences regarding this very issue led to early schisms between different Anabaptist leaders and those who followed them. Amish Jakob Ammann, founder of the Amish sect, believed that the shunning of those under the ban should be systematically practiced among the Swiss Anabaptists as it was in the north and as was outlined in the Dordrecht Confession. Ammann's uncompromising zeal regarding this practice was one of the main disputes that led to the schism between the Anabaptist groups that became the Amish and those that eventually would be called Mennonite. Recently, more moderate Amish groups have become less strict in their application of excommunication as a discipline. This has led to splits in several communities, an example of which is the Swartzentruber Amish, who split from the main body of Old Order Amish because of the latter's practice of lifting the ban from members who later join other churches.
In general, the Amish will excommunicate baptized members for failure to abide by their Ordnung (church rules), as it is interpreted by the local bishop, if certain repeat violations of the Ordnung occur. Excommunication among the Old Order Amish results in shunning, or the Meidung, the severity of which depends on many factors, such as the family, the local community, and the type of Amish. Some Amish communities cease shunning after one year if the person joins another church later on, especially if it is another Mennonite church. At the most severe, other members of the congregation are prohibited almost all contact with an excommunicated member, including social and business ties between the excommunicant and the congregation, sometimes even marital contact between the excommunicant and a spouse remaining in the congregation or family contact between adult children and parents. Mennonites In the Mennonite Church excommunication is rare and is carried out only after many attempts at reconciliation, and on someone who is flagrantly and repeatedly violating standards of behavior that the church expects. Occasionally excommunication is also carried out against those who repeatedly question the church's behavior or who genuinely differ with the church's theology, although in almost all cases the dissenter will leave the church before any discipline need be invoked. In either case, the church will attempt reconciliation with the member in private, first one on one and then with a few church leaders. Only if the church's reconciliation attempts are unsuccessful does the congregation formally revoke church membership. Members of the church generally pray for the excluded member. Some regional conferences (the Mennonite counterpart to dioceses of other denominations) of the Mennonite Church have acted to expel member congregations that have openly welcomed non-celibate homosexuals as members. This internal conflict regarding homosexuality has also been an issue for other moderate denominations, such as the American Baptists and Methodists. The practice among Old Order Mennonite congregations is more along the lines of the Amish, though typically less severe. An Old Order member who disobeys the Ordnung (church regulations) must meet with the leaders of the church. If a church regulation is broken a second time, there is a confession in the church. Those who refuse to confess are excommunicated. However, upon later confession, the church member will be reinstated. An excommunicated member is placed under the ban. This person is not banned from eating with their own family. Excommunicated persons can still have business dealings with church members and can maintain marital relations with a marriage partner who remains a church member. Hutterites The separatist, communal, and self-contained Hutterites also use excommunication and shunning as a form of church discipline. Since Hutterites have communal ownership of goods, the effects of excommunication could impose a hardship upon the excluded member and family, leaving them without employment income and material assets such as a home. However, arrangements are often made to provide material benefits to a family leaving the colony, such as an automobile and some transition funds for rent. One Hutterite colony in Manitoba (Canada) had a protracted dispute when leaders attempted to force the departure of a group that had been excommunicated but would not leave.
About a dozen lawsuits in both Canada and the United States were filed between the various Hutterite factions and colonies concerning excommunication, shunning, the legitimacy of leadership, communal property rights, and the fair division of communal property when factions have separated. Baptists For Baptists, excommunication is used as a last resort by denominations and churches for members who do not want to repent of beliefs or behavior at odds with the confession of faith of the community. The vote of community members, however, can restore a person who has repented. The Church of Jesus Christ of Latter-day Saints The Church of Jesus Christ of Latter-day Saints (LDS Church) practices excommunication as a penalty for those who commit serious sins, i.e., actions that significantly impair the name or moral influence of the church or pose a threat to other people. In 2020, the church ceased using the term "excommunication" and instead refers to "withdrawal of membership". According to the church's General Handbook for leaders, the purposes of withdrawing membership or imposing membership restrictions are (1) to help protect others; (2) to help a person access the redeeming power of Jesus Christ through repentance; and (3) to protect the integrity of the Church. The origins of LDS disciplinary procedures and excommunications are traced to a revelation Joseph Smith dictated on 9 February 1831, later canonized as Doctrine and Covenants, section 42, and codified in the General Handbook. The LDS Church also practices the lesser sanctions of private counsel and caution and of informal and formal membership restrictions. (Informal membership restriction was formerly known as "probation"; formal membership restriction was formerly known as "disfellowshipment".) Formal membership restrictions are used for serious sins that do not rise to the level of membership withdrawal. Formal membership restriction denies some privileges but does not include a loss of church membership. Once formal membership restrictions are in place, persons may not take the sacrament or enter church temples, nor may they offer public prayers or sermons. Such persons may continue to attend most church functions and are allowed to wear temple garments, pay tithes and offerings, and participate in church classes if their conduct is orderly. Formal membership restriction typically lasts for one year, after which one may be reinstated as a member in good standing. In the more grievous or recalcitrant cases, withdrawal of membership becomes a disciplinary option. Such an action is generally reserved for what are seen as the most serious sins, including committing serious crimes such as murder, child abuse, and incest; committing adultery; involvement in or teaching of polygamy; involvement in homosexual conduct; apostasy; participation in an abortion; teaching false doctrine; or openly criticizing church leaders. The General Handbook states that formally joining another church constitutes apostasy and is worthy of membership withdrawal; however, merely attending another church does not constitute apostasy. A withdrawal of membership can occur only after a formal church membership council. Formerly called a "disciplinary council" or a "church court", the councils were renamed to avoid focusing on guilt and instead to emphasize the availability of repentance. The decision to withdraw the membership of a Melchizedek priesthood holder is generally the province of the leadership of a stake.
In such a disciplinary council, the stake presidency and, sometimes in more difficult cases, the stake high council attend. If the high council is involved, the twelve members of the high council are split in half: one group represents the member in question and is charged with "prevent[ing] insult or injustice"; the other group represents the church as a whole. The member under scrutiny is invited to attend the membership proceedings, but the council can go forward without him. In making a decision, the leaders of the high council consult with the stake presidency, but the decision about which discipline is necessary is the stake president's alone. It is possible to appeal a decision of a stake membership council to the church's First Presidency. For females and for male members not initiated into the Melchizedek priesthood, a ward membership council is held. In such cases, a bishop determines whether withdrawal of membership or a lesser sanction is warranted. He does this in consultation with his two counselors, with the bishop making the final determination after prayer. The decision of a ward membership council can be appealed to the stake president. The following list of variables serves as a general set of guidelines for when membership withdrawal or lesser action may be warranted, beginning with those more likely to result in severe sanction:
Violation of covenants: Covenants are made in conjunction with specific ordinances in the LDS Church. Violated covenants that might result in excommunication are usually those surrounding marriage covenants, temple covenants, and priesthood covenants.
Position of trust or authority: The person's position in the church hierarchy factors into the decision. It is considered more serious when a sin is committed by an area seventy; a stake, mission, or temple president; a bishop; a patriarch; or a full-time missionary.
Repetition: Repetition of a sin is more serious than a single instance.
Magnitude: How often, how many individuals were impacted, and who is aware of the sin factor into the decision.
Age, maturity, and experience: Those who are young in age, or immature in their understanding, are typically afforded leniency.
Interests of the innocent: How the discipline will impact innocent family members may be considered.
Time between transgression and confession: If the sin was committed in the distant past, and there has not been repetition, leniency may be considered.
Voluntary confession: If a person voluntarily confesses the sin, leniency is suggested.
Evidence of repentance: Sorrow for sin, and demonstrated commitment to repentance, as well as faith in
Many Lutheran denominations operate under the premise that the entire congregation (as opposed to the pastor alone) must take appropriate steps for excommunication, and there are not always precise rules, to the point where individual congregations often set out their own rules for excommunicating laymen (as opposed to clergy). For example, churches may sometimes require that a vote be taken at Sunday services; some congregations require that this vote be unanimous. Membership in the Church of Sweden, with mandatory attendance at Sunday services, was required of all Swedes from 1600 to 1858 (under the Konventikelplakatet), the Church being the only permitted religious organisation in the country, with a few exceptions such as the Great Synagogue of Stockholm and foreign embassies; one could not, however, be excluded from a state institution that was by law mandatory for all. As background to the Reformation in Sweden, the Catholic Church had earlier excommunicated the Swedish parliament under canon law and placed the country under interdict. In the Church of Sweden and the Church of Denmark, excommunicated individuals are turned out from their parish in front of their congregation. They are not forbidden, however, to attend church and participate in other acts of devotion, although they are to sit in a place appointed by the priest, at a distance from the others. The Lutheran process, though rarely used, has created unusual situations in recent years due to its somewhat democratic nature. One example was an effort to get serial killer Dennis Rader excommunicated from his denomination (the Evangelical Lutheran Church in America) by individuals who tried to "lobby" Rader's fellow church members into voting for his excommunication. Anglican Communion Church of England The Church of England does not have any specific canons regarding how or why a member can be excommunicated, although it has a canon according to which ecclesiastical burial may be refused to someone "declared excommunicate for some grievous and notorious crime and no man to testify to his repentance". The punishment of imprisonment for being excommunicated from the Church of England was removed from English law in 1963. Episcopal Church of the United States of America The ECUSA is in the Anglican Communion and shares many canons with the Church of England, which determine its policy on excommunication. Reformed churches In the Reformed Churches, excommunication has generally been seen as the culmination of church discipline, which is one of the three marks of the Church. The Westminster Confession of Faith sees it as the third step after "admonition" and "suspension from the sacrament of the Lord's Supper for a season." Yet John Calvin argues in his Institutes of the Christian Religion that church censures do not "consign those who are excommunicated to perpetual ruin and damnation," but are designed to induce repentance, reconciliation and restoration to communion. Calvin notes, "though ecclesiastical discipline does not allow us to be on familiar and intimate terms with excommunicated persons, still we ought to strive by all possible means to bring them to a better mind, and recover them to the fellowship and unity of the Church." At least one modern Reformed theologian argues that excommunication is not the final step in the disciplinary process. Jay E. Adams argues that in excommunication, the offender is still seen as a brother, but in the final step they become "as the heathen and tax collector" (Matthew 18:17).
Adams writes, "Nowhere in the Bible is excommunication (removal from the fellowship of the Lord's Table, according to Adams) equated with what happens in step 5; rather, step 5 is called "removing from the midst, handing over to Satan," and the like." Former Princeton president and theologian, Jonathan Edwards, addresses the notion of excommunication as "removal from the fellowship of the Lord's Table" in his treatise entitled "The Nature and End of Excommunication". Edwards argues that "Particularly, we are forbidden such a degree of associating ourselves with (excommunicants), as there is in making them our guests at our tables, or in being their guests at their tables; as is manifest in the text, where we are commanded to have no company with them, no not to eat". Edwards insists, "That this respects not eating with them at the Lord's supper, but a common eating, is evident by the words, that the eating here forbidden, is one of the lowest degrees of keeping company, which are forbidden. Keep no company with such a one, saith the apostle, no not to eat – as much as to say, no not in so low a degree as to eat with him. But eating with him at the Lord's supper, is the very highest degree of visible Christian communion. Who can suppose that the apostle meant this: Take heed and have no company with a man, no not so much as in the highest degree of communion that you can have? Besides, the apostle mentions this eating as a way of keeping company which, however, they might hold with the heathen. He tells them, not to keep company with fornicators. Then he informs them, he means not with fornicators of this world, that is, the heathens; but, saith he, "if any man that is called a brother be a fornicator, etc. with such a one keep no company, no not to eat." This makes it most apparent, that the apostle doth not mean eating at the Lord's table; for so, they might not keep company with the heathens, any more than with an excommunicated person." Methodism In the Methodist Episcopal Church, individuals were able to be excommunicated following "trial before a jury of his peers, and after having had the privilege of an appeal to a higher court." Nevertheless, an excommunication could be lifted after sufficient penance. John Wesley, the founder of the Methodist Churches, excommunicated sixty-four members from the Newcastle Methodist society alone for the following reasons: The Allegheny Wesleyan Methodist Connection, in its 2014 Discipline, includes "homosexuality, lesbianism, bi-sexuality, bestiality, incest, fornication, adultery, and any attempt to alter one’s gender by surgery", as well as remarriage after divorce among its excommunicable offences. The Evangelical Wesleyan Church, in its 2015 Discipline, states that "Any member of our church who is accused of neglect of the means of grace or other duties required by the Word of God, the indulgence of sinful tempers, words or actions, the sowing of dissension, or any other violation of the order and discipline of the church, may, after proper labor and admonition, be censured, placed on probation, or expelled by the official board of the circuit of which he is a member. If he request a trial, however, within thirty dates of the final action of the official board, it shall be granted." 
Anabaptist tradition When believers were baptized and taken into membership of the church by Anabaptists, it was done not only as a symbol of cleansing of sin but also as a public commitment to identify with Jesus Christ and to conform one's life to the teaching and example of Jesus as understood by the church. Practically, that meant membership in the church entailed a commitment to try to live according to norms of Christian behavior widely held by the Anabaptist tradition. In the ideal, discipline in the Anabaptist tradition requires the church to confront a notoriously erring and unrepentant church member, first directly in a very small circle and, if no resolution is forthcoming, expanding the circle in steps eventually to include the entire church congregation. If the errant member persists without repentance and rejects even the admonition of the congregation, that person is excommunicated, or excluded from church membership. Exclusion from the church is recognition by the congregation that this person has separated himself or herself from the church by way of his or her visible and unrepentant sin. This is done ostensibly as a final resort to protect the integrity of the church. When this occurs, the church is expected to continue to pray for the excluded member and to seek to restore him or her to its fellowship. There was originally no inherent expectation to shun (completely sever all ties with) an excluded member; however, differences regarding this very issue led to early schisms between different Anabaptist leaders and those who followed them. Amish Jakob Ammann, founder of the Amish sect, believed that the shunning of those under the ban should be systematically practiced among the Swiss Anabaptists as it was in the north and as was outlined in the Dordrecht Confession. Ammann's uncompromising zeal regarding this practice was one of the main disputes that led to the schism between the Anabaptist groups that became the Amish and those that eventually would be called Mennonite. Recently, more moderate Amish groups have become less strict in their application of excommunication as a discipline. This has led to splits in several communities, an example of which is the Swartzentruber Amish, who split from the main body of Old Order Amish because of the latter's practice of lifting the ban from members who later join other churches. In general, the Amish will excommunicate baptized members for repeated violations of their Ordnung (church rules) as interpreted by the local bishop. Excommunication among the Old Order Amish results in shunning, or Meidung, the severity of which depends on many factors, such as the family, the local community, and the type of Amish community. Some Amish communities cease shunning after one year if the person joins another church later on, especially if it is another Mennonite church. At its most severe, other members of the congregation are prohibited almost all contact with an excommunicated member, including social and business ties between the excommunicant and the congregation, sometimes even marital contact between the excommunicant and a spouse remaining in the congregation, or family contact between adult children and parents. Mennonites In the Mennonite Church, excommunication is rare and is carried out only after many attempts at reconciliation, and on someone who is flagrantly and repeatedly violating standards of behavior that the church expects.
Occasionally, excommunication is also carried out against those who repeatedly question the church's behavior or who genuinely differ with the church's theology, although in almost all cases the dissenter will leave the church before any discipline need be invoked. In either case, the church will attempt reconciliation with the member in private, first one on one and then with a few church leaders. Only if the church's reconciliation attempts are unsuccessful does the congregation formally revoke church membership. Members of the church generally pray for the excluded member. Some regional conferences (the Mennonite counterpart to dioceses of other denominations) of the Mennonite Church have acted to expel member congregations that have openly welcomed non-celibate homosexuals as members. This internal conflict regarding homosexuality has also been an issue for other moderate denominations, such as the American Baptists and Methodists. The practice among Old Order Mennonite congregations is more along the lines of the Amish, though typically less severe. An Old Order member who disobeys the Ordnung (church regulations) must meet with the leaders of the church. If a church regulation is broken a second time, there is a confession in the church. Those who refuse to confess are excommunicated. However, upon later confession, the church member will be reinstated. An excommunicated member is placed under the ban. This person is not banned from eating with their own family. Excommunicated persons can still have business dealings with church members and can maintain marital relations with a marriage partner who remains a church member. Hutterites The separatist, communal, and self-contained Hutterites also use excommunication and shunning as a form of church discipline. Since Hutterites have communal ownership of goods, the effects of excommunication could impose a hardship upon the excluded member and family, leaving them without employment income and material assets such as a home. However, arrangements are often made to provide material benefits to the family leaving the colony, such as an automobile and some transition funds for rent. One Hutterite colony in Manitoba (Canada) had a protracted dispute when leaders attempted to force the departure of a group that had been excommunicated but would not leave. About a dozen lawsuits were filed in both Canada and the United States between the various Hutterite factions and colonies concerning excommunication, shunning, the legitimacy of leadership, communal property rights, and the fair division of communal property when factions have separated. Baptists For Baptists, excommunication is used as a last resort by denominations and churches for members who do not want to repent of beliefs or behavior at odds with the confession of faith of the community. A vote of community members, however, can restore a person who has repented. The Church of Jesus Christ of Latter-day Saints The Church of Jesus Christ of Latter-day Saints (LDS Church) practices excommunication as a penalty for those who commit serious sins, i.e., actions that significantly impair the name or moral influence of the church or pose a threat to other people. In 2020, the church ceased using the term "excommunication" and instead refers to "withdrawal of membership".
According to the church's General Handbook, the purposes of withdrawing membership or imposing membership restrictions are (1) to help protect others; (2) to help a person access the redeeming power of Jesus Christ through repentance; and (3) to protect the integrity of the Church. The origins of LDS disciplinary procedures and excommunications are traced to a revelation Joseph Smith dictated on 9 February 1831, later canonized as Doctrine and Covenants, section 42, and codified in the General Handbook. The LDS Church also practices the lesser sanctions of private counsel and caution and informal and formal membership restrictions. (Informal membership restriction was formerly known as "probation"; formal membership restriction was formerly known as "disfellowshipment".) Formal membership restrictions are used for serious sins that do not rise to the level of membership withdrawal. Formal membership restriction denies some privileges but does not include a loss of church membership. Once formal membership restrictions are in place, persons may not take the sacrament or enter church temples, nor may they offer public prayers or sermons. Such persons may continue to attend most church functions and are allowed to wear temple garments, pay tithes and offerings, and participate in church classes if their conduct is orderly. Formal membership restrictions typically last for one year, after which one may be reinstated as a member in good standing. In the more grievous or recalcitrant cases, withdrawal of membership becomes a disciplinary option. Such an action is generally reserved for what are seen as the most serious sins, including committing serious crimes such as murder, child abuse, and incest; committing adultery; involvement in or teaching of polygamy; involvement in homosexual conduct; apostasy; participation in an abortion; teaching false doctrine; or openly criticizing church leaders. The General Handbook states that formally joining another church constitutes apostasy and is worthy of membership withdrawal; however, merely attending another church does not constitute apostasy. A withdrawal of membership can occur only after a formal church membership council. Formerly called a "disciplinary council" or a "church court", the councils were renamed to avoid focusing on guilt and instead to emphasize the availability of repentance. The decision to withdraw the membership of a Melchizedek priesthood holder is generally the province of the leadership of a stake. In such a disciplinary council, the stake presidency and, sometimes in more difficult cases, the stake high council attend. If the high council is involved, the twelve members of the high council are split in half: one group represents the member in question and is charged with "prevent[ing] insult or injustice"; the other group represents the church as a whole. The member under scrutiny is invited to attend the membership proceedings, but the council can go forward without him. In making a decision, the leaders of the high council consult with the stake presidency, but the decision about which discipline is necessary is the stake president's alone. It is possible to appeal a decision of a stake membership council to the church's First Presidency. For females and for male members not initiated into the Melchizedek priesthood, a ward membership council is held. In such cases, a bishop determines whether withdrawal of membership or a lesser sanction is warranted.
He does this in consultation with his two counselors, with the bishop making the final determination after prayer. The decision of a ward membership council can be appealed to the stake president. The following list of variables serves as a general set of guidelines for when membership withdrawal or lesser action may be warranted, beginning with those more likely to result in severe sanction: Violation of covenants: Covenants are made in conjunction with specific ordinances in the LDS Church. Violated covenants that might result in excommunication are usually those surrounding marriage covenants, temple covenants, and priesthood covenants. Position of trust or authority: The person's position in the church hierarchy factors into the decision. It is considered more serious when a sin is committed by an area seventy; a stake, mission, or temple president; a bishop; a patriarch; or a full-time missionary. Repetition: Repetition of a sin is more serious than a single instance. Magnitude: How often the sin occurred, how many individuals were impacted, and who is aware of the sin factor into the decision. Age, maturity, and experience: Those who are young in age, or immature in their understanding, are typically afforded leniency. Interests of the innocent: How the discipline will impact innocent family members may be considered. Time between transgression and confession: If the sin was committed in the distant past, and there has not been repetition, leniency may be considered. Voluntary confession: If a person voluntarily confesses the sin, leniency is suggested. Evidence of repentance: Sorrow for sin, and demonstrated commitment to repentance, as well as faith in Jesus Christ, may be considered.
While a secondary cell is being charged, the levels of one or more chemicals build up; while it is discharging, they reduce, and the resulting electromotive force can do work. A common secondary cell is the lead-acid battery, most familiar as the car battery. Lead-acid batteries are used for their high voltage, low cost, reliability, and long lifetime. In an automobile, the lead-acid battery starts the engine and operates the car's electrical accessories when the engine is not running; once the car is running, the alternator recharges the battery. Fuel cell A fuel cell is an electrochemical cell that converts the chemical energy from a fuel into electricity through an electrochemical reaction of hydrogen fuel with oxygen or another oxidizing agent. Fuel cells are different from batteries in requiring a continuous source of fuel and oxygen (usually from air) to sustain the chemical reaction, whereas in a battery the chemical energy comes from chemicals already present in the battery. Fuel cells can produce electricity continuously for as long as fuel and oxygen are supplied. The first fuel cells were invented in 1838. The first commercial use of fuel cells came more than a century later in NASA space programmes to generate power for satellites and space capsules. Since then, fuel cells have been used in many other applications. Fuel cells are used for primary and backup power for commercial, industrial and residential buildings and in remote or inaccessible areas. They are also used to power fuel cell vehicles, including forklifts, automobiles, buses, boats, motorcycles and submarines. There are many types of fuel cells, but they all consist of an anode, a cathode, and an electrolyte that allows positively charged hydrogen ions (protons) to move between the two sides of the fuel cell. At the anode, a catalyst causes the fuel to undergo oxidation reactions that generate protons (positively charged hydrogen ions) and electrons. The protons flow from the anode to the cathode through the electrolyte after the reaction. At the same time, electrons are drawn from the anode to the cathode through an external circuit, producing direct current electricity. At the cathode, another catalyst causes hydrogen ions, electrons, and oxygen to react, forming water. Fuel cells are classified by the type of electrolyte they use and by their startup time, which ranges from 1 second for proton-exchange membrane fuel cells (PEM fuel cells, or PEMFC) to 10 minutes for solid oxide fuel cells (SOFC). A related technology is flow batteries, in which the fuel can be regenerated by recharging. Individual fuel cells produce relatively small electrical potentials, about 0.7 volts, so cells are "stacked", or placed in series, to create sufficient voltage to meet an application's requirements. In addition to electricity, fuel cells produce water, heat and, depending on the fuel source, very small amounts of nitrogen dioxide and other emissions. The energy efficiency of a fuel cell is generally between 40 and 60%; however, if waste heat is captured in a cogeneration scheme, efficiencies of up to 85% can be obtained. The fuel cell market is growing, and in 2013 Pike Research estimated that the stationary fuel cell market would reach 50 GW by 2020. Half-cells An electrochemical cell consists of two half-cells. Each half-cell consists of an electrode and an electrolyte. The two half-cells may use the same electrolyte, or they may use different electrolytes.
The chemical reactions in the cell may involve the electrolyte, the electrodes, or an external substance (as in fuel cells that may use hydrogen gas as a reactant). In a full electrochemical cell, species from one half-cell lose electrons (oxidation) to their electrode while species from the other half-cell gain electrons (reduction) from their electrode. A salt bridge (e.g., filter paper soaked in KNO3, NaCl, or some other electrolyte) is often employed to provide ionic contact between two half-cells with different electrolytes, yet prevent the solutions from mixing and causing unwanted side reactions. An alternative to a salt bridge is to allow direct contact (and mixing) between the two half-cells, for example in simple electrolysis of water. As electrons flow from one half-cell to the other through an external circuit, a difference in charge is established. If no ionic contact were provided, this charge difference would quickly prevent the further flow of electrons. A salt bridge allows the flow of negative or positive ions to maintain a steady-state charge distribution between the oxidation and reduction vessels, while keeping the contents otherwise separate. Other devices for achieving separation of solutions are porous pots and gelled solutions. A porous pot is used in the Bunsen cell. Equilibrium reaction Each half-cell has a characteristic voltage. Various choices of substances for each half-cell give different potential differences. Each half-cell reaction is an equilibrium between different oxidation states of the ions, of the general form M^n+ + n e− ⇌ M. When equilibrium is reached, the cell cannot provide further voltage. In the half-cell that is undergoing oxidation, the closer the equilibrium lies to the ion/atom with the more positive oxidation state, the more potential this reaction will provide. Likewise, in the reduction reaction, the closer the equilibrium lies to the ion/atom with the more negative oxidation state, the higher the potential. Cell potential The cell potential can be predicted through the use of electrode potentials (the voltages of each half-cell). These half-cell potentials are defined relative to a common reference; by convention, the standard hydrogen electrode is assigned a potential of zero volts.
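To make the arithmetic in the passages above concrete, here is a minimal sketch in Python, assuming standard textbook reduction potentials for a Daniell (Zn/Cu) cell; the helper names are illustrative, not from any particular library. It combines two half-cell potentials into a standard cell potential, applies the Nernst correction for a non-standard concentration ratio, and works out the series-stacking arithmetic from the fuel-cell passage (cells of about 0.7 V each).

import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol

# Standard reduction potentials vs. the standard hydrogen electrode, in volts
# (textbook values, assumed here for illustration)
E0 = {"Cu2+/Cu": 0.34, "Zn2+/Zn": -0.76}

def standard_cell_potential(cathode, anode):
    # E0_cell = E0_cathode - E0_anode, both written as reduction potentials
    return E0[cathode] - E0[anode]

def nernst(e0_cell, n, q, t=298.15):
    # Nernst equation: E = E0 - (R*T / (n*F)) * ln(Q)
    return e0_cell - (R * t / (n * F)) * math.log(q)

e0 = standard_cell_potential("Cu2+/Cu", "Zn2+/Zn")  # 1.10 V for the Daniell cell
e = nernst(e0, n=2, q=10.0)  # Q = [Zn2+]/[Cu2+] = 10 lowers the voltage slightly
print(f"E0 = {e0:.2f} V, E(Q=10) = {e:.3f} V")

# Stacking arithmetic from the fuel-cell passage: ~0.7 V per cell in series
target_v, cell_v = 48.0, 0.7
print(f"cells for a {target_v} V stack: {math.ceil(target_v / cell_v)}")

Run as written, this prints a standard potential of 1.10 V, a Nernst-corrected value of about 1.07 V, and 69 cells for a 48 V stack.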
Since the cuticle of these animals typically forms a largely inelastic exoskeleton, it is shed during growth and a new, larger covering is formed. The remnants of the old, empty exoskeleton are called exuviae.
After moulting, an arthropod is described as teneral, or callow; it is "fresh", pale and soft-bodied. Within one or two hours, the cuticle hardens and darkens following a tanning process analogous to the production of leather. During this short phase the animal expands, since growth is otherwise constrained by the rigidity of the exoskeleton. Growth of the limbs and other parts normally covered by hard exoskeleton is achieved by transfer of body fluids from soft parts before the new skin hardens. A spider with a small abdomen may be undernourished but more probably has recently undergone ecdysis. Some arthropods, especially large insects with tracheal respiration, expand their new exoskeleton by swallowing or otherwise taking in air. The maturation of the structure and colouration of the new exoskeleton might take days or weeks in a long-lived insect; this can make it difficult to identify an individual if it has recently undergone ecdysis. Ecdysis allows damaged tissue and missing limbs to be regenerated or substantially re-formed. Complete regeneration may require a series of moults, the stump becoming a little larger with each moult until it is a normal, or near normal, size. Etymology The term ecdysis comes from Ancient Greek ἔκδυσις (ékdusis), "to take off, strip off". Process In preparation for ecdysis, the arthropod becomes inactive for a period of time, undergoing apolysis, or separation of the old exoskeleton from the underlying epidermal cells. For most organisms, the resting period is a stage of preparation during which the secretion of fluid from the moulting glands of the epidermal layer and the loosening of the underpart of the cuticle occur. Once the old cuticle has separated from the epidermis, a digesting fluid is secreted into the space between them. However, this fluid remains inactive until the upper part of the new cuticle has been formed. Then, by crawling movements, the organism pushes forward in the old integumentary shell, which splits down the back, allowing the animal to emerge. Often this initial crack is caused by a combination of movement and increase in blood pressure within the body, forcing an expansion across the exoskeleton and leading to an eventual split that allows certain organisms such as spiders to extricate themselves. While the old cuticle is being digested, the new layer is secreted. All cuticular structures are shed at ecdysis, including the inner parts of the exoskeleton, such as the terminal linings of the alimentary tract and of the tracheae if they are present. Insects Each stage of development between moults for insects in the taxon Endopterygota is called an instar, or stadium, and each stage between moults of insects in the Exopterygota is called a nymph: there may be up to 15 nymphal stages. Endopterygota tend to have only four or five instars. Endopterygotes have more alternatives to moulting, such as expansion of the cuticle and collapse of air sacs to allow growth of internal organs. The process of moulting in insects begins with the separation of the cuticle from the underlying epidermal cells (apolysis) and ends with the shedding of the old cuticle (ecdysis). In many species it is initiated by an increase in the hormone ecdysone. This hormone causes: apolysis – the separation of the cuticle from the epidermis; the secretion of new cuticle materials beneath the old; and the degradation of the old cuticle. After apolysis the insect is known as a pharate. Moulting fluid is then secreted into the exuvial space between the old cuticle and the epidermis; this contains inactive enzymes which are activated only after the new epicuticle is secreted. This prevents the new procuticle from being digested as it is laid down. The lower regions of the old cuticle, the endocuticle and mesocuticle, are then digested by the enzymes and subsequently absorbed. The exocuticle and epicuticle resist digestion and are hence shed at ecdysis. Spiders Spiders generally change their skin for the first time while still inside the egg sac, and the spiderling that emerges broadly resembles the adult. The number of moults varies, both between species and sexes, but will generally be between five and nine before the spider reaches maturity. Not surprisingly, since males are generally smaller than females, the males of many species mature faster and do not undergo ecdysis as many times as the females before maturing. Members of the Mygalomorphae are very long-lived, sometimes 20 years or more; they moult annually even after they mature. Spiders stop feeding at some time before moulting, usually for several days.
a third of the way between Armidale and the coast. Dorrigo lies to the east, with the Coffs Coast further along Waterfall Way. History Ebor is named after a nearby set of waterfalls, which are a local tourist attraction. At the 2016 census, Ebor had a population of 166 people. Borderlands Although promoted as "The Heart of Waterfall Way", Ebor is on the eastern edge of Armidale Regional Council, close to the borders of Clarence Valley Council and Bellingen Shire Council. Until the amalgamation of the Guyra and Armidale councils, one side of Ebor was under Armidale council and the other under Guyra shire. Likewise, Ebor is close to three state electoral boundaries (Northern Tablelands, Oxley and Clarence) and three federal ones (New England, Cowper and Page). Facilities Amenities in the area include a cafe; a combined post office, fuel station and general store; a pub/motel with camp ground; and a NSW DEC primary school. The local sports ground is home of the Ebor Campdraft. There are also Rural Fire Service and National Parks and Wildlife Service depots in the area, but no police or ambulance services based in Ebor. The nearest hospital and 24-hour emergency department is in Dorrigo. Features Due to its central position on Waterfall Way, Ebor offers easy access for residents and tourists to Guy Fawkes River National Park, Cathedral Rock National Park, Cunnawarra National Park, New England National Park, part of Oxley Wild Rivers National Park, Nymboi-Binderay National Park and Mount Hyland Nature Reserve. The natural environment of the surrounding district includes some areas which have been cleared for pastoralism and forestry. Nonetheless, the national parks around Ebor have been described as a bush walking "Mecca". The main tourist attraction is the twin Ebor Falls. In 1930 Sydney Smith Jr. wrote that: "During a recent visit to Ebor I was much impressed with the possibilities of this part of the State as a tourist resort... Around Ebor and Guy Fawkes can be seen some of the most magnificent scenery in this State if not Australia. ...The two falls are scenes of beauty, and in winter time are sometimes frozen, making a beautiful spectacle as they hang in huge icicles. The water from the Ebor eventually finds an outlet in the Clarence River. ...The view, ...as regards expansiveness, ruggedness, and beauty, must compare more than favourably with views of a similar nature in any part of the Commonwealth. It reminded me of the Valley of a Thousand Hills, outside Durban, in South Africa".
Kushan Empire (150 BC–300 AD) In the 3rd and 2nd centuries BC, the Parthians, a nomadic Iranian people, arrived in Western Asia. While they made large inroads into the modern-day territory of Afghanistan, about 100 years later another Indo-European group from the north, the Kushans (a subgroup of the tribe called the Yuezhi by the Chinese), entered the region of Afghanistan and established an empire lasting almost four centuries, which would dominate most of the Afghanistan region. The Kushan Empire spread from the Kabul River valley to defeat other Central Asian tribes that had previously conquered parts of the northern central Iranian Plateau once ruled by the Parthians. By the middle of the 1st century BC, the Kushans' base of control became Afghanistan and their empire spanned from the north of the Pamir mountains to the Ganges river valley in India. Early in the 2nd century under Kanishka, the most powerful of the Kushan rulers, the empire reached its greatest geographic and cultural breadth to become a center of literature and art. Kanishka extended Kushan control to the mouth of the Indus River on the Arabian Sea, into Kashmir, and into what is today the Chinese-controlled area north of Tibet. Kanishka was a patron of religion and the arts. It was during his reign that Buddhism, which was promoted in northern India earlier by the Mauryan emperor Ashoka (c. 260 BC–232 BC), reached its zenith in Central Asia. The Kushans also supported local Buddhists and Hindus, as well as the worship of various local deities. Sasanian & Hephthalite invasions (300–650) In the 3rd century, Kushan control fragmented into semi-independent kingdoms that became easy targets for conquest by the rising Iranian dynasty, the Sasanians (c. 224–561), which annexed Afghanistan by 300 AD. In these far-off easternmost territories, they established vassal kings as rulers, known as the Kushanshahs. Sasanian control was tenuous at times, as numerous challenges from Central Asian tribes led to instability and constant warfare in the region. The disunited Kushan and Sasanian kingdoms were in a poor position to meet the threat of several waves of Xionite/Huna invaders from the north from the 4th century onwards. In particular, the Hephthalites (or Ebodalo; Bactrian script ηβοδαλο) swept out of Central Asia during the 5th century into Bactria and Iran, overwhelming the last of the Kushan kingdoms. Historians believe that Hephthalite control continued for a century and was marked by constant warfare with the Sassanians to the west, who exerted nominal control over the region. By the middle of the 6th century, the Hephthalites were defeated in the territories north of the Amu Darya (the Oxus River of antiquity) by another group of Central Asian nomads, the Göktürks, and by the resurgent Sassanians in the lands south of the Amu Darya. It was the ruler of the western Göktürks, Sijin (also known as Sinjibu, Silzibul and Yandu Muchu Khan), who led the forces against the Hephthalites, who were defeated at the Battle of Chach (Tashkent) and at the Battle of Bukhara. Kabul Shahi The Shahi dynasties ruled portions of the Kabul Valley (in eastern Afghanistan) and the old province of Gandhara (northern Pakistan and Kashmir) from the decline of the Kushan Empire in the 3rd century to the early 9th century. They are split into two eras, the Buddhist Turk Shahis and the later Hindu Shahis, with the changeover occurring around 870, and ruled up until the Islamic conquest of Afghanistan.
When Xuanzang visited the region early in the 7th century, the Kabul region was ruled by a Kshatriya king, identified as the Shahi Khingal, whose name has been found in an inscription at Gardez. The Turkic Shahi regency was overthrown and replaced by a Mohyal Shahi dynasty of Brahmins, who began the first phase of the Hindu Shahi dynasty. These Hindu kings of Kabul and Gandhara may have had links to some ruling families in neighboring Kashmir and other areas to the east. The Shahis were rulers of predominantly Buddhist, Zoroastrian, Hindu and Muslim populations and were thus patrons of numerous faiths, and various artifacts and coins from their rule have been found that display their multicultural domain. In 964 AD, the last Mohyal Shahi was succeeded by the Janjua overlord Jayapala, of the Panduvanshi dynasty. The last Shahi emperors, Jayapala, Anandapala and Tirlochanpala, fought the Muslim Ghaznavids of Ghazna and were gradually defeated; their remaining army was eventually exiled into northern India. Archaeological remnants Most of the Zoroastrian, Greek, Hellenistic, Buddhist, Hindu and other indigenous cultures were replaced by the coming of Islam, and little of their influence remains in Afghanistan today. Along ancient trade routes, however, stone monuments of the once flourishing Buddhist culture did exist as reminders of the past. The two massive sandstone Buddhas of Bamyan, 35 and 53 meters high, overlooked the ancient route through Bamyan to Balkh and dated from the 3rd and 5th centuries. They survived until 2001, when they were destroyed by the Taliban. In this and other key places in Afghanistan, archaeologists have located frescoes, stucco decorations, statuary, and rare objects from as far away as China, Phoenicia, and Rome, crafted as early as the 2nd century, which bear witness to the influence of these ancient civilizations upon Afghanistan. One of the early Buddhist schools, the Mahāsāṃghika-Lokottaravāda, was known to be prominent in the area of Bamiyan. The Chinese Buddhist monk Xuanzang visited a Lokottaravāda monastery at Bamiyan in the 7th century CE, and this monastery site has since been rediscovered by archaeologists. Birchbark and palm leaf manuscripts of texts in this monastery's collection, including Mahāyāna sūtras, have been discovered at the site, and these are now located in the Schøyen Collection. Some manuscripts are in the Gāndhārī language and Kharoṣṭhī script, while others are in Sanskrit and written in forms of the Gupta script. Manuscripts and fragments that have survived from this monastery's collection include well-known Buddhist texts such as the Mahāparinirvāṇa Sūtra (from the Āgamas), the Diamond Sūtra (Vajracchedikā Prajñāpāramitā), the Medicine Buddha Sūtra, and the Śrīmālādevī Siṃhanāda Sūtra. In 2010, reports stated that about 42 Buddhist relics had been discovered in the Logar Province of Afghanistan, south of Kabul; some of these items date back to the 2nd century, according to archaeologists. The items included two Buddhist temples (stupas), Buddha statues, and frescoes, among other artifacts. Zoroastrianism spread to become one of the world's most influential religions and became the main faith of the old Aryan people for centuries. It also remained the official religion of Persia until the defeat of the Sassanian ruler Yazdegerd III, over a thousand years after its founding, by Muslim Arabs. In what is today southern Iran, the Persians emerged to challenge Median supremacy on the Iranian plateau.
By 550 BC, the Persians had replaced Median rule with their own dominion and even began to expand past previous Median imperial borders. Both the Gandhara and Kamboja mahajanapadas of the Buddhist texts soon fell prey to the Achaemenid dynasty, during the reign of Cyrus the Great (558–530 BC) or in the first year of Darius I, becoming some of the easternmost provinces of the empire, located partly in present-day Afghanistan. According to Pliny, Cyrus the Great (Cyrus II) destroyed Kapisa in Capiscene, which was a Kamboja city. The former region of Gandhara and Kamboja (upper Indus) constituted the seventh satrapy of the Achaemenid Empire and annually contributed 170 talents of gold dust as tribute to the Achaemenids. Bactria had a special position in old Afghanistan, being the capital of a vice-kingdom. By the 4th century BC, Persian control of outlying areas and the internal cohesion of the empire had become somewhat tenuous. Although distant provinces like Bactriana had often been restless under Achaemenid rule, Bactrian troops nevertheless fought in the decisive Battle of Gaugamela in 330 BC against the advancing armies of Alexander the Great. The Achaemenids were decisively defeated by Alexander and retreated from his advancing army of Greco-Macedonians and their allies. Darius III, the last Achaemenid ruler, tried to flee to Bactria but was assassinated by a subordinate lord, the Bactrian-born Bessus, who proclaimed himself the new ruler of Persia as Artaxerxes V. Bessus was unable to mount a successful resistance to the growing military might of Alexander's army, so he fled to his native Bactria, where he attempted to rally local tribes to his side but was instead turned over to Alexander, who proceeded to have him tortured and executed for having committed regicide. Alexander the Great to Greco-Bactrian rule (330 BC–ca. 150 BC) Moving thousands of kilometers eastward from recently subdued Persia, the Macedonian leader Alexander the Great encountered fierce resistance from the local tribes of Aria, Drangiana, Arachosia (south and eastern Afghanistan, north-west Pakistan) and Bactria (north and central Afghanistan). Upon Alexander's death in 323 BC, his empire, which had never been politically consolidated, broke apart as his companions began to divide it amongst themselves. Alexander's cavalry commander, Seleucus, took nominal control of the eastern lands and founded the Seleucid dynasty. Under the Seleucids, as under Alexander, Greek colonists and soldiers colonized Bactria, roughly corresponding to modern Afghanistan's borders. However, the majority of the Macedonian soldiers of Alexander the Great wanted to leave the east and return home to Greece. Later, Seleucus sought to guard his eastern frontier and moved Ionian Greeks (also known as Yavanas to many local groups) to Bactria in the 3rd century BC. Maurya Empire While the Diadochi were warring amongst themselves, the Mauryan Empire was developing in the northern part of the Indian subcontinent. The founder of the empire, Chandragupta Maurya, confronted a Macedonian invasion force led by Seleucus I in 305 BC, and following a brief conflict, an agreement was reached whereby Seleucus ceded Gandhara and Arachosia (centered around ancient Kandahar) and areas south of Bagram (corresponding to the extreme south-east of modern Afghanistan) to the Mauryans.
During the 120 years of the Mauryans in southern Afghanistan, Buddhism was introduced and eventually became a major religion alongside Zoroastrianism and local pagan beliefs. The ancient Grand Trunk Road was built, linking what is now Kabul to various cities in the Punjab and the Gangetic Plain. Commerce, art, and architecture (seen especially in the construction of stupas) developed during this period. It reached its high point under Emperor Ashoka, whose edicts, roads, and rest stops were found throughout the subcontinent. Although the vast majority of the edicts throughout the subcontinent were written in Prakrit, the court language of the Mauryans, those in Afghanistan are notable for including Greek and Aramaic versions. Inscriptions made by the Mauryan emperor Ashoka, including a fragment of Edict 13 in Greek and a full edict written in both Greek and Aramaic, have been discovered in Kandahar. The Greek is said to be excellent Classical Greek, using sophisticated philosophical terms. In this edict, Ashoka uses the word Eusebeia ("Piety") as the Greek translation for the ubiquitous "Dharma" of his other edicts written in Prakrit: "Ten years (of reign) having been completed, King Piodasses (Ashoka) made known (the doctrine of) Piety (εὐσέβεια, Eusebeia) to men; and from this moment he has made men more pious, and everything thrives throughout the whole world. And the king abstains from (killing) living beings, and other men and those who (are) huntsmen and fishermen of the king have desisted from hunting. And if some (were) intemperate, they have ceased from their intemperance as was in their power; and obedient to their father and mother and to the elders, in opposition to the past also in the future, by so acting on every occasion, they will live better and more happily." (Trans. by G.P. Carratelli) The last ruler in the region was probably Subhagasena (Sophagasenus of Polybius), who in all probability belonged to the Ashvaka background. Greco-Bactrians In the middle of the 3rd century BC, an independent Hellenistic state was declared in Bactria, and the control of the Seleucids and Mauryans was eventually overthrown in western and southern Afghanistan. Graeco-Bactrian rule spread until it included a large territory which stretched from Turkmenistan in the west to the Punjab in India in the east by about 170 BC. Graeco-Bactrian rule was eventually defeated by a combination of internecine disputes that plagued Greek and Hellenized rulers to the west, continual conflict with Indian kingdoms, and the pressure of two groups of nomadic invaders from Central Asia: the Parthians and Sakas.
by the Schwarzschild metric, $d\tau^2 = \left(1 - \frac{r_s}{R}\right)dt^2 - \ldots$, where $d\tau$ is the clock time of an observer at distance R from the center, $dt$ is the time measured by an observer at infinity, $r_s$ is the Schwarzschild radius $2GM/c^2$, "$\ldots$" represents terms that vanish if the observer is at rest, $G$ is Newton's gravitational constant, $M$ the mass of the gravitating body, and $c$ the speed of light. The result is that frequencies and wavelengths are shifted according to the ratio $\frac{\lambda_\infty}{\lambda_e} = \left(1 - \frac{r_s}{R_e}\right)^{-1/2}$, where $\lambda_\infty$ is the wavelength of the light as measured by the observer at infinity, $\lambda_e$ is the wavelength measured at the source of emission, and $R_e$ is the radius at which the photon is emitted. This can be related to the redshift parameter conventionally defined as $z = \lambda_\infty/\lambda_e - 1$. In the case where neither the emitter nor the observer is at infinity, the transitivity of Doppler shifts allows us to generalize the result to $\frac{\lambda_1}{\lambda_2} = \left(\frac{1 - r_s/R_2}{1 - r_s/R_1}\right)^{1/2}$. The redshift formula for the frequency $\nu = c/\lambda$ is $\frac{\nu_\infty}{\nu_e} = \frac{\lambda_e}{\lambda_\infty} = \left(1 - \frac{r_s}{R_e}\right)^{1/2}$. When $r_s/R_e$ is small, these results are consistent with the equation given above based on the equivalence principle. The redshift ratio may also be expressed in terms of a (Newtonian) escape velocity $v_e$ at $R_e$, resulting in the corresponding Lorentz factor: $\frac{\lambda_\infty}{\lambda_e} = \gamma_e = \left(1 - \frac{v_e^2}{c^2}\right)^{-1/2}$. For an object compact enough to have an event horizon, the redshift is not defined for photons emitted inside the Schwarzschild radius, both because signals cannot escape from inside the horizon and because an object such as the emitter cannot be stationary inside the horizon, as was assumed above. Therefore, this formula only applies when $R_e$ is larger than $r_s$. When the photon is emitted at a distance equal to the Schwarzschild radius, the redshift will be infinitely large, and it will not escape to any finite distance from the Schwarzschild sphere. When the photon is emitted at an infinitely large distance, there is no redshift. Newtonian limit In the Newtonian limit, i.e. when $R_e$ is sufficiently large compared to the Schwarzschild radius $r_s$, the redshift can be approximated as $z \approx \frac{1}{2}\frac{r_s}{R_e} = \frac{GM}{c^2 R_e} = \frac{g R_e}{c^2}$, where $g$ is the gravitational acceleration at $R_e$. For Earth's surface with respect to infinity, z is approximately $7 \times 10^{-10}$ (the equivalent of a 0.2 m/s radial Doppler shift); for the Moon it is approximately $3 \times 10^{-11}$ (about 1 cm/s). The value for the surface of the sun is about $2 \times 10^{-6}$, corresponding to 0.64 km/s. (For non-relativistic velocities, the radial Doppler equivalent velocity can be approximated by multiplying z with the speed of light.) The z-value can be expressed succinctly in terms of the escape velocity at $R_e$, since the gravitational potential is equal to half the square of the escape velocity, thus: $z \approx \frac{1}{2}\left(\frac{v_e}{c}\right)^2$, where $v_e$ is the escape velocity at $R_e$. It can also be related to the circular orbit velocity $v_o$ at $R_e$, which equals $v_e/\sqrt{2}$, thus $z \approx \left(\frac{v_o}{c}\right)^2$. For example, the gravitational blueshift of distant starlight due to the sun's gravity, which the Earth is orbiting at about 30 km/s, would be approximately $1 \times 10^{-8}$ or the equivalent of a 3 m/s radial Doppler shift. However, the Earth is in free-fall around the sun, and is thus an inertial observer, so the effect is not visible. For an object in a (circular) orbit, the gravitational redshift is of comparable magnitude as the transverse Doppler effect, $z \approx \frac{1}{2}\beta^2$, where $\beta = v/c$, while both are much smaller than the radial Doppler effect, for which $z \approx \beta$. Experimental verification Astronomical observations A number of experimenters initially claimed to have identified the effect using astronomical measurements, and the effect was considered to have been finally identified in the spectral lines of the star Sirius B by W.S. Adams in 1925.
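The surface values quoted above are easy to reproduce numerically. Below is a minimal Python sketch (not from the source) comparing the exact Schwarzschild expression with its Newtonian limit; the mass and radius figures are standard approximate values supplied here for illustration:

```python
# Sketch: surface gravitational redshift, exact Schwarzschild vs Newtonian limit.
# Mass/radius values are standard textbook approximations, not from the text.
import math

C = 2.998e8    # speed of light, m/s
G = 6.674e-11  # Newton's gravitational constant, m^3 kg^-1 s^-2

bodies = {
    # name: (mass in kg, radius in m)
    "Earth": (5.972e24, 6.371e6),
    "Moon":  (7.342e22, 1.737e6),
    "Sun":   (1.989e30, 6.963e8),
}

for name, (M, R) in bodies.items():
    rs = 2 * G * M / C**2                    # Schwarzschild radius
    z_exact = 1 / math.sqrt(1 - rs / R) - 1  # z = (1 - rs/R)^(-1/2) - 1
    z_newton = rs / (2 * R)                  # Newtonian limit, z ~ GM/(c^2 R)
    print(f"{name}: z ~ {z_exact:.1e} (Newtonian {z_newton:.1e}, "
          f"~{z_newton * C:.2f} m/s Doppler-equivalent)")
```

Running this gives roughly $7 \times 10^{-10}$ for Earth, $3 \times 10^{-11}$ for the Moon and $2 \times 10^{-6}$ (about 0.64 km/s) for the Sun, matching the figures quoted above.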
However, measurements by Adams have been criticized as being too low and these observations are now considered to be measurements of spectra that are unusable because of scattered light from the primary, Sirius A. The first accurate measurement of the gravitational redshift of a white dwarf was done by Popper in 1954, measuring a 21 km/s gravitational redshift of 40 Eridani B. The redshift of Sirius B was finally measured by Greenstein et al. in 1971, who obtained a value of 89 ± 19 km/s for the gravitational redshift, with more accurate measurements by the Hubble Space Telescope showing 80.4 ± 4.8 km/s. James W. Brault, a graduate student of Robert Dicke at Princeton University, measured the gravitational redshift of the sun using optical methods in 1962. In 2020, a team of scientists published the most accurate measurement of the solar gravitational redshift so far, made by analyzing Fe spectral lines in sunlight reflected by the moon; their measurement of a mean global 638 ± 6 m/s lineshift is in agreement with the theoretical value of 633.1 m/s. Measuring the solar redshift is complicated by the Doppler shift caused by the motion of the sun's surface, which is of similar magnitude as the gravitational effect. In 2011 the group of Radek Wojtak of the Niels Bohr Institute at the University of Copenhagen collected data from 8000 galaxy clusters and found that the light coming from the cluster centers tended to be red-shifted compared to the cluster edges, confirming the energy loss due to gravity. In 2018, the star S2 made its closest approach to Sgr A*, the 4-million solar mass supermassive black hole at the centre of the Milky Way, reaching 7650 km/s or about 2.5% of the speed of light while passing the black hole at a distance of just 120 AU, or 1400 Schwarzschild radii. Independent analyses by the GRAVITY collaboration (led by Reinhard Genzel) and the KECK/UCLA Galactic Center Group (led by Andrea Ghez) revealed a combined transverse Doppler and gravitational redshift of up to 200 km/s, in agreement with general relativity predictions. In 2021, Mediavilla (IAC, Spain) & Jiménez-Vicente (UGR, Spain) were able to use measurements of the gravitational redshift in quasars up to a cosmological redshift of z ~ 3 to confirm the predictions of Einstein's Equivalence Principle and the lack of cosmological evolution within 13%. Terrestrial tests The effect is now considered to have been definitively verified by the experiments of Pound, Rebka and Snider between 1959 and 1965. The Pound–Rebka experiment of 1959 measured the gravitational redshift in spectral lines using a terrestrial $^{57}$Fe gamma source over a vertical height of 22.5 metres. This paper was the first determination of the gravitational redshift which used measurements of the change in wavelength of gamma-ray photons generated with the Mössbauer effect, which generates radiation with a very narrow line width. The accuracy of the gamma-ray measurements was typically 1%. An improved experiment was done by Pound and Snider in 1965, with an accuracy better than the 1% level.
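Two of the measurements above can be cross-checked against the weak-field formulas from the previous section. The following short sketch uses only the figures quoted in the text (22.5 m tower height; 7650 km/s and 1400 Schwarzschild radii for S2), plus standard values for g and c; it is a rough consistency check, not a re-analysis:

```python
# Cross-checks of two measurements described above, using the weak-field
# formulas from the earlier section. Inputs are the figures quoted in the
# text; g and c are standard constants.

C = 2.998e8  # speed of light, m/s

# Pound-Rebka (1959): gamma photons over a 22.5 m vertical drop.
g, h = 9.81, 22.5            # surface gravity (m/s^2), height (m)
z_pr = g * h / C**2          # fractional frequency shift, z ~ g*h/c^2
print(f"Pound-Rebka: z ~ {z_pr:.2e}")   # ~2.5e-15

# S2 pericentre passage of Sgr A* (2018).
v = 7650e3                   # pericentre speed, m/s
r_in_rs = 1400               # pericentre distance in Schwarzschild radii
z_grav = 0.5 / r_in_rs       # gravitational redshift, z ~ rs/(2R)
z_tdop = 0.5 * (v / C)**2    # transverse Doppler term, z ~ beta^2/2
print(f"S2: combined shift ~ {(z_grav + z_tdop) * C / 1e3:.0f} km/s")  # ~200
```

The ≈2.5 × 10⁻¹⁵ figure is the fractional shift the Pound–Rebka apparatus had to resolve, and the combined S2 shift comes out near the 200 km/s result quoted above.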
A very accurate gravitational redshift experiment was performed in 1976, where a hydrogen maser clock on a rocket was launched to a height of 10,000 km, and its rate compared with an identical clock on the ground. It tested the gravitational redshift to 0.007%. Later tests can be done with the Global Positioning System (GPS), which must account for the gravitational redshift in its timing system, and physicists have analyzed timing data from the GPS to confirm other tests. When the first satellite was launched, it showed the predicted shift of 38 microseconds per day. This rate of discrepancy is sufficient to substantially impair the function of GPS within hours if not accounted for. An excellent account of the role played by general relativity in the design of GPS can be found in Ashby 2003. In 2020 a group at the University of Tokyo measured the gravitational redshift of two strontium-87 optical lattice clocks. The measurement took place at Tokyo Skytree, where the clocks were separated by approximately 450 m and connected by telecom fibers. The gravitational redshift can be expressed as $\frac{\Delta\nu}{\nu} = (1 + \alpha)\frac{\Delta U}{c^2}$, where $\Delta\nu$ is the gravitational redshift, $\nu$ is the optical clock transition frequency, $\Delta U$ is the difference in gravitational potential, and $\alpha$ denotes the violation from general relativity. By Ramsey spectroscopy of the strontium-87 optical clock transition (429 THz, 698 nm) the group determined the gravitational redshift between the two optical clocks to be 21.18 Hz, corresponding to a z-value of approximately $5 \times 10^{-14}$. Their measured value of $\alpha$ is in agreement with recent measurements made with hydrogen masers in elliptical orbits. Early historical development of the theory The gravitational weakening of light from high-gravity stars was predicted by John Michell in 1783 and Pierre-Simon Laplace in 1796, using Isaac Newton's concept of light corpuscles (see: emission theory); they predicted that some stars would have a gravity so strong that light would not be able to escape. The effect of gravity on light was then explored by Johann Georg von Soldner (1801), who calculated the amount of deflection of a light ray by the sun, arriving at the Newtonian answer which is half the value predicted by general relativity. All of this early work assumed that light could slow down and fall, which is inconsistent with the modern understanding of light waves. Once it became accepted that light was an electromagnetic wave, it was clear that the frequency of light should not change from place to place, since waves from a source with a fixed frequency keep the same frequency everywhere. One way around this conclusion would be if time itself were altered—if clocks at different points had different rates. This was precisely Einstein's conclusion in 1911. He considered an accelerating box, and noted that according to the special theory of relativity, the clock rate at the "bottom" of the box (the side away from the direction of acceleration) was slower than the clock rate at the "top" (the side toward the direction of acceleration). Nowadays, this can be easily shown in accelerated coordinates. The metric tensor in units where the speed of light is one is $ds^2 = -r^2\,dt^2 + dr^2$, and for an observer at a constant value of r, the rate at which a clock ticks, R(r), |
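As a final sanity check on the optical-lattice-clock measurement described above, the measured fractional shift can be compared with the weak-field prediction. This is a sketch using the figures quoted in the text; g ≈ 9.8 m/s² is an assumed local value:

```python
# Sanity check of the strontium optical-lattice-clock measurement described
# above. Inputs are the figures quoted in the text; g is an assumed value.

C = 2.998e8      # speed of light, m/s
G_LOCAL = 9.80   # local gravitational acceleration, m/s^2 (assumed)

nu = 429e12      # Sr-87 clock transition frequency, Hz
dh = 450.0       # height separation between the clocks, m
dnu = 21.18      # measured frequency difference, Hz

z_measured = dnu / nu                 # ~4.9e-14
z_predicted = G_LOCAL * dh / C**2     # weak-field prediction, z ~ g*dh/c^2
print(f"measured:  z ~ {z_measured:.2e}")
print(f"predicted: z ~ {z_predicted:.2e}")
```

Both values come out near $5 \times 10^{-14}$, consistent with the z-value reported by the group.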
Volunteers who turned out. British Naval Intelligence had been aware of the arms shipment, Casement's return, and the Easter date for the rising through radio messages between Germany and its embassy in the United States that were intercepted by the Royal Navy and deciphered in Room 40 of the Admiralty. The information was passed to the Under-Secretary for Ireland, Sir Matthew Nathan, on 17 April, but without revealing its source, and Nathan was doubtful about its accuracy. When news reached Dublin of the capture of the Aud and the arrest of Casement, Nathan conferred with the Lord Lieutenant, Lord Wimborne. Nathan proposed to raid Liberty Hall, headquarters of the Citizen Army, and Volunteer properties at Father Matthew Park and at Kimmage, but Wimborne insisted on wholesale arrests of the leaders. It was decided to postpone action until after Easter Monday, and in the meantime, Nathan telegraphed the Chief Secretary, Augustine Birrell, in London seeking his approval. By the time Birrell cabled his reply authorising the action, at noon on Monday 24 April 1916, the Rising had already begun. On the morning of Easter Sunday, 23 April, the Military Council met at Liberty Hall to discuss what to do in light of MacNeill's countermanding order. They decided that the Rising would go ahead the following day, Easter Monday, and that the Irish Volunteers and Irish Citizen Army would go into action as the 'Army of the Irish Republic'. They elected Pearse as president of the Irish Republic, and also as Commander-in-Chief of the army; Connolly became Commandant of the Dublin Brigade. Messengers were then sent to all units informing them of the new orders. The Rising in Dublin Easter Monday On the morning of Monday 24 April, about 1,200 members of the Irish Volunteers and Irish Citizen Army mustered at several locations in central Dublin. Among them were members of the all-female Cumann na mBan. Some wore Irish Volunteer and Citizen Army uniforms, while others wore civilian clothes with a yellow Irish Volunteer armband, military hats, and bandoliers. They were armed mostly with rifles (especially 1871 Mausers), but also with shotguns, revolvers, a few Mauser C96 semi-automatic pistols, and grenades. The number of Volunteers who mobilised was much smaller than expected. This was due to MacNeill's countermanding order, and the fact that the new orders had been sent only shortly beforehand. However, several hundred Volunteers joined the Rising after it began. Shortly before midday, the rebels began to seize important sites in central Dublin. The rebels' plan was to hold Dublin city centre. This was a large, oval-shaped area bounded by two canals: the Grand to the south and the Royal to the north, with the River Liffey running through the middle. On the southern and western edges of this district were five British Army barracks. Most of the rebels' positions had been chosen to defend against counter-attacks from these barracks. The rebels took the positions with ease. Civilians were evacuated and policemen were ejected or taken prisoner. Windows and doors were barricaded, food and supplies were secured, and first aid posts were set up. Barricades were erected on the streets to hinder British Army movement. A joint force of about 400 Volunteers and Citizen Army gathered at Liberty Hall under the command of Commandant James Connolly. This was the headquarters battalion, and it also included Commander-in-Chief Patrick Pearse, as well as Tom Clarke, Seán Mac Diarmada and Joseph Plunkett.
They marched to the General Post Office (GPO) on O'Connell Street, Dublin's main thoroughfare, occupied the building and hoisted two republican flags. Pearse stood outside and read the Proclamation of the Irish Republic. Copies of the Proclamation were also pasted on walls and handed out to bystanders by Volunteers and newsboys. The GPO would be the rebels' headquarters for most of the Rising. Volunteers from the GPO also occupied other buildings on the street, including buildings overlooking O'Connell Bridge. They took over a wireless telegraph station and sent out a radio broadcast in Morse code, announcing that an Irish Republic had been declared. This was the first radio broadcast in Ireland. Elsewhere, some of the headquarters battalion under Michael Mallin occupied St Stephen's Green, where they dug trenches and barricaded the surrounding roads. The 1st battalion, under Edward 'Ned' Daly, occupied the Four Courts and surrounding buildings, while a company under Seán Heuston occupied the Mendicity Institution, across the River Liffey from the Four Courts. The 2nd battalion, under Thomas MacDonagh, occupied Jacob's biscuit factory. The 3rd battalion, under Éamon de Valera, occupied Boland's Mill and surrounding buildings. The 4th battalion, under Éamonn Ceannt, occupied the South Dublin Union and the distillery on Marrowbone Lane. From each of these garrisons, small units of rebels established outposts in the surrounding area. The rebels also attempted to cut transport and communication links. As well as erecting roadblocks, they took control of various bridges and cut telephone and telegraph wires. Westland Row and Harcourt Street railway stations were occupied, though the latter only briefly. The railway line was cut at Fairview and the line was damaged by bombs at Amiens Street, Broadstone, Kingsbridge and Lansdowne Road. Around midday, a small team of Volunteers and Fianna Éireann members swiftly captured the Magazine Fort in the Phoenix Park and disarmed the guards. The goal was to seize weapons and blow up the ammunition store to signal that the Rising had begun. They seized weapons and planted explosives, but the blast was not loud enough to be heard across the city. The 23-year-old son of the fort's commander was fatally shot when he ran to raise the alarm. A contingent under Seán Connolly occupied Dublin City Hall and adjacent buildings. They attempted to seize neighbouring Dublin Castle, the heart of British rule in Ireland. As they approached the gate a lone and unarmed police sentry, James O'Brien, attempted to stop them and was shot dead by Connolly. According to some accounts, he was the first casualty of the Rising. The rebels overpowered the soldiers in the guardroom but failed to press further. The British Army's chief intelligence officer, Major Ivon Price, fired on the rebels while the Under-Secretary for Ireland, Sir Matthew Nathan, helped shut the castle gates. Unbeknownst to the rebels, the Castle was lightly guarded and could have been taken with ease. The rebels instead laid siege to the Castle from City Hall. Fierce fighting erupted there after British reinforcements arrived. The rebels on the roof exchanged fire with soldiers on the street. Seán Connolly was shot dead by a sniper, becoming the first rebel casualty. By the following morning, British forces had re-captured City Hall and taken the rebels prisoner. 
The rebels did not attempt to take some other key locations, notably Trinity College, in the heart of the city centre and defended by only a handful of armed unionist students. Failure to capture the telephone exchange in Crown Alley left communications in the hands of the Government, with GPO staff quickly repairing telephone wires that had been cut by the rebels. The failure to occupy strategic locations was attributed to lack of manpower. In at least two incidents, at Jacob's and Stephen's Green, the Volunteers and Citizen Army shot dead civilians trying to attack them or dismantle their barricades. Elsewhere, they hit civilians with their rifle butts to drive them off. The British military were caught totally unprepared by the Rising and their response on the first day was generally uncoordinated. Two squadrons of British cavalry were sent to investigate what was happening. They took fire and casualties from rebel forces at the GPO and at the Four Courts. As one troop passed Nelson's Pillar, the rebels opened fire from the GPO, killing three cavalrymen and two horses and fatally wounding a fourth man. The cavalrymen retreated and were withdrawn to barracks. On Mount Street, a group of Volunteer Training Corps men stumbled upon the rebel position and four were killed before they reached Beggars Bush Barracks. The only substantial combat of the first day of the Rising took place at the South Dublin Union, where a piquet from the Royal Irish Regiment encountered an outpost of Éamonn Ceannt's force at the complex's northwestern corner. The British troops, after taking some casualties, managed to regroup and launch several assaults on the position before they forced their way inside and the small rebel force in the tin huts at the eastern end of the Union surrendered. However, the Union complex as a whole remained in rebel hands. A nurse in uniform, Margaret Keogh, was shot dead by British soldiers at the Union. She is believed to have been the first civilian killed in the Rising. Three unarmed Dublin Metropolitan Police were shot dead on the first day of the Rising and their Commissioner pulled the force off the streets. Partly as a result of the police withdrawal, a wave of looting broke out in the city centre, especially in the area of O'Connell Street (still officially called "Sackville Street" at the time). Tuesday and Wednesday Lord Wimborne, the Lord Lieutenant, declared martial law on Tuesday evening and handed over civil power to Brigadier-General William Lowe. British forces initially put their efforts into securing the approaches to Dublin Castle and isolating the rebel headquarters, which they believed was in Liberty Hall. The British commander, Lowe, worked slowly, unsure of the size of the force he was up against, and with only 1,269 troops in the city when he arrived from the Curragh Camp in the early hours of Tuesday 25 April. City Hall was taken from the rebel unit that had attacked Dublin Castle on Tuesday morning. In the early hours of Tuesday, 120 British soldiers, with machine-guns, occupied two buildings overlooking St Stephen's Green: the Shelbourne Hotel and United Services Club. At dawn they opened fire on the Citizen Army occupying the green. The rebels returned fire but were forced to retreat to the Royal College of Surgeons building. They remained there for the rest of the week, exchanging fire with British forces. Fighting erupted along the northern edge of the city centre on Tuesday afternoon.
In the northeast, British troops left Amiens Street railway station in an armoured train, to secure and repair a section of damaged tracks. They were attacked by rebels who had taken up position at Annesley Bridge. After a two-hour battle, the British were forced to retreat and several soldiers were captured. At Phibsborough, in the northwest, rebels had occupied buildings and erected barricades at junctions on the North Circular Road. The British summoned 18-pounder field artillery from Athlone and shelled the rebel positions, destroying the barricades. After a fierce firefight, the rebels withdrew. That afternoon Pearse walked out into O'Connell Street with a small escort and stood in front of Nelson's Pillar. As a large crowd gathered, he read out a 'manifesto to the citizens of Dublin,' calling on them to support the Rising. The rebels had failed to take either of Dublin's two main railway stations or either of its ports, at Dublin Port and Kingstown. As a result, during the following week, the British were able to bring in thousands of reinforcements from Britain and from their garrisons at the Curragh and Belfast. By the end of the week, British strength stood at over 16,000 men. Their firepower was provided by field artillery which they positioned on the Northside of the city at Phibsborough and at Trinity College, and by the patrol vessel Helga, which sailed up the Liffey, having been summoned from the port at Kingstown. On Wednesday, 26 April, the guns at Trinity College and Helga shelled Liberty Hall, and the Trinity College guns then began firing at rebel positions, first at Boland's Mill and then in O'Connell Street. Some rebel commanders, particularly James Connolly, did not believe that the British would shell the 'second city' of the British Empire. The principal rebel positions at the GPO, the Four Courts, Jacob's Factory and Boland's Mill saw little action. The British surrounded and bombarded them rather than assault them directly. One Volunteer in the GPO recalled, "we did practically no shooting as there was no target". However, where the rebels dominated the routes by which the British tried to funnel reinforcements into the city, there was fierce fighting. At 5:25 pm, Volunteers Eamon Martin, Garry Holohan, Robert Beggs, Sean Cody, Dinny O'Callaghan, Charles Shelley, Peadar Breslin and five others attempted to occupy Broadstone railway station on Church Street; the attack was unsuccessful and Martin was injured. On Wednesday morning, hundreds of British troops encircled the Mendicity Institution, which was occupied by 26 Volunteers under Seán Heuston. British troops advanced on the building, supported by snipers and machine-gun fire, but the Volunteers put up stiff resistance. Eventually, the troops got close enough to hurl grenades into the building, some of which the rebels threw back. Exhausted and almost out of ammunition, Heuston's men became the first rebel position to surrender. Heuston had been ordered to hold his position for a few hours, to delay the British, but had held on for three days. Reinforcements were sent to Dublin from Britain and disembarked at Kingstown on the morning of Wednesday 26 April. Heavy fighting occurred at the rebel-held positions around the Grand Canal as these troops advanced towards Dublin. More than 1,000 Sherwood Foresters were repeatedly caught in a cross-fire trying to cross the canal at Mount Street Bridge. Seventeen Volunteers were able to severely disrupt the British advance, killing or wounding 240 men.
Despite there being alternative routes across the canal nearby, General Lowe ordered repeated frontal assaults on the Mount Street position. The British eventually took the position, which had not been reinforced by the nearby rebel garrison at Boland's Mill, on Thursday, but the fighting there inflicted up to two-thirds of their casualties for the entire week for a cost of just four dead Volunteers. It had taken nearly nine hours for the British to advance. On Wednesday Linenhall Barracks on Constitution Hill was burnt down under the orders of Commandant Edward Daly to prevent its reoccupation by the British. Thursday to Saturday The rebel position at the South Dublin Union (site of the present-day St. James's Hospital) and Marrowbone Lane, further west along the canal, also inflicted heavy losses on British troops. The South Dublin Union was a large complex of buildings and there was vicious fighting around and inside the buildings. Cathal Brugha, a rebel officer, distinguished himself in this action and was badly wounded. By the end of the week, the British had taken some of the buildings in the Union, but others remained in rebel hands. British troops also took casualties in unsuccessful frontal assaults on the Marrowbone Lane Distillery. The third major scene of fighting during the week was in the area of North King Street, north of the Four Courts. The rebels had established strong outposts in the area, occupying numerous small buildings and barricading the streets. From Thursday to Saturday, the British made repeated attempts to capture the area, in what was some of the fiercest fighting of the Rising. As the troops moved in, the rebels continually opened fire from windows and behind chimneys and barricades. At one point, a platoon led by Major Sheppard made a bayonet charge on one of the barricades but was cut down by rebel fire. The British employed machine guns and attempted to avoid direct fire by using makeshift armoured trucks, and by mouse-holing through the inside walls of terraced houses to get near the rebel positions. By the time of the rebel headquarters' surrender on Saturday, the South Staffordshire Regiment under Colonel Taylor had advanced only down the street at a cost of 11 dead and 28 wounded. The enraged troops broke into the houses along the street and shot or bayoneted fifteen unarmed male civilians whom they accused of being rebel fighters. Elsewhere, at Portobello Barracks, an officer named Bowen-Colthurst summarily executed six civilians, including the pacifist nationalist activist, Francis Sheehy-Skeffington. These instances of British troops killing Irish civilians would later be highly controversial in Ireland. Surrender The headquarters garrison at the GPO was forced to evacuate after days of shelling when a fire caused by the shells spread to the GPO. Connolly had been incapacitated by a bullet wound to the ankle and had passed command on to Pearse. The O'Rahilly was killed in a sortie from the GPO. They tunnelled through the walls of the neighbouring buildings in order to evacuate the Post Office without coming under fire and took up a new position in 16 Moore Street. The young Seán McLoughlin was given military command and planned a breakout, but Pearse realised this plan would lead to further loss of civilian life. On Saturday 29 April, from this new headquarters, Pearse issued an order for all companies to surrender. Pearse surrendered unconditionally to Brigadier-General Lowe.
The surrender document read: The other posts surrendered only after Pearse's surrender order, carried by nurse Elizabeth O'Farrell, reached them. Sporadic fighting, therefore, continued until Sunday, when word of the surrender was brought to the other rebel garrisons. Command of British forces had passed from Lowe to General John Maxwell, who arrived in Dublin just in time to take the surrender. Maxwell was made temporary military governor of Ireland. The Rising outside Dublin Irish Volunteer units mobilised on Easter Sunday in several places outside of Dublin, but because of Eoin MacNeill's countermanding order, most of them returned home without fighting. In addition, because of the interception of the German arms aboard the Aud, the provincial Volunteer units were very poorly armed. In the south, around 1,200 Volunteers commanded by Tomás Mac Curtain mustered on the Sunday in Cork, but they dispersed on Wednesday after receiving nine contradictory orders by dispatch from the Volunteer leadership in Dublin. At their Sheares Street headquarters, some of the Volunteers engaged in a standoff with British forces. Much to the anger of many Volunteers, Mac Curtain, under pressure from Catholic clergy, agreed to surrender his men's arms to the British. The only violence in Cork occurred when the RIC attempted to raid the home of the Kent family. The Kent brothers, who were Volunteers, engaged in a three-hour firefight with the RIC. An RIC officer and one of the brothers were killed, while another brother was later executed. In the north, Volunteer companies were mobilised in County Tyrone at Coalisland (including 132 men from Belfast led by IRB President Dennis McCullough) and Carrickmore, under the leadership of Patrick McCartan. They also mobilised at Creeslough, County Donegal under Daniel Kelly and James McNulty. However, in part because of the confusion caused by the countermanding order, the Volunteers in these locations dispersed without fighting. Fingal In Fingal (north County Dublin), about 60 Volunteers mobilised near Swords. They belonged to the 5th Battalion of the Dublin Brigade (also known as the Fingal Battalion), and were led by Thomas Ashe and his second in command, Richard Mulcahy. Unlike the rebels elsewhere, the Fingal Battalion successfully employed guerrilla tactics. They set up camp and Ashe split the battalion into four sections: three would undertake operations while the fourth was kept in reserve, guarding camp and foraging for food. The Volunteers moved against the RIC barracks in Swords, Donabate and Garristown, forcing the RIC to surrender and seizing all the weapons. They also damaged railway lines and cut telegraph wires. The railway line at Blanchardstown was bombed to prevent a troop train reaching Dublin. This derailed a cattle train, which had been sent ahead of the troop train. The only large-scale engagement of the Rising, outside Dublin city, was at Ashbourne, County Meath. On Friday, about 35 Fingal Volunteers surrounded the Ashbourne RIC barracks and called on it to surrender, but the RIC responded with a volley of gunfire. A firefight followed, and the RIC surrendered after the Volunteers attacked the building with a homemade grenade. Before the surrender could be taken, up to sixty RIC men arrived in a convoy, sparking a five-hour gun battle, in which eight RIC men were killed and 18 wounded. Two Volunteers were also killed and five wounded, and a civilian was fatally shot. The RIC surrendered and were disarmed.
Ashe let them go after warning them not to fight against the Irish Republic again. Ashe's men camped at Kilsalaghan near Dublin until they received orders to surrender on Saturday. The Fingal Battalion's tactics during the Rising foreshadowed those of the IRA during the War of Independence that followed. Volunteer contingents also mobilised nearby in counties Meath and Louth but proved unable to link up with the North Dublin unit until after it had surrendered. In County Louth, Volunteers shot dead an RIC man near the village of Castlebellingham on 24 April, in an incident in which 15 RIC men were also taken prisoner. Enniscorthy In County Wexford, 100–200 Volunteers—led by Robert Brennan, Séamus Doyle and Seán Etchingham—took over the town of Enniscorthy on Thursday 27 April until Sunday. Volunteer officer Paul Galligan had cycled 200 km from rebel headquarters in Dublin with orders to mobilise. They blocked all roads into the town and made a brief attack on the RIC barracks, but chose to blockade it rather than attempt to capture it. They flew the tricolour over the Athenaeum building, which they had made their headquarters, and paraded uniformed in the streets. They also occupied Vinegar Hill, where the United Irishmen had made a last stand in the 1798 rebellion. The public largely supported the rebels and many local men offered to join them. By Saturday, up to 1,000 rebels had been mobilised, and a detachment was sent to occupy the nearby village of Ferns. In Wexford, the British assembled a column of 1,000 soldiers (including the Connaught Rangers), two field guns and a 4.7 inch naval gun on a makeshift armoured train. On Sunday, the British sent messengers to Enniscorthy, informing the rebels of Pearse's surrender order. However, the Volunteer officers were sceptical. Two of them were escorted by the British to Arbour Hill Prison, where Pearse confirmed the surrender order. Galway In County Galway, 600–700 Volunteers mobilised on Tuesday under Liam Mellows. His plan was to "bottle up the British garrison and divert the British from concentrating on Dublin". However, his men were poorly armed, with only 25 rifles, 60 revolvers, 300 shotguns and some homemade grenades – many of them only had pikes. Most of the action took place in a rural area to the east of Galway city. They made unsuccessful attacks on the RIC barracks at Clarinbridge and Oranmore, captured several officers, and bombed a bridge and railway line, before taking up position near Athenry. There was also a skirmish between rebels and an RIC mobile patrol at Carnmore crossroads. A constable, Patrick Whelan, was shot dead after he had called to the rebels: "Surrender, boys, I know ye all". On Wednesday, arrived in Galway Bay and shelled the countryside on the northeastern edge of Galway. The rebels retreated southeast to Moyode, an abandoned country house and estate. From here they set up lookout posts and sent out scouting parties. On Friday, landed 200 Royal Marines and began shelling the countryside near the rebel position. The rebels retreated further south to Limepark, another abandoned country house. Deeming the situation to be hopeless, they dispersed on Saturday morning. Many went home and were arrested following the Rising, while others, including Mellows, went "on the run". By the time British reinforcements arrived in the west, the Rising there had already disintegrated. 
Limerick and Clare In County Limerick, 300 Irish Volunteers assembled at Glenquin Castle near Killeedy, but they did not take any military action. In County Clare, Micheal Brennan marched with 100 Volunteers (from Meelick, Oatfield, and Cratloe) to the River Shannon on Easter Monday to await orders from the Rising leaders in Dublin, and weapons from the expected Casement shipment. However, neither arrived and no actions were taken. Casualties The Easter Rising resulted in at least 485 deaths, according to the Glasnevin Trust. Of those killed:
260 (about 54%) were civilians
126 (about 26%) were U.K. forces (120 U.K. military personnel, 5 Volunteer Training Corps members, and one Canadian soldier):
    35 – Irish Regiments: 11 – Royal Dublin Fusiliers; 10 – Royal Irish Rifles; 9 – Royal Irish Regiment; 2 – Royal Inniskilling Fusiliers; 2 – Royal Irish Fusiliers; 1 – Leinster Regiment
    74 – British Regiments: 29 – Sherwood Foresters; 15 – South Staffordshire; 2 – North Staffordshire; 1 – Royal Field Artillery; 4 – Royal Engineers; 5 – Army Service Corps; 10 – Lancers; 7 – 8th Hussars; 2 – 2nd King Edward's Horse; 3 – Yeomanry
    1 – Royal Navy
82 (about 16%) were Irish rebel forces (64 Irish Volunteers, 15 Irish Citizen Army and 3 Fianna Éireann)
17 (about 4%) were police: 14 – Royal Irish Constabulary; 3 – Dublin Metropolitan Police
More than 2,600 were wounded, including at least 2,200 civilians and rebels, at least 370 British soldiers and 29 policemen. All 16 police fatalities and 22 of the British soldiers killed were Irishmen. About 40 of those killed were children (under 17 years old), four of whom were members of the rebel forces. The number of casualties each day steadily rose, with 55 killed on Monday and 78 killed on Saturday. The British Army suffered their biggest losses in the Battle of Mount Street Bridge on Wednesday, when at least 30 soldiers were killed. The rebels also suffered their biggest losses on that day. The RIC suffered most of their casualties in the Battle of Ashbourne on Friday. The majority of the casualties, both killed and wounded, were civilians. Most of the civilian casualties and most of the casualties overall were caused by the British Army. This was due to the British using artillery, incendiary shells and heavy machine guns in built-up areas, as well as their "inability to discern rebels from civilians". One Royal Irish Regiment officer recalled, "they regarded, not unreasonably, everyone they saw as an enemy, and fired at anything that moved". Many other civilians were killed when caught in the crossfire. Both sides, British and rebel, also shot civilians deliberately on occasion: for not obeying orders (such as to stop at checkpoints), for assaulting or attempting to hinder them, and for looting. There were also instances of British troops killing unarmed civilians out of revenge or frustration: notably in the North King Street Massacre, where fifteen were killed, and at Portobello Barracks, where six were shot. Furthermore, there were incidents of friendly fire. On 29 April, the Royal Dublin Fusiliers under Company Quartermaster Sergeant Robert Flood shot dead two British officers and two Irish civilian employees of the Guinness Brewery after he decided they were rebels. Flood was court-martialled for murder but acquitted. According to the historian Fearghal McGarry, the rebels attempted to avoid needless bloodshed. Desmond Ryan stated that Volunteers were told "no firing was to take place except under orders or to repel attack".
Aside from the engagement at Ashbourne, policemen and unarmed soldiers were not systematically targeted, and a large group of policemen was allowed to stand at Nelson's Pillar throughout Monday. McGarry writes that the Irish Citizen Army "were more ruthless than Volunteers when it came to shooting policemen" and attributes this to the "acrimonious legacy" of the Dublin Lock-out. The vast majority of the Irish casualties were buried in Glasnevin Cemetery in the aftermath of the fighting. British families came to Dublin Castle in May 1916 to reclaim the bodies of British soldiers, and funerals were arranged. Soldiers whose bodies were not claimed were given military funerals in Grangegorman Military Cemetery. Aftermath Arrests and executions General Maxwell quickly signalled his intention "to arrest all dangerous Sinn Feiners", including "those who have taken an active part in the movement although not in the present rebellion", reflecting the popular belief that Sinn Féin, a separatist organisation that was neither militant nor republican, was behind the Rising. A total of 3,430 men and 79 women were arrested, including 425 people for looting. A series of courts-martial began on 2 May, in which 187 people were tried, most of them at Richmond Barracks. The president of the courts-martial was Charles Blackader. Controversially, Maxwell decided that the courts-martial would be held in secret and without a defence, which Crown law officers later ruled to have been illegal. Some of those who conducted the trials had commanded British troops involved in suppressing the Rising, a conflict of interest that the Military Manual prohibited. Only one of those tried by courts-martial was a woman, Constance Markievicz, who was also the only woman to be kept in solitary confinement. Ninety were sentenced to death. Fifteen of those (including all seven signatories of the Proclamation) had their sentences confirmed by Maxwell and fourteen were executed by firing squad at Kilmainham Gaol between 3 and 12 May. Among them was the seriously wounded Connolly, who was shot while tied to a chair because of his shattered ankle. Maxwell stated that only the "ringleaders" and those proven to have committed "coldblooded murder" would be executed. However, the evidence presented was weak, and some of those executed were not leaders and did not kill anyone: Willie Pearse described himself as "a personal attaché to my brother, Patrick Pearse"; John MacBride had not even been aware of the Rising until it began, but had fought against the British in the Boer War fifteen years before; Thomas Kent did not come out at all—he was executed for the killing of a police officer during the raid on his house the week after the Rising. The most prominent leader to escape execution was Éamon de Valera, Commandant of the 3rd Battalion, who did so partly because of his American birth. Most of the executions took place over a ten-day period:
3 May: Patrick Pearse, Thomas MacDonagh and Thomas Clarke
4 May: Joseph Plunkett, William Pearse, Edward Daly and Michael O'Hanrahan
5 May: John MacBride
8 May: Éamonn Ceannt, Michael Mallin, Seán Heuston and Con Colbert
12 May: James Connolly and Seán Mac Diarmada
As the executions went on, the Irish public grew increasingly hostile towards the British and sympathetic to the rebels. After the first three executions, John Redmond, leader of the moderate Irish Parliamentary Party, said in the British Parliament that the rising "happily, seems to be over.
It has been dealt with with firmness, which was not only right, but it was the duty of the Government to so deal with it". However, he urged the Government "not to show undue hardship or severity to the great masses of those who are implicated". As the executions continued, Redmond pleaded with Asquith to stop them, warning that "if more executions take place in Ireland, the position will become impossible for any constitutional party". Ulster Unionist Party leader Edward Carson expressed similar views. Redmond's deputy, John Dillon, made an impassioned speech in parliament, saying "thousands of people […] who ten days ago were bitterly opposed to the whole of the Sinn Fein movement and to the rebellion, are now becoming infuriated against the Government on account of these executions". He said
It has been dealt with with firmness, which was not only right, but it was the duty of the Government to so deal with it". However, he urged the Government "not to show undue hardship or severity to the great masses of those who are implicated". As the executions continued, Redmond pleaded with Asquith to stop them, warning that "if more executions take place in Ireland, the position will become impossible for any constitutional party". Ulster Unionist Party leader Edward Carson expressed similar views. Redmond's deputy, John Dillon, made an impassioned speech in parliament, saying "thousands of people […] who ten days ago were bitterly opposed to the whole of the Sinn Fein movement and to the rebellion, are now becoming infuriated against the Government on account of these executions". He said "it is not murderers who are being executed; it is insurgents who have fought a clean fight, a brave fight, however misguided". Dillon was heckled by English MPs. The British Government itself had also become concerned at the reaction to the executions, and at the way the courts-martial were being carried out. Asquith had warned Maxwell that "a large number of executions would […] sow the seeds of lasting trouble in Ireland". After Connolly's execution, Maxwell bowed to pressure and had the other death sentences commuted to penal servitude. Most of the people arrested were subsequently released; however, under Regulation 14B of the Defence of the Realm Act 1914, 1,836 men were interned at internment camps and prisons in England and Wales. Many of them, like Arthur Griffith, had little or nothing to do with the Rising. Camps such as Frongoch internment camp became "Universities of Revolution" where future leaders including Michael Collins, Terence McSwiney and J. J. O'Connell began to plan the coming struggle for independence. Casement was tried in London for high treason and hanged at Pentonville Prison on 3 August. British atrocities After the Rising, claims of atrocities carried out by British troops began to emerge. Although they did not receive as much attention as the executions, they sparked outrage among the Irish public and were raised by Irish MPs in Parliament. One incident was the 'Portobello killings'. On Tuesday 25 April, Dubliner Francis Sheehy-Skeffington, a pacifist nationalist activist, had been arrested by British soldiers. Captain John Bowen-Colthurst then took him with a British raiding party as a hostage and human shield. On Rathmines Road he stopped a boy named James Coade, whom he shot dead. His troops then destroyed a tobacconist's shop with grenades and seized journalists Thomas Dickson and Patrick MacIntyre. The next morning, Colthurst had Skeffington and the two journalists shot by firing squad in Portobello Barracks. The bodies were then buried there. Later that day he shot a Labour Party councillor, Richard O'Carroll. When Major Sir Francis Vane learned of the killings he telephoned his superiors in Dublin Castle, but no action was taken. Vane informed Herbert Kitchener, who told Maxwell to arrest Colthurst, but Maxwell refused. Colthurst was eventually arrested and court-martialled in June. He was found guilty of murder but insane, and detained for twenty months at Broadmoor. Public and political pressure led to a public inquiry, which reached similar conclusions. Major Vane was discharged "owing to his action in the Skeffington murder case". The other incident was the 'North King Street Massacre'.
On the night of 28–29 April, British soldiers of the South Staffordshire Regiment, under Colonel Henry Taylor, had burst into houses on North King Street and killed fifteen male civilians whom they accused of being rebels. The soldiers shot or bayoneted the victims, then secretly buried some of them in cellars or back yards after robbing them. The area saw some of the fiercest fighting of the Rising and the British had taken heavy casualties for little gain. Maxwell attempted to excuse the killings and argued that the rebels were ultimately responsible. He claimed that "the rebels wore no uniform" and that the people of North King Street were rebel sympathisers. Maxwell concluded that such incidents "are absolutely unavoidable in such a business as this" and that "under the circumstance the troops [...] behaved with the greatest restraint". A private brief, prepared for the Prime Minister, said the soldiers "had orders not to take any prisoners", which they took to mean that they were to shoot any suspected rebel. The City Coroner's inquest found that soldiers had killed "unarmed and unoffending" residents. The military court of inquiry ruled that no specific soldiers could be held responsible, and no action was taken. These killings, and the British response to them, helped sway Irish public opinion against the British. Inquiry A Royal Commission was set up to enquire into the causes of the Rising. It began hearings on 18 May under the chairmanship of Lord Hardinge of Penshurst. The Commission heard evidence from Sir Matthew Nathan, Augustine Birrell, Lord Wimborne, Sir Neville Chamberlain (Inspector-General of the Royal Irish Constabulary), General Lovick Friend, Major Ivor Price of Military Intelligence and others. The report, published on 26 June, was critical of the Dublin administration, saying that "Ireland for several years had been administered on the principle that it was safer and more expedient to leave the law in abeyance if collision with any faction of the Irish people could thereby be avoided." Birrell and Nathan had resigned immediately after the Rising. Wimborne resisted the pressure to resign, but was recalled to London by Asquith. He was re-appointed in July 1916. Chamberlain also resigned. Reaction of the Dublin public At first, many Dubliners were bewildered by the outbreak of the Rising. James Stephens, who was in Dublin during the week, thought, "None of these people were prepared for Insurrection. The thing had been sprung on them so suddenly they were unable to take sides." There was great hostility towards the Volunteers in some parts of the city. Historian Keith Jeffery noted that most of the opposition came from people whose relatives were in the British Army and who depended on their army allowances. Those most openly hostile to the Volunteers were the "separation women" (so-called because they were paid "separation money" by the British government), whose husbands and sons were fighting in the British Army in the First World War. There was also hostility from unionists. Supporters of the Irish Parliamentary Party also felt the rebellion was a betrayal of their party. When occupying positions in the South Dublin Union and Jacob's factory, the rebels got involved in physical confrontations with civilians who tried to tear down the rebel barricades and prevent them from taking over buildings. The Volunteers shot and clubbed a number of civilians who assaulted them or tried to dismantle their barricades.
The death and destruction caused by the Rising, as well as the disruption of food supplies, also contributed to the antagonism toward the rebels. After the surrender, the Volunteers were hissed at, pelted with refuse, and denounced as "murderers" and "starvers of the people". Volunteer Robert Holland, for example, remembered being "subjected to very ugly remarks and cat-calls from the poorer classes" as they marched to surrender. He also reported being abused by people he knew as he was marched through the Kilmainham area into captivity, and said the British troops saved them from being manhandled by the crowd. However, some Dubliners expressed support for the rebels. Canadian journalist and writer Frederick Arthur McKenzie wrote that in poorer areas, "there was a vast amount of sympathy with the rebels, particularly after the rebels were defeated". He wrote of crowds cheering a column of rebel prisoners as it passed, with one woman remarking "Shure, we cheer them. Why shouldn't we? Aren't they our own flesh and blood?". At Boland's Mill, the defeated rebels were met with a large crowd, "many weeping and expressing sympathy and sorrow, all of them friendly and kind". Other onlookers were sympathetic but watched in silence. Christopher M. Kennedy notes that "those who sympathised with the rebels would, out of fear for their own safety, keep their opinions to themselves". Áine Ceannt witnessed British soldiers arresting a woman who cheered the captured rebels. An RIC District Inspector's report stated: "Martial law, of course, prevents any expression of it; but a strong undercurrent of disloyalty exists". Thomas Johnson, the Labour Party leader, thought there was "no sign of sympathy for the rebels, but general admiration for their courage and strategy". The aftermath of the Rising, and in particular the British reaction to it, helped sway a large section of Irish nationalist opinion away from hostility or ambivalence and towards support for the rebels of Easter 1916. Dublin businessman and Quaker James G. Douglas, for example, hitherto a Home Ruler, wrote that his political outlook changed radically during the course of the Rising because of the British military occupation of the city, and that he became convinced that parliamentary methods would not be enough to expel the British from Ireland. Rise of Sinn Féin A meeting called by Count Plunkett on 19 April 1917 led to the formation of a broad political movement under the banner of Sinn Féin, which was formalised at the Sinn Féin Ard Fheis of 25 October 1917. The Conscription Crisis of 1918 further intensified public support for Sinn Féin before the general elections to the British Parliament on 14 December 1918, which resulted in a landslide victory for Sinn Féin, which won 73 of the 105 seats. Its Members of Parliament (MPs) gathered in Dublin on 21 January 1919 to form Dáil Éireann and adopt the Declaration of Independence. Legacy Shortly after the Easter Rising, poet Francis Ledwidge wrote "O’Connell Street" and "Lament for the Poets of 1916", which both express his sense of loss and his holding of the same "dreams" as the Easter Rising's Irish Republicans. He also went on to write "Lament for Thomas MacDonagh" for his fallen friend and fellow Irish Volunteer. A few months after the Easter Rising, W.B. Yeats commemorated some of the fallen figures of the Irish Republican movement, as well as his torn emotions regarding these events, in the poem Easter, 1916.
Some of the survivors of the Rising went on to become leaders of the independent Irish state. Those who were executed were venerated by many as martyrs; their graves in Dublin's former military prison of Arbour Hill became a national monument and the Proclamation text was taught in schools. A commemorative military parade was held each year on Easter Sunday. In 1935, Éamon de Valera unveiled a statue of the mythical Irish hero Cú Chulainn, sculpted by Oliver Sheppard, at the General Post Office as part of the Rising commemorations that year – it is often seen as an important symbol of martyrdom in remembrance of the 1916 rebels. Memorials to the heroes of the Rising are to be found in other Irish cities, such as Limerick. The 1916 Medal was issued in 1941 to people with recognised military service during the Rising. The parades culminated in a huge national celebration on the 50th anniversary of the Rising in 1966, at which the government issued medals to survivors who had taken part in the Rising. RTÉ, the Irish national broadcaster, made a series of commemorative programmes for the 1966 anniversary of the Rising as one of its first major undertakings. Roibéárd Ó Faracháin, head of programming, said, "While still seeking historical truth, the emphasis will be on homage, on salutation." At the same time, CIÉ, the Republic of Ireland's railway operator, renamed several of its major stations after republicans who played key roles in the Easter Rising. Ireland's first commemorative coin was also issued in 1966 to pay tribute to the Easter Rising. It was valued at 10 shillings, giving it the highest denomination of any pre-decimal coin issued by the country. The coin featured a bust of Patrick Pearse on the obverse and an image of the statue of Cú Chulainn in the GPO on the reverse. Its edge inscription reads "Éirí Amach na Cásca 1916", which translates to "1916 Easter Rising". Due to their 83.5% silver content, many of the coins were melted down shortly after issue. A €2 coin was also issued by Ireland in 2016, featuring the statue of Hibernia above the GPO, to commemorate the Rising's centenary. With the outbreak of the Troubles in Northern Ireland, the government, academics and the media began to revise the country's militant past, and particularly the Easter Rising. The coalition government of 1973–77, in particular the Minister for Posts and Telegraphs, Conor Cruise O'Brien, began to promote the view that the violence of 1916 was essentially no different from the violence then taking place in the streets of Belfast and Derry. O'Brien and others asserted that the Rising was doomed to military defeat from the outset, and that it failed to account for the determination of Ulster Unionists to remain in the United Kingdom. Irish republicans continue to venerate the Rising and its leaders with murals in republican areas of Belfast and other towns celebrating the actions of Pearse and his comrades, and annual parades in remembrance of the Rising. The Irish government, however, discontinued its annual parade in Dublin in the early 1970s, and in 1976 it took the unprecedented step of proscribing (under the Offences against the State Act) a 1916 commemoration ceremony at the GPO organised by Sinn Féin and the Republican Commemoration Committee. A Labour Party TD, David Thornley, embarrassed the government (of which Labour was a member) by appearing on the platform at the ceremony.
The name of the extant genus and the family honours Danish zoologist Daniel Eschricht. Taxonomy One morphological analysis found that eschrichtiids and Cetotheriidae (Cetotherium, Mixocetus and Metopocetus) form a monophyletic sister group of Balaenopteridae. A specimen from the Late Pliocene of northern Italy, first named "Cetotherium" gastaldii and later renamed "Balaenoptera" gastaldii, was subsequently identified as a basal eschrichtiid and recombined to Eschrichtioides gastaldii. The gray whale was also found to be phylogenetically distinct from rorquals, supporting the conclusion of previous morphological studies that the evolution of gulp feeding was a single event in the rorqual lineage. In contrast, multiple later studies found the gray whale to fall within the family Balaenopteridae, more derived than the minke whales but basal to all other members of the family, and reclassified it accordingly; the American Society of Mammalogists has followed this classification. Evolution Fossils of Eschrichtiidae have been found in all major oceanic basins in the Northern Hemisphere, and the family is believed to date back to the Late Miocene. Today, gray whales are only present in the northern Pacific, but a population was also present in the northern Atlantic.
the English. They arranged a treaty at Leicester which surrendered the Five Boroughs of Lincoln, Leicester, Nottingham, Stamford and Derby to Guthfrithson. This was the first serious setback for the English since Edward the Elder began to roll back Viking conquests in the early tenth century, and it was described by the historian Frank Stenton as "an ignominious surrender". Guthfrithson had coins struck at York with the lower Viking weight rather than the English standard. Guthfrithson died in 941, allowing Edmund to reverse his losses. In 942 he recovered the Five Boroughs, and his victory was considered so significant that it was commemorated by a poem in the Anglo-Saxon Chronicle:
Here King Edmund, lord of the English,
guardian of kinsmen, beloved instigator of deeds,
conquered Mercia, bounded by
The Dore, Whitwell Gap and Humber river,
broad ocean-stream; five boroughs:
Leicester and Lincoln, and Nottingham,
likewise Stamford also and Derby.
Earlier the Danes were under Northmen,
subjected by force in heathens' captive fetters,
for a long time until they were ransomed again,
to the honour of Edward's son,
protector of warriors, King Edmund.
Like other tenth-century poems in the Anglo-Saxon Chronicle, this one shows a concern with English nationalism and the West Saxon royal dynasty, and in this case displays the Christian English and Danes as united under Edmund in their victorious opposition to Norse (Norwegian) pagans. Stenton commented that the poem brings out the highly significant fact that the Danes of eastern Mercia, after fifteen years of Æthelstan's government, had come to regard themselves as the rightful subjects of the English king. Above all, it emphasises the antagonism between Danes and Norsemen, which is often ignored by modern writers, but underlies the whole history of England in this period. It is the first political poem in the English language, and its author understood political realities. However, Williams is sceptical, arguing that the poem is not contemporary, and that it is doubtful whether contemporaries saw their situation in those terms. In the same year Edmund granted large estates in northern Mercia to a leading nobleman, Wulfsige the Black, continuing the policy of his father of granting land in the Danelaw to supporters in order to give them an interest in resisting the Vikings. Guthfrithson was succeeded as king of York by his cousin, Anlaf Sihtricson, who was baptised in 943 with Edmund as his godfather, suggesting that he accepted West Saxon overlordship. Sihtricson issued his own coinage, but he clearly had rivals in York as coins were also issued there in two other names: Ragnall, a brother of Anlaf Guthfrithson who also accepted baptism under Edmund's sponsorship, and an otherwise unknown Sihtric. The coins of all three men were issued with the same design, which may suggest joint authority. In 944 Edmund expelled the Viking rulers of York and seized control of the city with the assistance of Archbishop Wulfstan, who had previously supported the Vikings, and of Æthelmund, who had been appointed an ealdorman in Mercia by Edmund in 940. When Edmund died, his successor Eadred faced further revolts in Northumbria, which were not finally defeated until 954. In Miller's view, Edmund's reign "shows clearly that although Æthelstan had conquered Northumbria, it was still not really part of a united England, nor would it be until the end of Eadred's reign".
The Northumbrians' repeated revolts show that they retained separatist ambitions, which they only abandoned under pressure from successive southern kings. Unlike Æthelstan, Edmund and Eadred rarely claimed jurisdiction over the whole of Britain, although each did sometimes describe himself as 'king of the English' even at times when he did not control Northumbria. In charters Edmund sometimes even called himself by the lesser title of 'king of the Anglo-Saxons' in 940 and 942, and only claimed to be king of all Britain once he had gained full control over Northumbria in 945. He never described himself as Rex Totius Britanniae on his coinage. Relations with other British kingdoms Edmund inherited overlordship over the kings of Wales from Æthelstan, but Idwal Foel, king of Gwynedd in north Wales, apparently took advantage of Edmund's early weakness to withhold fealty and may have supported Anlaf Guthfrithson, as according to the Annales Cambriæ he was killed by the English in 942 and his kingdom was then conquered by Hywel Dda, the king of Deheubarth in south Wales. Attestations of Welsh kings to English charters appear to have been rare compared with those in Æthelstan's reign, but in the historian David Dumville's view there is no reason to doubt that Edmund retained his overlordship over the Welsh kings. Attestations became more common again after Eadred succeeded Edmund. In a charter of 944 disposing of land in Devon, Edmund is styled "king of the English and ruler of this British province", suggesting that the former British kingdom of Dumnonia was still not regarded as fully integrated into England, although the historian Simon Keynes "suspects some 'local' interference" in the wording of Edmund's title. By 945 both Scotland and Strathclyde had kings who had assumed the throne since Brunanburh, and it is likely that whereas Scotland allied with England, Strathclyde held to its alliance with the Vikings. In that year Edmund ravaged Strathclyde. According to the thirteenth-century chronicler Roger of Wendover, the invasion was supported by Hywel Dda, and Edmund had two sons of the king of Strathclyde blinded, perhaps to deprive their father of throneworthy heirs. Edmund then gave the kingdom to Malcolm I of Scotland in return for a pledge to defend it on land and on sea, a decision variously interpreted by historians. Dumville and the historian of Wales Thomas Charles-Edwards regard it as granting Strathclyde to the Scottish king in return for an acknowledgement of Edmund's overlordship, whereas Williams thinks it probably means that he agreed to Malcolm's overlordship of the area in return for an alliance against the Dublin Vikings, and Stenton and Miller see it as recognition by Edmund that Northumbria was the northern limit of Anglo-Saxon England. According to the hagiography of a Gaelic monk called Cathróe, he travelled through England on his journey from Scotland to the Continent; Edmund summoned him to court and Oda, Archbishop of Canterbury, then ceremonially conducted him to his ship at Lympne. Travelling clerics played an important part in the circulation of manuscripts and ideas in this period, and Cathróe is unlikely to have been the only Celtic cleric at Edmund's court. Relations with Continental Europe Edmund inherited strong Continental contacts from Æthelstan's cosmopolitan court, and these were enhanced by their sisters' marriages to foreign kings and princes.
Edmund carried on his brother's Continental policies and maintained his alliances, especially with his nephew King Louis IV of West Francia and Otto I, King of East Francia and future Holy Roman Emperor. Louis was both nephew and brother-in-law of Otto, while Otto and Edmund were brothers-in-law. There were almost certainly extensive diplomatic contacts between Edmund and Continental rulers which have not been recorded, but it is known that Otto sent delegations to Edmund's court. In the early 940s some Norman lords sought the help of the Danish prince Harald against Louis, and in 945 Harald captured Louis and handed him to Hugh the Great, Duke of the Franks, who kept him prisoner. Edmund and Otto both protested and demanded his immediate release, but this only took place after the town of Laon was surrendered to Hugh. Edmund's name is in the confraternity book of Pfäfers Abbey in Switzerland, perhaps at the request of Archbishop Oda when staying there on his way to or from Rome to collect his pallium. As with the diplomatic relations, this probably represents rare surviving evidence of extensive contacts between English and Continental churchmen which continued from Æthelstan's reign. Administration Edmund inherited his brother's interests and leading advisers, such as Æthelstan Half-King, ealdorman of East Anglia, Ælfheah the Bald, bishop of Winchester, and Oda, bishop of Ramsbury, who was appointed as Archbishop of Canterbury by Edmund in 941. Æthelstan Half-King first witnessed a charter as an ealdorman in 932, and within three years of Edmund's accession he had been joined by two of his brothers as ealdormen; their territories covered more than half of England and his wife fostered the future King Edgar. The historian Cyril Hart compares the brothers' power during Edmund's reign to that of the Godwins a century later. Edmund's mother, Eadgifu, who had been in eclipse during her step-son's reign, was also very influential. For the first half of 940 there were no changes in the attestations of ealdormen compared with the end of Æthelstan's reign, but later in the year the number of ealdormen was doubled from four to eight, with three of the new ealdormen covering Mercian districts. There was an increased reliance on the family of Æthelstan Half-King, which was enriched by grants in 942. The appointments may have been part of Edmund's measures to deal with Anlaf's incursion. The family of Ealhhelm, ealdorman of Mercia, was also influential. There were further major changes in personnel in 943, perhaps reflecting the continuing growth in influence of Æthelstan Half-King and other East Anglians and Mercians. The historian Alaric Trousdale sees 943 as the turning point in Edmund's reign. Over the previous four years he had appointed five new ealdormen and largely replaced the old guard from Æthelstan's reign, while growing closer to his Mercian and East Anglian subjects and breaking down factional barriers between regions. The charter evidence shows him constantly reassessing his relationship with his great men and replacing many of them on two separate occasions, with early promotions of Mercians and East Anglians to help deal with the Viking threat, while after 943 there was more of a focus on the administration of Wessex. Eadgifu and Eadred attested many of Edmund's charters, showing a high degree of family cooperation; initially Eadgifu attested first, but from sometime in late 943 or early 944 Eadred took precedence, perhaps reflecting his growing authority.
Eadgifu attested around one third, always as regis mater (king's mother), including all grants to religious institutions and individuals. Eadred attested over half of his brother's charters. Eadgifu's and Eadred's prominence in charter attestations is unparalleled by any other West Saxon king's mother and male relative. Charters The period from around 925 to 975 was the golden age of Anglo-Saxon royal charters, when they were at their peak as instruments of royal government, and the scribes who drew up most of Edmund's charters constituted a royal secretariat which he inherited from his brother. From 928 until 935 charters were produced by the very learned scribe designated by scholars as Æthelstan A in a highly elaborate style. Keynes comments: "It is only by dwelling on the glories and complexities of the diplomas drafted and written by Æthelstan A that one can appreciate the elegant simplicity of the diplomas that followed." A scribe known as Edmund C wrote an inscription in a gospel book (BL Cotton Tiberius A. ii folio 15v) during Æthelstan's reign and wrote charters for Edmund and Eadred between 944 and 949.
Four of Edmund's charters are part of a group, dating mainly to Eadred's reign, called the "alliterative charters". They were drafted by a very learned scholar, almost certainly someone in the circle of Cenwald, Bishop of Worcester, or perhaps the bishop himself. These charters are characterised both by a high proportion of words starting with the same letter and by the use of unusual words. Ben Snook describes the charters as "impressive literary works", and like much of the writing of the period their style displays the influence of Aldhelm, a leading scholar and early eighth-century bishop of Sherborne. Coinage The only coin in common use in the tenth century was the penny. The main coin designs in Edmund's reign were H (Horizontal) types, with a cross or other decoration on the obverse surrounded by a circular inscription including the king's name, and the moneyer's name horizontally on the reverse. There were also substantial numbers of BC (Bust Crowned) types in East Anglia and the Danish shires; these had a portrait of the king, often crudely drawn, on the obverse. For a period in Æthelstan's reign many coins showed the mint town, but this had become rare by the time of Edmund's accession, except in Norwich, where it continued during the 940s for BC types. After the reign of Edward the Elder there was a slight decline in the weight of coins under Æthelstan, and the deterioration increased after around 940, continuing until Edgar's reform of the coinage in around 973. However, based on a very small sample, there is no evidence of a decline in the silver content under Edmund. His reign saw an increase in regional diversity of the coinage which lasted for twenty years until a return to relative unity of design early in Edgar's reign. Legislation Three law codes of Edmund survive, carrying on Æthelstan's tradition of legal reform. They are called I Edmund, II Edmund and III Edmund. The order in which they were issued is clear, but not the dates of issue. I Edmund is concerned with ecclesiastical matters, while the other codes deal with public order. I Edmund was promulgated at a council in London convened by Edmund and attended by archbishops Oda and Wulfstan. The code is very similar to "Constitutions" previously promulgated by Oda. Uncelibate clerics were threatened with the loss of property and forbidden burial in consecrated ground, and there were also provisions regarding church dues and the restoration of church property. A clause forbidding a murderer from coming into the neighbourhood of the king, unless he had done penance for his crime, reflected an increasing emphasis on the sanctity of kingship. Edmund was one of the few Anglo-Saxon kings to promulgate laws concerned with sorcery and idolatry, and the code condemns false witness and the use of magical drugs. The association between perjury and the use of drugs in magic was traditional, probably because they both involved the breaking of a religious oath. In II Edmund, the king and his counsellors are stated to be "greatly distressed by the manifold illegal deeds of violence which are in our midst", and aimed to promote "peace and concord". The main focus is on regulating and controlling blood feuds. The authorities (witan) are required to put a stop to vendettas following murders: the killer should instead pay wergeld (compensation) to the relatives of the victim. If no wergeld is paid, the killer has to bear the feud, but attacks on him are forbidden in churches and royal manor houses. 
If the killer's kin abandon him and refuse to contribute to a wergeld and to protect him, then it is the king's will that they are to be exempt from the feud: any of the victim's kin taking vengeance on them shall incur the hostility of the king and his friends and shall lose all their possessions. In the view of the historian Dorothy Whitelock, the need for legislation to control the feud was partly due to the influx of Danish settlers who believed that it was more manly to pursue a vendetta than to settle a dispute by accepting compensation. Several Scandinavian loan words are first recorded in this code, such as hamsocn, the crime of attacking a homestead; the penalty is loss of all the offender's property, while the king decides whether he also loses his life. In contrast to his concern about the level of violence, Edmund congratulated his people on their success in suppressing thefts. The code encourages greater local initiative in upholding the law, while emphasising Edmund's royal dignity and authority. The relationship between Anglo-Saxon kings and their leading men was personal; kings were lords and protectors in return for pledges of loyalty and obedience, and this is spelled out in terms based on Carolingian legislation for the first time in III Edmund, issued at Colyton in Devon. This requires that all shall swear in the name of the Lord, before whom that holy thing is holy, that they will be faithful to King Edmund, even as it behoves a man to be faithful to his lord.
that releases or "gives out" energy, usually in the form of heat and sometimes as electrical energy. Thus in each term (endothermic and exothermic) the prefix refers to where heat (or electrical energy) goes as the process occurs. In chemistry Due to bonds breaking and forming during various processes (changes in state, chemical reactions), there is usually a change in energy. If the energy of the forming bonds is greater than the energy of the breaking bonds, then energy is released. This is known as an exothermic reaction. However, if more energy is needed to break the bonds than the energy being released, energy is taken up. Therefore, it is an endothermic reaction. Details Whether a process can occur spontaneously depends not only on the enthalpy change but also on the entropy change (∆S) and absolute temperature T. If a process is a spontaneous process at a certain temperature, the products have a lower Gibbs free energy G = H - TS than the reactants (an exergonic process), even if the enthalpy of the products is higher. Thus, an endothermic process usually requires a favorable entropy | products is higher. Thus, an endothermic process usually requires a favorable entropy increase (∆S > 0) in the system that overcomes the unfavorable increase in enthalpy so that still ∆G < 0. While endothermic phase transitions into more disordered states of higher entropy, e.g. melting and vaporization, are common, spontaneous chemical processes at moderate temperatures are rarely endothermic. The enthalpy increase ∆H >> 0 in a hypothetical strongly endothermic process usually results in ∆G = ∆H -T∆S > 0, which means that the process will not occur (unless driven by electrical or photon energy). An example of an endothermic and exergonic process is C6H12O6 + 6 H2O → 12 H2 + 6 CO2, ∆rH° = +627 kJ/mol, ∆rG° = -31 kJ/mol Examples Evaporation Sublimation Cracking of alkanes Thermal decomposition Hydrolysis Nucleosynthesis of elements heavier than nickel in stellar cores High-energy neutrons can produce tritium from lithium-7 in an endothermic process, consuming 2.466 MeV. This was discovered when the 1954 Castle Bravo |
The scheme was never put in place; only the smaller Nymboida Power Station was built. Decentralisation also remained a pet project, with Page frequently arguing for New South Wales and Queensland to be divided into smaller states to aid regional development. The movement for New England statehood waned in the 1920s, but re-emerged in the 1950s; a legally binding referendum on the subject was finally held in 1967, after Page's death, but was narrowly defeated in controversial circumstances. Bruce–Page Government Government formation Page was elected leader of the Country Party in 1921, replacing William McWilliams. At the 1922 federal election the party campaigned on a platform which included the establishment of a national sinking fund, a national insurance scheme covering "sickness, unemployment, poverty and age", and the conversion of the Commonwealth Bank of Australia into a full central bank. The party emerged from the election with the balance of power in the House; the Nationalist government of Billy Hughes lost its majority and could not govern without Country Party support. It soon became apparent that the price for that support would be a full coalition with the Nationalists. However, the Country Party had been formed partly due to discontent with Hughes' rural policy, and Page's animosity toward Hughes was such that he would not even consider supporting him. Indeed, he would not even begin talks with the Nationalists as long as Hughes remained leader. Bowing to the inevitable, Hughes resigned. Page then began negotiations with Hughes' successor as leader of the Nationalists, Stanley Bruce. His terms were stiff; he wanted his Country Party to have five seats in an 11-man cabinet, including the post of Treasurer and the second rank in the ministry for himself. These demands were unprecedented for a prospective junior coalition partner in a Westminster system, and especially so for such a new party. Nonetheless, Bruce agreed rather than force another election. For all intents and purposes, Page was the first Deputy Prime Minister of Australia (a title that did not officially exist until 1968). Since then, the leader of the Country/National Party has been the second-ranking member in nearly every non-Labor government. Page served as acting prime minister on several occasions, and in January 1924 chaired the first meeting of Federal Cabinet ever held in Canberra, at Yarralumla. Parliament did not move to Canberra until 1927. Treasurer As Treasurer, Page formed a close working relationship with Bruce. Due to favourable economic conditions the government was able to abolish land tax, cut income tax, and establish the national sinking fund that Page had campaigned on. The government also established an investment fund for the Council for Scientific and Industrial Research and sponsored the first national housing program. The final years of Page's treasurership were marked by the beginnings of an economic downturn. The budget went into deficit in 1927 and his 1929 budget speech referred to a "temporary financial depression". He was a strong believer in orthodox finance and conservative policies, as well as a "high protectionist" supporting tariff barriers to protect Australian rural industries. Page introduced a series of reforms to the Commonwealth Bank to enhance its central banking functions. In 1924, he announced that the government would place the Commonwealth Bank under an independent board, comprising a governor, the Treasury secretary, and representatives of industry.
The same bill placed banknotes under the direct control of the bank, whereas previously they had been under a nominally independent Note Issue Board. Later reforms saw the establishment of a Rural Credits Department within the bank, the profits of which were partly hypothecated to agricultural research. In March 1925, cabinet decided to return Australia to the gold standard, which it had left during World War I. It delayed its announcement until the United Kingdom had decided it would do the same, which "disguised what was arguably Australia’s first explicit macroeconomic policy decision". In 1924, Bruce and Page established the Loan Council to coordinate public-sector borrowings between the state and federal governments. It was given constitutional force with an amendment passed in 1928. The government abolished the previous system of per-capita grants to states that had been implemented in 1911 and began introducing tied grants, initially for road building. It also announced a royal commission, chaired by Senator John Millen, into a national insurance scheme. Page was one of the chief supporters of the National Insurance Bill 1928, which would have provided "sickness, old age, disability and maternity benefits", as well as payments to orphans and a limited form of child endowment. It was to be paid for by compulsory contributions from workers and co-contributions from employers. The government took the policy to the 1928 federal election but failed to pass the bill by the time of its defeat in 1929. As Treasurer, Page continued his professional medical practice. On 22 October 1924, he had to tell his best friend, Thomas Shorten Cole (1870–1957), the news that Cole's wife Mary Ann Crane had just died on the operating table from complications of intestinal or stomach cancer, a day reputed by their daughter Dorothy May Cole to be "the worst day of his life". Due to a shortage of surgeons in Canberra, in 1928 Page performed an appendectomy on fellow MP Parker Moloney. Opposition and Lyons Government The Bruce–Page government was heavily defeated by Labor in 1929 (with Bruce losing his own seat), and Page went into opposition. In 1931, a group of dissident Labor MPs led by Joseph Lyons merged with the Nationalists to form the United Australia Party under Lyons' leadership. Lyons and the UAP won majority government at the 1931 election. Although Lyons was keen to form a coalition with the Country Party, talks broke down, and Lyons opted to govern alone—to date, the last time that the Country/National Party has not had any posts in a non-Labor government. In 1934, however, the UAP suffered an eight-seat swing, forcing Lyons to take the Country Party back into his government in a full-fledged Coalition. Page became Minister for Commerce. He was made a Knight Grand Cross of the Order of St Michael and St George (GCMG) in the New Year's Day Honours of 1938. While nine Australian Prime Ministers were knighted (and Bruce was elevated to the peerage), Page is the only one who was knighted before becoming Prime Minister. Prime Minister and aftermath When Lyons died suddenly in 1939, the Governor-General Lord Gowrie appointed Page as caretaker Prime Minister pending the UAP choosing a new leader. He held the office for three weeks until the UAP elected former deputy leader Robert Menzies as its new leader, and hence Prime Minister. Page had been close to Lyons, but disliked Menzies, whom he charged publicly with having been disloyal to Lyons.
Page contacted Stanley Bruce (now in London as Australian High Commissioner to the UK) and offered to resign his seat if Bruce would return to Australia to seek re-election to the parliament in a by-election for Page's old seat, and then seek election as UAP leader. Bruce said that he would only re-enter the parliament as an independent. When Menzies was elected UAP leader, Page refused to serve under him, and made an extraordinary personal attack on him in the House, accusing him not only of ministerial incompetence but of physical cowardice (for failing to enlist during World War I). His party soon rebelled, though, and Page was deposed as Country Party leader in favour of Archie Cameron. World War II In March 1940, Archie Cameron led the Country Party back into coalition with the UAP. However, he resigned as party leader on 16 October, following the 1940 federal election. Page attempted to regain the party's leadership, but was deadlocked with John McEwen over multiple ballots. As a compromise, the party elected Arthur Fadden as acting leader; he was confirmed in the position a few months later. Page replaced Cameron as Minister for Commerce in the reconstituted ministry. Fadden replaced Menzies as prime minister in August 1941. A few weeks later, cabinet decided to send Page to London as resident minister, with the intention that he would be granted access to the British War Cabinet. While he was en route to England, the Fadden Government lost a confidence motion and was replaced by an ALP minority government. The new prime minister John Curtin nonetheless allowed Page to take up the position, declining his offer to return to Australia. The attack on Pearl Harbor in December changed the dynamic of Anglo-Australian relations, as the War in the Pacific became the primary concern of the Australian government. Page assisted in the creation of the Pacific War Council early the following year. He later recalled Winston Churchill's frustration in war cabinet meetings with Curtin's decision to withdraw troops from the Middle East and North Africa and return them to Australia. He credited himself with helping negate the tensions between the two men, but in February 1942 mistakenly advised Churchill that the Australian government was amenable to diverting the 7th Division to Burma rather than return it directly to Australia. He was heavily rebuked by Curtin and external affairs minister H. V. Evatt for his error. Page wrote to Curtin in April 1942 that since January he had been through "the worst period of acute mental distress of my whole life". His tenure was not regarded as a success, and he was said to have suffered from a lack of experience in diplomacy. Field Marshal Alan Brooke, the Chief of the Imperial General Staff, recalled that in war cabinet meetings he had "the mentality of a greengrocer". Page left London in June 1942 following a severe bout of pneumonia. He had been made a Companion of Honour (CH) before his departure. He returned to Australia in August, travelling via the United States, and quickly turned his attention to planning for post-war reconstruction. Page spent the remaining years of the Curtin and Chifley Governments on the opposition backbench. He served on the Advisory War Council and was a delegate to the constitutional convention in Canberra in late 1942, which included members of all major political parties.
The new prime minister John Curtin nonetheless allowed Page to take up the position, declining Page's offer to return to Australia. The attack on Pearl Harbor in December changed the dynamic of Anglo-Australian relations, as the War in the Pacific became the primary concern of the Australian government. Page assisted in the creation of the Pacific War Council early the following year. He later recalled Winston Churchill's frustration in war cabinet meetings with Curtin's decision to withdraw troops from the Middle East and North Africa and return them to Australia. He credited himself with helping ease the tensions between the two men, but in February 1942 mistakenly advised Churchill that the Australian government was amenable to diverting the 7th Division to Burma rather than returning it directly to Australia. He was sharply rebuked by Curtin and external affairs minister H. V. Evatt for his error. Page wrote to Curtin in April 1942 that since January he had been through "the worst period of acute mental distress of my whole life". His tenure was not regarded as a success, and he was said to have suffered from a lack of experience in diplomacy. Field Marshal Alan Brooke, the Chief of the Imperial General Staff, recalled that in war cabinet meetings he had "the mentality of a greengrocer". Page left London in June 1942 following a severe bout of pneumonia. He had been made a Companion of Honour (CH) before his departure. He returned to Australia in August, travelling via the United States, and quickly turned his attention to planning for post-war reconstruction. Page spent the remaining years of the Curtin and Chifley Governments on the opposition backbench. He served on the Advisory War Council and was a delegate to the constitutional convention in Canberra in late 1942, which included members of all major political parties. However, he was frustrated by the government's failure to offer him any formal role in developing post-war policy, a role he believed he was owed given his past work. Page's brother Harold and nephew Robert were killed by the Japanese during the war. Return to the ministry Page was reappointed Minister for Health after the Coalition won the 1949 federal election, at the age of 69. He was the chief architect of the National Health Act 1953, which established a national public health scheme based on government subsidies of voluntary private insurance and free medical services for pensioners. He played a key role in securing the support of the medical profession, which had strongly opposed the Chifley Government's attempt to introduce universal health care. Unlike in previous governments, Page had little influence beyond his own policy area and was frustrated by the lack of interest in his ideas for national development. Upon the death of Billy Hughes in October 1952, Page became the Father of the House of Representatives and Father of the Parliament. In 1954, he became the first chancellor of the University of New England, which had become fully autonomous from the University of Sydney. He retired from cabinet at the age of 76, moving to the backbench in January 1956 after the December 1955 election. Upon Arthur Fadden's retirement in 1958, Page became the only former Prime Minister returned at that year's election. Electoral history Later life and death By the 1961 election, Page was gravely ill, suffering from lung cancer.
Although he was too sick to actively campaign, Page refused to even consider retiring from Parliament and soldiered on for his 17th general election. In one of the great upsets of Australian electoral history, he lost his seat to Labor challenger Frank McGuren, whom he had defeated soundly in 1958. Page had gone into the election holding Cowper with what appeared to be an insurmountable 11-point majority, but McGuren managed to win the seat on a swing of 13%. Page had campaigned sporadically before going to Royal Prince Alfred Hospital in Sydney for emergency surgery. He fell into a coma a few days before the election and never regained consciousness. He died on 20 December, 11 days after the election, without ever knowing that he had been defeated. Page had represented Cowper for just four days short of 42 years, making him the longest-serving Australian federal parliamentarian who represented the same seat throughout his career. Only Billy Hughes and Philip Ruddock have served in Parliament longer than Page. He was the last former Prime Minister to lose his seat until Tony Abbott lost his seat of Warringah in 2019, though John Howard would lose his seat of Bennelong as a sitting Prime Minister in 2007. Page's defeat and subsequent death left the Australian Federal Parliament without any former Prime Ministers among its members for the first time since the period between Sir Joseph Cook's resignation from Parliament in 1921 to become Australia's High Commissioner to the United Kingdom and Billy Hughes' forced resignation as Prime Minister in 1923. Personal life Page married Ethel Blunt on 18 September 1906. They had met at Royal Prince Alfred Hospital while he was undertaking his medical residency; she was a senior nurse there. Page soon began courting her, and convinced her to become the matron of his new hospital in Grafton. She gave up nursing after their marriage, but was active in politics and community organisations. The couple had five children: Mary (b. 1909), Earle Jr. (b. 1910), Donald (b. 1912), Iven (b. 1914), and Douglas (b. 1916). Their grandchildren include Don Page, who was active in New South Wales state politics, and Geoff Page, a poet. Page was predeceased by his first wife and his oldest son. Earle Jr., a qualified veterinarian, was killed by a lightning strike in January 1933, aged 22. Ethel died in May 1958, aged 82, after a long illness. On 20 July 1959 at St Paul's Cathedral, London, Page married for a second time, wedding his long-serving secretary Jean Thomas (32 years his junior). Stanley Bruce was his best man. The second Lady Page lived for almost 50 years after her husband's death, dying on 20 June 2011; her ashes were interred at Northern Suburbs Crematorium. Honours Decorations In 1929, Page was made a member of the Privy Council of the United Kingdom (PC). In 1938, Page was made a Knight Grand Cross of the Order of St Michael and St George (GCMG). In 1942, Page was made a member of the Order of the Companions of Honour (CH). In 1942, Page was made an honorary Fellow of the Royal College of Surgeons of England (FRCS). In 1952, Page was awarded the degree of Doctor of Science honoris causa by the University of Sydney. In 1955, Page was awarded the degree of Doctor of |
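Stepping back to the Cowper figures quoted earlier in this section: the relationship between the "insurmountable 11-point majority" and the fatal 13% swing can be sanity-checked with a few lines of Python. This is an illustrative sketch only; it assumes the margin is expressed in percentage points above 50% of the two-party-preferred vote and that the swing is uniform, conventions the text itself does not spell out.

```python
# Illustrative only: checking the Cowper numbers quoted above.
# Assumption: "11-point majority" = percentage points above 50% of the
# two-party-preferred (TPP) vote; the 13% swing is applied uniformly.

def tpp_after_swing(margin_pts: float, swing_pts: float) -> float:
    """Incumbent's TPP share after a swing against them."""
    incumbent_tpp = 50.0 + margin_pts  # e.g. 61.0% going into the election
    return incumbent_tpp - swing_pts

share = tpp_after_swing(margin_pts=11.0, swing_pts=13.0)
print(f"Incumbent TPP after swing: {share:.1f}%")  # 48.0% -> seat lost
```

On these assumptions a 13-point swing leaves the incumbent on 48% and flips the seat, matching the upset described above.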
of Nisibis, was a prominent Christian theologian and writer, who is revered as one of the most notable hymnographers of Eastern Christianity. He was born in Nisibis, served as a deacon and later lived in Edessa. Ephrem is venerated as a saint by all traditional Churches. He is especially revered in Syriac Christianity, both in East Syriac tradition and West Syriac tradition, and also counted as a Venerable Father (i.e., a sainted Monk) in the Eastern Orthodox Church. He was declared a Doctor of the Church in the Roman Catholic Church in 1920. Ephrem is also credited as the founder of the School of Nisibis, which, in later centuries, was the centre of learning of the Church of the East. Ephrem wrote a wide variety of hymns, poems, and sermons in verse, as well as prose exegesis. These were works of practical theology for the edification of the Church in troubled times. So popular were his works that, for centuries after his death, Christian authors wrote hundreds of pseudepigraphal works in his name. He has been called the most significant of all of the fathers of the Syriac-speaking church tradition. Life Ephrem was born around the year 306 in the city of Nisibis (modern Nusaybin, Turkey), in the Roman province of Mesopotamia, which had recently been acquired by the Roman Empire. Internal evidence from Ephrem's hymnody suggests that both his parents were part of the growing Christian community in the city, although later hagiographers wrote that his father was a pagan priest. In those days, religious culture in the region of Nisibis included local polytheism, Judaism and several varieties of early Christianity. Most of the population spoke Aramaic, while Greek and Latin were the languages of administration. The city had a complex ethnic composition, consisting of "Assyrian, Arabs, Greeks, Jews, Parthians, Romans, and Iranians". Jacob, the second bishop of Nisibis, was appointed in 308, and Ephrem grew up under his leadership of the community. Jacob of Nisibis is recorded as a signatory at the First Council of Nicea in 325. Ephrem was baptized as a youth and almost certainly became a son of the covenant, an unusual form of Syriac proto-monasticism. Jacob appointed Ephrem as a teacher (Syriac malp̄ānâ, a title that still carries great respect for Syriac Christians). He was ordained as a deacon either at his baptism or later. He began to compose hymns and write biblical commentaries as part of his educational office. In his hymns, he sometimes refers to himself as a "herdsman" (‘allānâ), to his bishop as the "shepherd" (rā‘yâ), and to his community as a 'fold' (dayrâ). Ephrem is popularly credited as the founder of the School of Nisibis, which, in later centuries, was the centre of learning of the Church of the East. In 337, Emperor Constantine I, who had legalised and promoted the practice of Christianity in the Roman Empire, died. Seizing on this opportunity, Shapur II of Persia began a series of attacks into Roman North Mesopotamia. Nisibis was besieged in 338, 346 and 350. During the first siege, Ephrem credits Bishop Jacob with defending the city with his prayers. In the third siege, of 350, Shapur rerouted the River Mygdonius to undermine the walls of Nisibis. The Nisibenes quickly repaired the walls while the Persian elephant cavalry became bogged down in the wet ground. Ephrem celebrated what he saw as the miraculous salvation of the city in a hymn that portrayed Nisibis as being like Noah's Ark, floating to safety on the flood. One important physical link to Ephrem's lifetime is the baptistery of Nisibis. Its inscription records that it was constructed under Bishop Vologeses in 359. In that year, Shapur attacked again. The cities around Nisibis were destroyed one by one, and their citizens killed or deported. Constantius II was unable to respond; the campaign of Julian in 363 ended with his death in battle. His army elected Jovian as the new emperor, and to rescue it, Jovian was forced to surrender Nisibis to Persia (also in 363) and to permit the expulsion of the entire Christian population. Ephrem, with the others, went first to Amida (Diyarbakır), eventually settling in Edessa (Urhay, in Aramaic) in 363. Ephrem, in his late fifties, applied himself to ministry in his new church and seems to have continued his work as a teacher, perhaps in the School of Edessa.
Edessa had been an important center of the Aramaic-speaking world, and the birthplace of a specific Middle Aramaic dialect that came to be known as the Syriac language. The city was rich in rival philosophies and religions. Ephrem comments that orthodox Nicene Christians were simply called "Palutians" in Edessa, after a former bishop. Arians, Marcionites, Manichees, Bardaisanites and various gnostic sects proclaimed themselves to be the true church. In this confusion, Ephrem wrote a great number of hymns defending Nicene orthodoxy. A later Syriac writer, Jacob of Serugh, wrote that Ephrem rehearsed all-female choirs to sing his hymns set to Syriac folk tunes in the forum of Edessa. After a ten-year residency in Edessa, in his sixties, Ephrem succumbed to the plague as he ministered to its victims. The most reliable date for his death is 9 June 373. Language Ephrem wrote exclusively in his native Aramaic language, using the local Edessan (Urhaya) dialect, which later came to be known as Classical Syriac. Ephrem's works contain several endonymic (native) references to his language (Aramaic), homeland (Aram) and people (Arameans). He is therefore known as "the authentic voice of Aramaic Christianity". In the early stages of modern scholarly studies, it was believed that some examples of the long-standing Greek practice of labeling Aramaic as "Syriac", which are found in the "Cave of Treasures", can be attributed to Ephrem,
in 1992. Notable improvements were the Super Agnus and the HiRes Denise chips. The sound and floppy controller chip, Paula, remained unchanged from the OCS design. Super Agnus supports 2 MB of Chip RAM, whereas the original Agnus/Fat Agnus and subsequent Fatter Agnus can address 512 KB and 1 MB, respectively. The ECS Denise chip offers Productivity (640×480 non-interlaced) and SuperHiRes (1280×200 or 1280×256) display modes (also available in interlaced mode), which are however limited to only 4 on-screen colors. Essentially, a 35 ns pixel mode was added plus the ability to run arbitrary horizontal and vertical scan rates. This made other display modes possible, but only the aforementioned modes were supported originally out of the box. For example, the Linux Amiga framebuffer device driver allows the use of several other display modes. Other improvements were the ability of the blitter to copy regions larger than 1024×1024 pixels in one operation and the ability to display sprites in border regions (outside of any display window where bitplanes are shown). ECS also allows software switching between 60 Hz and 50 Hz video modes. These improvements largely favored application software, which benefited from higher resolution and VGA-like display modes, rather than games. As an incremental update, ECS was intended to be backward compatible with software designed for OCS machines, though some pre-ECS games were found to be incompatible. Additionally, features from the improved Kickstart 2
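The Chip RAM and display-mode figures above can be made concrete with a rough back-of-the-envelope calculation. The sketch below is not Amiga code: it simply assumes the Amiga's planar bitplane layout (one bit per pixel per plane, so the 4-colour cap of these modes means 2 bitplanes) and uses the mode sizes and Agnus address limits quoted in the text.

```python
# Rough sketch: Chip RAM needed for a planar display buffer in the ECS
# modes named above, versus what each Agnus revision can address.

CHIP_RAM_BYTES = {
    "Agnus/Fat Agnus": 512 * 1024,        # 512 KB
    "Fatter Agnus": 1024 * 1024,          # 1 MB
    "Super Agnus (ECS)": 2 * 1024 * 1024, # 2 MB
}

def bitplane_bytes(width: int, height: int, planes: int) -> int:
    """Planar frame buffer size: one bit per pixel, per plane."""
    return (width // 8) * height * planes

modes = {"Productivity 640x480": (640, 480),
         "SuperHiRes 1280x256": (1280, 256)}

for name, (w, h) in modes.items():
    size = bitplane_bytes(w, h, planes=2)  # 4 colours -> 2 bitplanes
    print(f"{name}: {size / 1024:.0f} KB display buffer")
```

Even the largest of these 4-colour buffers comes to roughly 80 KB, so the 2 MB addressable by Super Agnus was headroom for multiple buffers, blitter work areas and audio data rather than for a single display.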
History The European Space Operations Centre was formally inaugurated in Darmstadt, Germany, on 8 September 1967 by the then-Minister of Research of the Federal Republic of Germany, Gerhard Stoltenberg. Its role was to provide satellite control for the European Space Research Organisation (ESRO), the predecessor of today's European Space Agency (ESA). The 90-person ESOC facility was, as it is today, located on the west side of Darmstadt; it employed the staff and resources previously allocated to the European Space Data Centre (ESDAC), which had been established in 1963 to conduct orbit calculations. These were augmented by mission control staff transferred from ESTEC to operate satellites and manage the ESTRACK tracking station network. Within just eight months, ESOC, as part of ESRO, was already operating its first mission, ESRO-2B, a scientific research satellite and the first of many operated from ESOC for ESRO, and later ESA. By July 2012, ESOC had operated over 56 missions spanning science, Earth observation, orbiting observatories, meteorology and space physics. Location and expansion ESOC is located on the west side of the city of Darmstadt, some from the main train station, at Robert-Bosch-Straße 5. In 2011, ESA announced the first phase of the ESOC II modernisation and expansion project valued at €60 million. The new construction will be located across Robert-Bosch-Straße, opposite the current centre. Employees At ESOC, ESA employs approximately 800 people, comprising some 250 permanent staff and about 550 contractors. Staff from ESOC are routinely dispatched to work at other ESA establishments, ESTRACK stations, the ATV Control Centre (Toulouse), the Columbus Control Centre (Oberpfaffenhofen) and at partner facilities in several countries. See also ATV Control Centre (Toulouse, France) Columbus Control Centre (Oberpfaffenhofen, Germany) European Space Research and Technology Centre (ESTEC) European Space Astronomy Centre (ESAC) European Centre for Space Applications and | tracking, telemetry and telecommanding; and space debris. Missions ESOC's current missions comprise the following: Planetary and solar missions BepiColombo Mars Express Solar Orbiter ExoMars Trace Gas Orbiter Cluster II Astronomy and fundamental physics missions Gaia INTEGRAL XMM-Newton Earth observation missions CryoSat-2 Swarm Sentinel-1A Sentinel-1B Sentinel-2A Sentinel-2B Sentinel-5 Precursor ADM-Aeolus In addition, the ground segment and mission control teams for several missions are in preparation and training, including: ExoMars Biomass EarthCare Euclid JUpiter ICy moons Explorer (JUICE) PLATO OPS-SAT the remaining satellites of the Sentinel programme ESTRACK ESOC hosts the control centre for the Agency's European Tracking (ESTRACK) station network. The core network comprises seven stations in seven countries: Kourou (French Guiana), Cebreros (Spain), Redu (Belgium), Santa Maria (Portugal), Kiruna (Sweden), Malargüe (Argentina) and New Norcia (Australia). Operators are on duty at ESOC 24 hours/day, year round, to conduct tracking passes, upload telecommands and download telemetry and data. Activities In addition to 'pure' mission operations, a number of other activities take place at the Centre, most of which are directly related to ESA's broader space operations activities. Flight dynamics: A team is responsible for all orbital calculations and orbit determinations.
Mission analysis: Selection and calculation of possible orbits and launch windows Software development: Mission control systems and spacecraft management tools ESA Navigation Support Office: Calculating and predicting GPS and Galileo satellite orbits Ground station engineering: Developing deep-space tracking technology Space debris: Coordinating ESA's debris research, provision of conjunction warning services and cooperating with agencies worldwide Frequency management: Helping manage radio spectrum used by all satellite operators
June 2021. The new evolution of the rocket incorporates a larger first stage booster, the P120C replacing the P80, an upgraded Zefiro (rocket stage) second stage, and the AVUM+ upper stage. This new variant enables larger single payloads, dual payloads, return missions, and orbital transfer capabilities. Ariane launch vehicle development funding Historically, the Ariane family rockets have been funded primarily "with money contributed by ESA governments seeking to participate in the program rather than through competitive industry bids. This [has meant that] governments commit multiyear funding to the development with the expectation of a roughly 90% return on investment in the form of industrial workshare." ESA is proposing changes to this scheme by moving to competitive bids for the development of the Ariane 6. Future rocket development Future projects include the Prometheus reusable engine technology demonstrator, Phoebus (an upgraded second stage for Ariane 6), and Themis (a reusable first stage). Human space flight Formation and development At the time ESA was formed, its main goals did not encompass human space flight; rather it considered itself to be primarily a scientific research organisation for uncrewed space exploration in contrast to its American and Soviet counterparts. It is therefore not surprising that the first non-Soviet European in space was not an ESA astronaut on a European spacecraft; it was Czechoslovak Vladimír Remek who in 1978 became the first non-Soviet or American in space (the first man in space being Yuri Gagarin of the Soviet Union) – on a Soviet Soyuz spacecraft, followed by the Pole Mirosław Hermaszewski and East German Sigmund Jähn in the same year. This Soviet co-operation programme, known as Intercosmos, primarily involved the participation of Eastern bloc countries. In 1982, however, Jean-Loup Chrétien became the first non-Communist Bloc astronaut on a flight to the Soviet Salyut 7 space station. Because Chrétien did not officially fly into space as an ESA astronaut, but rather as a member of the French CNES astronaut corps, the German Ulf Merbold is considered the first ESA astronaut to fly into space. He participated in the STS-9 Space Shuttle mission that included the first use of the European-built Spacelab in 1983. STS-9 marked the beginning of an extensive ESA/NASA joint partnership that included dozens of space flights of ESA astronauts in the following years. Some of these missions with Spacelab were fully funded and organisationally and scientifically controlled by ESA (such as two missions by Germany and one by Japan) with European astronauts as full crew members rather than guests on board. Besides paying for Spacelab flights and seats on the shuttles, ESA continued its human space flight co-operation with the Soviet Union and later Russia, including numerous visits to Mir. During the latter half of the 1980s, European human space flights changed from being the exception to routine and therefore, in 1990, the European Astronaut Centre in Cologne, Germany, was established. It selects and trains prospective astronauts and is responsible for the co-ordination with international partners, especially with regard to the International Space Station. As of 2006, the ESA astronaut corps officially included twelve members, including nationals from most large European countries except the United Kingdom. In 2008, ESA started to recruit new astronauts so that final selection would be due in spring 2009.
Almost 10,000 people registered as astronaut candidates before registration ended in June 2008. 8,413 fulfilled the initial application criteria. Of the applicants, 918 were chosen to take part in the first stage of psychological testing, which narrowed down the field to 192. After two-stage psychological tests and medical evaluation in early 2009, as well as formal interviews, six new members of the European Astronaut Corps were selected – five men and one woman. Astronaut names The astronauts of the European Space Agency are: France: Jean-François Clervoy; Italy: Samantha Cristoforetti; Belgium: Frank De Winne; Spain: Pedro Duque; Germany: Reinhold Ewald; France: Léopold Eyharts; Germany: Alexander Gerst; Italy: Umberto Guidoni; Sweden: Christer Fuglesang; Netherlands: André Kuipers; Germany: Matthias Maurer; Denmark: Andreas Mogensen; Italy: Paolo Nespoli; Switzerland: Claude Nicollier; Italy: Luca Parmitano; United Kingdom: Timothy Peake; France: Philippe Perrin; France: Thomas Pesquet; Germany: Thomas Reiter; Germany: Hans Schlegel; Germany: Gerhard Thiele; France: Michel Tognini; Italy: Roberto Vittori. Crew vehicles In the 1980s, France pressed for an independent European crew launch vehicle. Around 1978, it was decided to pursue a reusable spacecraft model and, starting in November 1987, a project to create a mini-shuttle by the name of Hermes was introduced. The craft was comparable to early proposals for the Space Shuttle and consisted of a small reusable spaceship that would carry 3 to 5 astronauts and 3 to 4 metric tons of payload for scientific experiments. With a total maximum weight of 21 metric tons it would have been launched on the Ariane 5 rocket, which was being developed at that time. It was planned solely for use in low Earth orbit space flights. The planning and pre-development phase concluded in 1991; the production phase was never fully implemented because at that time the political landscape had changed significantly. With the fall of the Soviet Union ESA looked forward to co-operation with Russia to build a next-generation space vehicle. Thus the Hermes programme was cancelled in 1995 after about 3 billion dollars had been spent. The Columbus space station programme had a similar fate. In the 21st century, ESA started new programmes in order to create its own crew vehicles; most notable among its various projects and proposals is Hopper, whose prototype by EADS, called Phoenix, has already been tested. While projects such as Hopper are neither concrete nor to be realised within the next decade, other possibilities for human spaceflight in co-operation with the Russian Space Agency have emerged. Following talks with the Russian Space Agency in 2004 and June 2005, a co-operation between ESA and the Russian Space Agency was announced to jointly work on the Russian-designed Kliper, a reusable spacecraft that would be available for space travel beyond LEO (e.g. the moon or even Mars). It was speculated that Europe would finance part of it. A €50 million participation study for Kliper, which was expected to be approved in December 2005, was ultimately not approved by the ESA member states. The Russian state tender for the project was subsequently cancelled in 2006. In June 2006, ESA member states granted 15 million to the Crew Space Transportation System (CSTS) study, a two-year study to design a spacecraft capable of going beyond Low-Earth orbit based on the current Soyuz design. This project was pursued with Roskosmos instead of the cancelled Kliper proposal.
A decision on the actual implementation and construction of the CSTS spacecraft was contemplated for 2008. In mid-2009 EADS Astrium was awarded a €21 million study into designing a crew vehicle based on the European ATV, which is believed to now be the basis of the Advanced Crew Transportation System design. In November 2012, ESA decided to join NASA's Orion programme. The ATV would form the basis of a propulsion unit for NASA's new crewed spacecraft. ESA may also seek to work with NASA on Orion's launch system in order to secure a seat on the spacecraft for its own astronauts. In September 2014, ESA signed an agreement with Sierra Nevada Corporation for co-operation in the Dream Chaser project. Further studies on the Dream Chaser for European Utilization or DC4EU project were funded, including the feasibility of launching a Europeanised Dream Chaser onboard Ariane 5. Cooperation with other countries and organisations ESA has signed co-operation agreements with the following states that currently neither plan to integrate as tightly with ESA institutions as Canada, nor envision future membership of ESA: Argentina, Brazil, China, India (for the Chandrayaan mission), Russia and Turkey. Additionally, ESA has joint projects with the EUSPA of the European Union, NASA of the United States and is participating in the International Space Station together with the United States (NASA), Russia and Japan (JAXA). National space organisations of member states The Centre National d'Études Spatiales (CNES) (National Centre for Space Study) is the French government space agency (administratively, a "public establishment of industrial and commercial character"). Its headquarters are in central Paris. CNES is the main participant on the Ariane project. Indeed, CNES designed and tested all Ariane family rockets, mainly from its centre in Évry near Paris. The UK Space Agency is a partnership of the UK government departments which are active in space. Through the UK Space Agency, the partners provide delegates to represent the UK on the various ESA governing bodies. Each partner funds its own programme. The Italian Space Agency (Agenzia Spaziale Italiana or ASI) was founded in 1988 to promote, co-ordinate and conduct space activities in Italy. Operating under the Ministry of the Universities and of Scientific and Technological Research, the agency cooperates with numerous entities active in space technology and with the president of the Council of Ministers. Internationally, the ASI provides Italy's delegation to the Council of the European Space Agency and to its subordinate bodies. The German Aerospace Center (DLR) (German: Deutsches Zentrum für Luft- und Raumfahrt e. V.) is the national research centre for aviation and space flight of the Federal Republic of Germany and of other member states in the Helmholtz Association. Its extensive research and development projects are included in national and international cooperative programmes. In addition to its research projects, the centre is Germany's assigned space agency, hosting the headquarters of German space flight activities and its associated bodies. The Instituto Nacional de Técnica Aeroespacial (INTA) (National Institute for Aerospace Technique) is a Public Research Organisation specialised in aerospace research and technology development in Spain. Among other functions, it serves as a platform for space research and acts as a significant testing facility for the aeronautic and space sector in the country.
NASA ESA has a long history of collaboration with NASA. Since ESA's astronaut corps was formed, the Space Shuttle has been the primary launch vehicle used by ESA's astronauts to get into space through partnership programmes with NASA. In the 1980s and 1990s, the Spacelab programme was an ESA-NASA joint research programme that had ESA develop and manufacture orbital labs for the Space Shuttle for several flights on which ESA participated with astronauts in experiments. In robotic science and exploration missions, NASA has been ESA's main partner. Cassini–Huygens was a joint NASA-ESA mission, along with the Infrared Space Observatory, INTEGRAL, SOHO, and others. Also, the Hubble Space Telescope is a joint project of NASA and ESA. Future ESA-NASA joint projects include the James Webb Space Telescope and the proposed Laser Interferometer Space Antenna. NASA supported ESA's proposed MarcoPolo-R asteroid sample-return mission, which was ultimately not selected for flight; NASA's own OSIRIS-REx mission touched down on asteroid Bennu in October 2020 and is scheduled to return a sample to Earth for further analysis in 2023. NASA and ESA will also likely join together for a Mars sample-return mission. In October 2020, the ESA entered into a memorandum of understanding (MOU) with NASA to work together on the Artemis program, which will provide an orbiting Lunar Gateway and also accomplish the first manned lunar landing in 50 years, whose team will include the first woman on the Moon. Astronaut selection announcements are expected within two years of the 2024 scheduled launch date. ESA also purchases seats on the NASA-operated Commercial Crew Program. The first ESA astronaut to be on a Commercial Crew Program mission is Thomas Pesquet. Pesquet launched into space aboard Crew Dragon Endeavour on the Crew-2 mission. ESA also has seats on Crew-3 with Matthias Maurer and Crew-4 with Samantha Cristoforetti. Cooperation with other space agencies Since China has invested more money into space activities, the Chinese Space Agency has sought international partnerships. Besides the Russian Space Agency, ESA is one of its most important partners. Both space agencies cooperated in the development of the Double Star Mission. In 2017, ESA sent two astronauts to China for two weeks of sea survival training with Chinese astronauts in Yantai, Shandong. ESA entered into a major joint venture with Russia in the form of the CSTS, the preparation of French Guiana spaceport for launches of Soyuz-2 rockets and other projects. With India, ESA agreed to send instruments into space aboard the ISRO's Chandrayaan-1 in 2008. ESA is also co-operating with Japan; the most notable current project in collaboration with JAXA is the BepiColombo mission to Mercury. Speaking to reporters at an air show near Moscow in August 2011, ESA head Jean-Jacques Dordain said ESA and Russia's Roskosmos space agency would "carry out the first flight to Mars together." International Space Station With regard to the International Space Station (ISS), ESA is not represented by all of its member states: 11 of the 22 ESA member states currently participate in the project: Belgium, Denmark, France, Germany, Italy, Netherlands, Norway, Spain, Sweden, Switzerland and United Kingdom. Austria, Finland and Ireland chose not to participate, because of lack of interest or concerns about the expense of the project. Portugal, Luxembourg, Greece, the Czech Republic, Romania, Poland, Estonia and Hungary joined ESA after the agreement had been signed.
ESA takes part in the construction and operation of the ISS, with contributions such as Columbus, a science laboratory module that was brought into orbit by NASA's STS-122 Space Shuttle mission, and the Cupola observatory module that was completed in July 2005 by Alenia Spazio for ESA. The current estimates for the ISS are approaching €100 billion in total (development, construction and 10 years of maintaining the station) of which ESA has committed to paying €8 billion. About 90% of the costs of ESA's ISS share will be contributed by Germany (41%), France (28%) and Italy (20%). German ESA astronaut Thomas Reiter was the first long-term ISS crew member. ESA has developed the Automated Transfer Vehicle for ISS resupply. Each ATV has a cargo capacity of . The first ATV, Jules Verne, was launched on 9 March 2008 and on 3 April 2008 successfully docked with the ISS. This manoeuvre, considered a major technical feat, involved using automated systems to allow the ATV to track the ISS, moving at 27,000 km/h, and attach itself with an accuracy of 2 cm. Five vehicles were launched before the program ended with the launch of the fifth ATV, Georges Lemaître, in 2014. As of 2020, the spacecraft establishing supply links to the ISS are the Russian Progress and Soyuz, the Japanese Kounotori (HTV), and the US vehicles Cargo Dragon 2 and Cygnus, stemming from the Commercial Resupply Services program. European Life and Physical Sciences research on board the International Space Station (ISS) is mainly based on the European Programme for Life and Physical Sciences in Space, which was initiated in 2001. Languages According to Annex 1, Resolution No. 8 of the ESA Convention and Council Rules of Procedure, English, French and German may be used in all meetings of the Agency, with interpretation provided into these three languages. All official documents are available in English and French with all documents concerning the ESA Council being available in German as well. Facilities ESA Headquarters (HQ), Paris, France European Space Operations Centre (ESOC), Darmstadt, Germany European Space Research and Technology Centre (ESTEC), Noordwijk, Netherlands European Space Astronomy Centre (ESAC), Madrid, Spain European Centre for Space Applications and Telecommunications (ECSAT), Oxfordshire, United Kingdom European Astronaut Centre (EAC), Cologne, Germany ESA Centre for Earth Observation (ESRIN), Frascati, Italy Guiana Space Centre (CSG), Kourou, French Guiana European Space Tracking Network (ESTRACK) European Data Relay System EU/ESA Space Council The Flag of Europe is flown in space during missions. It was flown by ESA's Andre Kuipers during the Delta mission. The political perspective of the European Union (EU) was to make ESA an agency of the EU by 2014; however, this date was not met. The EU member states provide most of ESA's funding, and they are all either full ESA members or observers. ESA is not an agency or body of the European Union, and has non-EU countries (Norway, Switzerland, and the United Kingdom) as members. There are however ties between the two, with various agreements in place and being worked on, to define the legal status of ESA with regard to the EU. There are common goals between ESA and the EU. ESA has an EU liaison office in Brussels. On certain projects, the EU and ESA co-operate, such as the Galileo satellite navigation system. Space policy has since December 2009 been an area for voting in the European Council.
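Returning to the ISS cost-share figures above, the national percentages combine straightforwardly with ESA's stated commitment. The following is illustrative arithmetic only; the euro amounts are derived by applying the quoted shares to the €8 billion figure and are not themselves sourced numbers.

```python
# Illustrative arithmetic: national shares of ESA's €8 billion ISS commitment,
# using the percentages quoted in the text. Derived figures, not sourced.

ESA_ISS_COMMITMENT_BN_EUR = 8.0
SHARES = {"Germany": 0.41, "France": 0.28, "Italy": 0.20}

for country, share in SHARES.items():
    print(f"{country}: ~€{ESA_ISS_COMMITMENT_BN_EUR * share:.2f} bn")

print(f"Top three combined: {sum(SHARES.values()):.0%}")  # 89%, the text's "about 90%"
```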
Under the European Space Policy, later renamed European Union Space Programme, the EU, ESA and its Member States committed themselves to increasing co-ordination of their activities and programmes and to organising their respective roles relating to space. The Lisbon Treaty of 2009 reinforces the case for space in Europe and strengthens the role of ESA as an R&D space agency. Article 189 of the Treaty gives the EU a mandate to elaborate a European space policy and take related measures, and provides that the EU should establish appropriate relations with | GTO) which was brought into ESA service in October 2011. ESA entered into a €340 million joint venture with the Russian Federal Space Agency over the use of the Soyuz launcher. Under the agreement, the Russian agency manufactures Soyuz rocket parts for ESA, which are then shipped to French Guiana for assembly. ESA benefits because it gains a medium payload launcher, complementing its fleet while saving on development costs. The Soyuz rocket—which has been Russia's space launch workhorse for more than 50 years—is proven technology with a very good safety record. Russia benefits in that it gets access to the Kourou launch site. Due to its proximity to the equator, launching from Kourou rather than Baikonur nearly doubles Soyuz's payload to GTO (3.0 tonnes vs. 1.7 tonnes). Soyuz first launched from Kourou on 21 October 2011, and successfully placed two Galileo satellites into orbit 23,222 kilometres above Earth. Vega Vega is ESA's carrier for small satellites. Developed by seven ESA members led by Italy, it is capable of carrying a payload with a mass of between 300 and 1500 kg to an altitude of 700 km, for low polar orbit. Its maiden launch from Kourou was on 13 February 2012. Vega began full commercial exploitation in December 2015. The rocket has three solid propulsion stages and a liquid propulsion upper stage (the AVUM) for accurate orbital insertion and the ability to place multiple payloads into different orbits. A larger version of the Vega launcher, Vega-C, is in development and the first flight is expected in June 2021.
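The claim above that launching from Kourou "nearly doubles" Soyuz's GTO payload can be roughly illustrated. The sketch below uses approximate site latitudes (about 5°N for Kourou and 46°N for Baikonur, assumed values not taken from the text) and Earth's equatorial rotation speed of roughly 465 m/s; note that the extra eastward velocity is only part of the advantage, the larger effect for GTO being the smaller plane change needed from a low-inclination parking orbit.

```python
import math

# Rough illustration of the Kourou-vs-Baikonur advantage described above.
# Latitudes are approximate assumptions; payload figures are from the text.

EQUATORIAL_SPEED_MS = 465.0
LATITUDES_DEG = {"Kourou": 5.2, "Baikonur": 45.9}

speed = {site: EQUATORIAL_SPEED_MS * math.cos(math.radians(lat))
         for site, lat in LATITUDES_DEG.items()}
print(f"Extra eastward velocity at Kourou: {speed['Kourou'] - speed['Baikonur']:.0f} m/s")

print(f"Quoted GTO payload gain: {3.0 / 1.7:.2f}x")  # ~1.76x, i.e. 'nearly doubles'
```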
an aperture. Much very soft practice can help overcome this. Claude Gordon was a student of Louis Maggio and Herbert L. Clarke and systematized the concepts of these teachers. Claude Gordon made use of pedal tones for embouchure development as did Maggio and Herbert L. Clarke. All three stressed that the mouthpiece should be placed higher on the top lip for a more free vibration of the lips. Tongue-controlled embouchure This embouchure method, advocated by a minority of brass pedagogues such as Jerome Callet, has not yet been sufficiently researched to support the claims that this system is the most effective approach for all brass performers. Advocates of Callet's approach believe that this method was recommended and taught by the great brass instructors of the early 20th century. Two French trumpet technique books, authored by Jean-Baptiste Arban and Saint-Jacome, were translated into English for use by American players. According to some, due to a misunderstanding arising from differences in pronunciation between French and English, the commonly used brass embouchure in Europe was incorrectly interpreted. Callet cites this difference in embouchure technique as the reason the great players of the past were able to play at the level of technical virtuosity which they did, although the increased difficulty of contemporary compositions for brass seems to indicate that the level of brass technique achieved by today's performers equals or even exceeds that of most performers from the late 19th and early 20th centuries. Callet's method of brass embouchure consists of the tongue remaining forward and through the teeth at all times. The corners of the mouth always remain relaxed, and only a small amount of air is used. The top and bottom lips curl inward and grip the forward tongue. The tongue will force the teeth, and subsequently the throat, wide open, supposedly resulting in a bigger, more open sound. The forward tongue resists the pressure of the mouthpiece, controls the flow of air for lower and higher notes, and protects the lips and teeth from damage or injury from mouthpiece pressure. Because of the importance of the tongue in this method many refer to this as a "tongue-controlled embouchure". This technique facilitates the use of a smaller mouthpiece and larger bore instruments. It results in improved intonation and stronger harmonically related partials across the player's range. Woodwind embouchure Flute embouchure A variety of transverse flute embouchures are employed by professional flutists, though the most natural form is perfectly symmetrical, the corners of the mouth relaxed (i.e. not smiling), the lower lip placed along and at a short distance from the embouchure hole. It must be stressed, however, that achieving a symmetrical, or perfectly centred, blowing hole ought not to be an end in itself. Indeed, French flautist Marcel Moyse did not play with a symmetrical embouchure. The end-blown xiao, kaval, shakuhachi and hocchiku flutes demand especially difficult embouchures, sometimes requiring many lessons before any sound can be produced. The embouchure is an important element of tone production. The right embouchure, developed with "time, patience, and intelligent work", will produce a beautiful sound and correct intonation.
The embouchure is produced with the muscles around the lips: principally the orbicularis oris muscle and the depressor anguli oris, whilst avoiding activation of zygomaticus major, which will produce a smile, flattening the top lip against the maxillary (upper jaw) teeth. Beginner flute-players tend to suffer fatigue in these muscles, and notably struggle to use the depressor muscle, which helps to keep the top lip directing the flow of air across the embouchure hole. These muscles have to be properly warmed up and exercised before practicing. Tone-development exercises, including long notes and harmonics, must be done daily as part of the warm-up. Some further adjustments to the embouchure are necessary when moving from the transverse orchestral flute to the piccolo. With the piccolo, it becomes necessary to place the near side of the embouchure hole slightly higher on the lower lip, i.e. above the lip margin, and greater muscle tone from the lip muscles is needed to keep the stream/pressure of air directed across the smaller embouchure hole, particularly when playing in higher piccolo registers. Reed instrument embouchure With the woodwinds, aside from the flute, piccolo, and recorder, the sound is generated by a reed and not with the lips. The embouchure is therefore based on sealing the area around the reed and mouthpiece. This serves to prevent air from escaping while simultaneously supporting the reed, allowing it to vibrate, and constricting the reed to keep it from vibrating too much. With woodwinds, it is important to ensure that the mouthpiece is not placed too far into the mouth, which would result in too much vibration (no control), often creating a sound an octave (or harmonic twelfth for the clarinet) above the intended note. If the mouthpiece is not placed far enough into the mouth, no sound will be generated, as the reed will not vibrate. The standard embouchures for single reed woodwinds like the clarinet and saxophone are variants of the single lip embouchure, formed by resting the reed upon the bottom lip, which rests on the teeth and is supported by the chin muscles and the buccinator muscles on the sides of the mouth. The top teeth rest on top of the mouthpiece. The manner in which the lower lip rests against the teeth differs between clarinet and saxophone embouchures. In clarinet playing, the lower lip is rolled over the teeth and the corners of the mouth are drawn back, which has the effect of drawing the upper lip around the mouthpiece to create a seal, due to the angle at which the mouthpiece rests in the mouth. With the saxophone embouchure, the lower lip rests against, but not over, the teeth, as in pronouncing the letter "V", and the corners of the lip are drawn in (similar to a drawstring bag). With the less common double-lip embouchure, the top lip is placed under (around) the top teeth. 
the culmination of everything the [Elephant 6] collective was about in the mid-'90s: distinctive, ragged, catchy records ripped straight from their makers' veins." Many bands associated with the collective were formed during this period, and Athens became a major hub city. Elf Power, of Montreal, and Doss' solo project the Sunshine Fix were among the more notable Athens-based groups. of Montreal frontman Kevin Barnes said: "The heyday, most of the late 1990s, everyone was involved in each other's lives, and we would collaborate more, have dinners where everyone would make something." Schneider compares this period to the Summer of Love, and said the driving force for many of the bands was "out-weirding [their] neighbor" with their music. Elephant 6 bands would tour with each other, with the larger bands allowing the smaller bands to open for them. Denver was the smaller of the two hub cities. In addition to the Apples in Stereo, the major bands from Denver were the Minders, Dressy Bessy, and McIntyre's solo project Von Hemmling. The main draw for Elephant 6 bands in Denver was Pet Sounds Studio, a recording studio Schneider built in McIntyre's house. Many Elephant 6 albums were recorded at Pet Sounds, and were produced by Schneider. In addition to the two main hub cities, Elephant 6 bands began forming in various cities in the United States, such as the Essex Green and the Ladybug Transistor in Brooklyn, and Beulah in San Francisco. Inactivity In the early 2000s, Elephant 6 activity stagnated. Neutral Milk Hotel member Scott Spillane identifies the sudden uptick of bands across the country as an important factor in this period. "At the time the Elephant 6 thing was getting out of hand, and we started seeing all of these bands that had little Elephant 6 logos on them all over the place," said Spillane. Bands began to tour more often, and the members had less time to interact with each other. Additionally, Neutral Milk Hotel and the Olivia Tremor Control went on hiatus. Mangum became reclusive as he struggled to cope with his newfound stardom, while the members of the Olivia Tremor Control wanted to record their own solo music. Beulah member Pat Noel said many bands were dismayed at how journalists would "pigeonhole" them to the collective. "We kind of made a conscious decision to distance ourselves a little bit from the whole thing." Schneider took a break from producing albums, and the final album to be affixed with the Elephant 6 Recording Company logo was Cul-De-Sacs and Dead Ends by the Minders in 1999. The collective slowly dissipated, although bands like the Apples in Stereo, Elf Power, and of Montreal continued making music throughout the 2000s. Brief reemergence The collective was relatively dormant until the release of New Magnetic Wonder, a 2007 album by the Apples in Stereo. New Magnetic Wonder featured all four of the collective's originating members. While recording the album, they discussed new ideas, which in turn sparked a desire to make more music. The following year, Koster organized the "Elephant 6 Holiday Surprise Tour," a short concert tour that featured fifteen artists and ten Elephant 6 bands. Koster said "Elephant 6 is back," and added: "Somehow, everything's happening for us now. I don't know why we were ever interrupted, and why all this is happening now. But we're all just so happy." The Olivia Tremor Control reunited in 2009, and Mangum returned to the public eye with solo concerts over the next few years. On July 30, 2012, Doss died from a reported aneurysm. His death came as a shock to the collective, and stalled nearly all recordings at the time. Schneider said: "I can't say what it means for the Elephant 6 or the Apples ... On a musical level it's too soon to say. I mean, I don't want to say definitively that I don't want to make music again, but on a musical level there's no way to come to terms with the loss." The Olivia Tremor Control continued making music, and in 2017 Schneider confirmed he was producing their unfinished recordings. Today, the Elephant 6 collective still exists, albeit on a much smaller scale. Bands like Elf Power and of Montreal continue to record music, and many bands have moved on to Elephant 6 offshoot labels such as Orange Twin Records and Cloud Recordings. Influences and style Elephant 6 bands explore a variety of music genres, including indie rock, synth-pop, and twee pop. A common interest for nearly every associated band, however, is psychedelic pop of the 1960s. Bands such as the Beach Boys, the Beatles, and the Zombies are important influences for Elephant 6 groups like the Apples in Stereo, Beulah, and the Olivia Tremor Control. Elephant 6's de facto leader Robert Schneider notes the particular influence of the Beach Boys' unfinished album Smile, calling it the "Holy Grail" for many members of the collective. He notes how he and other members were obsessed with Beach Boys albums, and attempted to create the type of music they felt would have been included in Smile. Most Elephant 6 members are anti-consumer and possess a DIY ethic. Their music sometimes features intentionally low-quality production, and bands may experiment with unique recording methods; for example, the Olivia Tremor Control's 1996 album Music from the Unrealized Film Script: Dusk at Cubist Castle features recording techniques such as tape manipulation and sound collages. Schneider notes his hatred of both indie music and modern pop music, and said that his vision for Elephant 6 is a "perfect pop world," untarnished by commercial interests. Impact Several journalists regard Elephant 6 as an important underground music movement. 
navigation and ranging), the use of sound on or under water to navigate or to locate other watercraft, usually by submarines. Echo sounding, listening to the echo of sound pulses to measure the distance to the bottom of the sea, a special case of sonar (see the worked example after this list). Medical ultrasonography, the use of ultrasound echoes to look inside the body. Other Echolocation (album), a 2001 album by Fruit Bats Echolocation, a 2017 album by Gone Is Gone See also Radar 
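The echo sounding entry above rests on one piece of arithmetic: a pulse travels to the seabed and back, so the depth is half the distance covered in the measured round-trip time. A minimal worked illustration follows; the sound speed of roughly 1,500 m/s in seawater is an assumed nominal figure, not part of the original entry:

\[ d = \frac{v\,t}{2}, \qquad \text{e.g. } d = \frac{1500\ \mathrm{m/s} \times 2\ \mathrm{s}}{2} = 1500\ \mathrm{m}, \]

where \(t\) is the interval between emitting the pulse and hearing its echo, and \(v\) is the assumed speed of sound in the water.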
In evangelical churches, young adults and unmarried couples are encouraged to marry early in order to live out their sexuality according to the will of God. A 2009 American study by the National Campaign to Prevent Teen and Unplanned Pregnancy reported that 80 percent of young, unmarried evangelicals had had sex and that 42 percent were in a sexually active relationship when surveyed. The majority of evangelical Christian churches are against abortion and support adoption agencies and social support agencies for young mothers. Masturbation is seen as forbidden by some evangelical pastors because of the sexual thoughts that may accompany it. However, other evangelical pastors have pointed out that the practice has been erroneously associated with Onan by scholars, that it is not a sin if it is not practiced with fantasies or compulsively, and that it can be useful in a married couple when the partners do not have the same frequency of sexual needs. Some evangelical churches speak only of sexual abstinence and do not speak of sexuality in marriage. Other evangelical churches in the United States and Switzerland speak of satisfying sexuality as a gift from God and a component of a harmonious Christian marriage, in messages during worship services or conferences. Many evangelical books and websites specialize in the subject. The book The Act of Marriage: The Beauty of Sexual Love, published in 1976 by Baptist pastor Tim LaHaye and his wife Beverly LaHaye, was a pioneer in the field. Perceptions of homosexuality in evangelical churches are varied, ranging from liberal through moderate conservative to fundamentalist, with some neutral. A 2011 Pew Research Center study found that 84 percent of evangelical leaders surveyed believed homosexuality should be discouraged. It is among the fundamentalist conservative positions that one finds anti-gay activists on TV or radio who claim that homosexuality is the cause of many social problems, such as terrorism. Some churches hold a moderate conservative position. Although they do not approve of homosexual practices, they show sympathy and respect for homosexuals. Some evangelical denominations have adopted neutral positions, leaving the choice of same-sex marriage to local churches. There are some international evangelical denominations that are gay-friendly. Other views For a majority of evangelical Christians, a belief in biblical inerrancy ensures that the miracles described in the Bible are still relevant and may be present in the life of the believer. Healings, academic or professional successes, the birth of a child after several attempts, the end of an addiction, and the like are cited as tangible examples of God's intervention, through faith and prayer, by the Holy Spirit. In the 1980s, the neo-charismatic movement re-emphasized miracles and faith healing. In certain churches, a special place is thus reserved for faith healings with laying on of hands during worship services or evangelization campaigns. Faith healing or divine healing is considered to be an inheritance of Jesus acquired by his death and resurrection. In terms of science and the origin of the earth and human life, some evangelicals support young Earth creationism. For example, Answers in Genesis, founded in Australia in 1986, is an evangelical organization that defends this thesis. In 2007, it founded the Creation Museum in Petersburg, Kentucky, and in 2016 the Ark Encounter in Williamstown. 
Since the end of the 20th century, literalist creationism has been abandoned by some evangelicals in favor of intelligent design. For example, the think tank Discovery Institute, established in 1991 in Seattle, defends this thesis. Other evangelicals who accept the scientific consensus on evolution and the age of Earth believe in theistic evolution or evolutionary creation, the notion that God used the process of evolution to create life; a Christian organization that espouses this view is the BioLogos Foundation. Diversity The Reformed, Baptist, Methodist, Pentecostal, Churches of Christ, Plymouth Brethren, charismatic Protestant, and nondenominational Protestant traditions have all had strong influence within contemporary evangelicalism. Some Anabaptist denominations (such as the Brethren Church) are evangelical, and some Lutherans self-identify as evangelicals. There are also evangelical Anglicans and Quakers. In the early 20th century, evangelical influence declined within mainline Protestantism and Christian fundamentalism developed as a distinct religious movement. Between 1950 and 2000 a mainstream evangelical consensus developed that sought to be more inclusive and more culturally relevant than fundamentalism while maintaining conservative Protestant teaching. According to Brian Stanley, professor of world Christianity, this new postwar consensus is termed neo-evangelicalism, the new evangelicalism, or simply evangelicalism in the United States, while in Great Britain and in other English-speaking countries, it is commonly termed conservative evangelicalism. Over the years, less-conservative evangelicals have challenged this mainstream consensus to varying degrees. Such movements have been classified by a variety of labels, such as progressive, open, post-conservative, and post-evangelical. Outside of self-consciously evangelical denominations, there is a broader "evangelical streak" in mainline Protestantism. Mainline Protestant churches predominantly have a liberal theology while evangelical churches predominantly have a conservative or moderate theology. Some commentators have complained that Evangelicalism as a movement is too broad and its definition too vague to be of any practical value. Theologian Donald Dayton has called for a "moratorium" on use of the term. Historian D. G. Hart has also argued that "evangelicalism needs to be relinquished as a religious identity because it does not exist". Christian fundamentalism Fundamentalism regards biblical inerrancy, the virgin birth of Jesus, penal substitutionary atonement, the literal resurrection of Christ, and the Second Coming of Christ as fundamental Christian doctrines. Fundamentalism arose among evangelicals in the 1920s to combat modernist or liberal theology in mainline Protestant churches. Failing to reform the mainline churches, fundamentalists separated from them and established their own churches, refusing to participate in ecumenical organizations such as the National Council of Churches (founded in 1950). They also made separatism (rigid separation from non-fundamentalist churches and their culture) a true test of faith. According to historian George Marsden, most fundamentalists are Baptist and dispensationalist. Mainstream varieties Mainstream evangelicalism is historically divided between two main orientations: confessionalism and revivalism. These two streams have been critical of each other. 
Confessional evangelicals have been suspicious of unguarded religious experience, while revivalist evangelicals have been critical of overly intellectual teaching that (they suspect) stifles vibrant spirituality. In an effort to broaden their appeal, many contemporary evangelical congregations intentionally avoid identifying with any single form of evangelicalism. These "generic evangelicals" are usually theologically and socially conservative, but their churches often present themselves as nondenominational (or, if they belong to a denomination, strongly de-emphasize those ties, for example with a church name that excludes the denominational name) within the broader evangelical movement. In the words of Albert Mohler, president of the Southern Baptist Theological Seminary, confessional evangelicalism refers to "that movement of Christian believers who seek a constant convictional continuity with the theological formulas of the Protestant Reformation". While approving of the evangelical distinctions proposed by Bebbington, confessional evangelicals believe that authentic evangelicalism requires more concrete definition in order to protect the movement from theological liberalism and from heresy. According to confessional evangelicals, subscription to the ecumenical creeds and to the Reformation-era confessions of faith (such as the confessions of the Reformed churches) provides such protection. Confessional evangelicals are represented by conservative Presbyterian churches (emphasizing the Westminster Confession), certain Baptist churches that emphasize historic Baptist confessions such as the Second London Confession, evangelical Anglicans who emphasize the Thirty-Nine Articles (such as in the Anglican Diocese of Sydney, Australia), Methodist churches that adhere to the Articles of Religion, and some confessional Lutherans with pietistic convictions. The emphasis on historic Protestant orthodoxy among confessional evangelicals stands in direct contrast to an anti-creedal outlook that has exerted its own influence on evangelicalism, particularly among churches strongly affected by revivalism and by pietism. Revivalist evangelicals are represented by some quarters of Methodism, the Wesleyan Holiness churches, the Pentecostal and charismatic churches, some Anabaptist churches, and some Baptists and Presbyterians. Revivalist evangelicals tend to place greater emphasis on religious experience than their confessional counterparts. Non-conservative varieties Evangelicals dissatisfied with the movement's conservative mainstream have been variously described as progressive evangelicals, post-conservative evangelicals, Open Evangelicals and post-evangelicals. Progressive evangelicals, also known as the evangelical left, share theological or social views with other progressive Christians while also identifying with evangelicalism. Progressive evangelicals commonly advocate for women's equality, pacifism and social justice. As described by Baptist theologian Roger E. Olson, post-conservative evangelicalism is a theological school of thought that adheres to the four marks of evangelicalism, while being less rigid and more inclusive of other Christians. According to Olson, post-conservatives believe that doctrinal truth is secondary to spiritual experience shaped by Scripture. 
Post-conservative evangelicals seek greater dialogue with other Christian traditions and support the development of a multicultural evangelical theology that incorporates the voices of women, racial minorities, and Christians in the developing world. Some post-conservative evangelicals also support open theism and the possibility of near universal salvation. The term "Open Evangelical" refers to a particular Christian school of thought or churchmanship, primarily in Great Britain (especially in the Church of England). Open evangelicals describe their position as combining a traditional evangelical emphasis on the nature of scriptural authority, the teaching of the ecumenical creeds and other traditional doctrinal teachings, with an approach towards culture and other theological points of view which tends to be more inclusive than that taken by other evangelicals. Some open evangelicals aim to take a middle position between conservative and charismatic evangelicals, while others would combine conservative theological emphases with more liberal social positions. British author Dave Tomlinson coined the phrase post-evangelical to describe a movement comprising various trends of dissatisfaction among evangelicals. Others use the term with comparable intent, often to distinguish evangelicals in the emerging church movement from post-evangelicals and anti-evangelicals. Tomlinson argues that "linguistically, the distinction [between evangelical and post-evangelical] resembles the one that sociologists make between the modern and postmodern eras". History Background Evangelicalism emerged in the 18th century, first in Britain and its North American colonies. Nevertheless, there were earlier developments within the larger Protestant world that preceded and influenced the later evangelical revivals. According to religion scholar Randall Balmer, Evangelicalism resulted "from the confluence of Pietism, Presbyterianism, and the vestiges of Puritanism. Evangelicalism picked up the peculiar characteristics from each strain – warmhearted spirituality from the Pietists (for instance), doctrinal precisionism from the Presbyterians, and individualistic introspection from the Puritans". Historian Mark Noll adds to this list High Church Anglicanism, which contributed to Evangelicalism a legacy of "rigorous spirituality and innovative organization". During the 17th century, Pietism emerged in Europe as a movement for the revival of piety and devotion within the Lutheran church. As a protest against "cold orthodoxy" or against an overly formal and rational Christianity, Pietists advocated for an experiential religion that stressed high moral standards both for clergy and for lay people. The movement included both Christians who remained in the liturgical, state churches as well as separatist groups who rejected the use of baptismal fonts, altars, pulpits, and confessionals. As Radical Pietism spread, the movement's ideals and aspirations influenced and were absorbed by evangelicals. When George Fox, who is considered the father of Quakerism, was eleven, he wrote that God spoke to him about "keeping pure and being faithful to God and man." After being troubled when his friends asked him to drink alcohol with them at the age of nineteen, Fox spent the night in prayer and soon afterwards left his home on a search for spiritual satisfaction that lasted four years. In his Journal, at age 23, he wrote that he "found through faith in Jesus Christ the full assurance of salvation." 
Fox began to spread his message, and his emphasis on "the necessity of an inward transformation of heart", as well as the possibility of Christian perfection, drew opposition from English clergy and laity. In the mid-1600s, many people became attracted to Fox's preaching and his followers became known as the Religious Society of Friends. By 1660, the Quakers had grown to 35,000 and are considered to be among the first in the evangelical Christian movement. The Presbyterian heritage not only gave Evangelicalism a commitment to Protestant orthodoxy but also contributed a revival tradition that stretched back to the 1620s in Scotland and northern Ireland. Central to this tradition was the communion season, which normally occurred in the summer months. For Presbyterians, celebrations of Holy Communion were infrequent but popular events preceded by several Sundays of preparatory preaching and accompanied by preaching, singing, and prayers. Puritanism combined Calvinism with a doctrine that conversion was a prerequisite for church membership and with an emphasis on the study of Scripture by lay people. It took root in the colonies of New England, where the Congregational church became an established religion. There the Half-Way Covenant of 1662 allowed parents who had not testified to a conversion experience to have their children baptized, while reserving Holy Communion for converted church members alone. By the 18th century Puritanism was in decline and many ministers expressed alarm at the loss of religious piety. This concern over declining religious commitment led many people to support evangelical revival. High-Church Anglicanism also exerted influence on early Evangelicalism. High Churchmen were distinguished by their desire to adhere to primitive Christianity. This desire included imitating the faith and ascetic practices of early Christians as well as regularly partaking of Holy Communion. High Churchmen were also enthusiastic organizers of voluntary religious societies. Two of the most prominent were the Society for Promoting Christian Knowledge (founded in London in 1698), which distributed Bibles and other literature and built schools, and the Society for the Propagation of the Gospel in Foreign Parts, which was founded in England in 1701 to facilitate missionary work in British colonies (especially among colonists in North America). Samuel and Susanna Wesley, the parents of John and Charles Wesley (born 1703 and 1707 respectively), were both devoted advocates of High-Church ideas. 18th century In the 1730s, Evangelicalism emerged as a distinct phenomenon out of religious revivals that began in Britain and New England. While religious revivals had occurred within Protestant churches in the past, the evangelical revivals that marked the 18th century were more intense and radical. Evangelical revivalism imbued ordinary men and women with a confidence and enthusiasm for sharing the gospel and converting others outside of the control of established churches, a key discontinuity with the Protestantism of the previous era. It was developments in the doctrine of assurance that differentiated Evangelicalism from what went before. Bebbington says, "The dynamism of the Evangelical movement was possible only because its adherents were assured in their faith." The first local revival occurred in Northampton, Massachusetts, under the leadership of Congregationalist minister Jonathan Edwards. 
In the fall of 1734, Edwards preached a sermon series on "Justification By Faith Alone", and the community's response was extraordinary. Signs of religious commitment among the laity increased, especially among the town's young people. The revival ultimately spread to 25 communities in western Massachusetts and central Connecticut until it began to wane by the spring of 1735. Edwards was heavily influenced by Pietism, so much so that one historian has stressed his "American Pietism". One practice clearly copied from European Pietists was the use of small groups divided by age and gender, which met in private homes to conserve and promote the fruits of revival. At the same time, students at Yale University (at that time Yale College) in New Haven, Connecticut, were also experiencing revival. Among them was Aaron Burr, Sr., who would become a prominent Presbyterian minister and future president of Princeton University. In New Jersey, Gilbert Tennent, another Presbyterian minister, was preaching the evangelical message and urging the Presbyterian Church to stress the necessity of converted ministers. The spring of 1735 also marked important events in England and Wales. Howell Harris, a Welsh schoolteacher, had a conversion experience on May 25 during a communion service. He described receiving assurance of God's grace after a period of fasting, self-examination, and despair over his sins. Sometime later, Daniel Rowland, the Anglican curate of Llangeitho, Wales, experienced conversion as well. Both men began preaching the evangelical message to large audiences, becoming leaders of the Welsh Methodist revival. At about the same time that Harris experienced conversion in Wales, George Whitefield was converted at Oxford University after his own prolonged spiritual crisis. Whitefield later remarked, "About this time God was pleased to enlighten my soul, and bring me into the knowledge of His free grace, and the necessity of being justified in His sight by faith only". Whitefield's fellow Holy Club member and spiritual mentor, Charles Wesley, reported an evangelical conversion in 1738. In the same week, Charles' brother and future founder of Methodism, John Wesley, was also converted after a long period of inward struggle. During this spiritual crisis, John Wesley was directly influenced by Pietism. Two years before his conversion, Wesley had traveled to the newly established colony of Georgia as a missionary for the Society for Promoting Christian Knowledge. He shared his voyage with a group of Moravian Brethren led by August Gottlieb Spangenberg. The Moravians' faith and piety deeply impressed Wesley, especially their belief that it was a normal part of Christian life to have an assurance of one's salvation. Wesley recounted an exchange with Spangenberg on February 7, 1736, in which Spangenberg pressed him on whether he had the inward witness of the Spirit. Wesley finally received the assurance he had been searching for at a meeting of a religious society in London. While listening to a reading from Martin Luther's preface to the Epistle to the Romans, Wesley felt spiritually transformed. Pietism continued to influence Wesley, who had translated 33 Pietist hymns from German to English. Numerous German Pietist hymns became part of the English Evangelical repertoire. By 1737, Whitefield had become a national celebrity in England where his preaching drew large crowds, especially in London where the Fetter Lane Society had become a center of evangelical activity. Whitefield joined forces with Edwards to "fan the flame of revival" in the Thirteen Colonies in 1739–40. 
Soon the First Great Awakening stirred Protestants throughout America. Evangelical preachers emphasized personal salvation and piety more than ritual and tradition. Pamphlets and printed sermons crisscrossed the Atlantic, encouraging the revivalists. The Awakening resulted from powerful preaching that gave listeners a sense of deep personal revelation of their need of salvation by Jesus Christ. Pulling away from ritual and ceremony, the Great Awakening made Christianity intensely personal to the average person by fostering a deep sense of spiritual conviction and redemption, and by encouraging introspection and a commitment to a new standard of personal morality. It reached people who were already church members. It changed their rituals, their piety and their self-awareness. To the evangelical imperatives of Reformation Protestantism, 18th-century American Christians added emphases on divine outpourings of the Holy Spirit and conversions that implanted within new believers an intense love for God. Revivals encapsulated those hallmarks and forwarded the newly created Evangelicalism into the early republic. By the 1790s, the Evangelical party in the Church of England remained a small minority but was not without influence. John Newton and Joseph Milner were influential evangelical clerics. Evangelical clergy networked together through societies such as the Eclectic Society in London and the Elland Society in Yorkshire. The Old Dissenter denominations (the Baptists, Congregationalists and Quakers) were falling under evangelical influence, with the Baptists most affected and Quakers the least. Evangelical ministers dissatisfied with both Anglicanism and Methodism often chose to work within these churches. In the 1790s, all of these evangelical groups, including the Anglicans, were Calvinist in orientation. Methodism (the "New Dissent") was the most visible expression of evangelicalism by the end of the 18th century. The Wesleyan Methodists boasted around 70,000 members throughout the British Isles, in addition to the Calvinistic Methodists in Wales and the Countess of Huntingdon's Connexion, which was organized under George Whitefield's influence. The Wesleyan Methodists, however, were still nominally affiliated with the Church of England and would not completely separate until 1795, four years after Wesley's death. The Wesleyan Methodist Church's Arminianism distinguished it from the other evangelical groups. At the same time, evangelicals were an important faction within the Presbyterian Church of Scotland. Influential ministers included John Erskine, Henry Wellwood Moncrieff and Stevenson Macgill. The church's General Assembly, however, was controlled by the Moderate Party, and evangelicals were involved in the First and Second Secessions from the national church during the 18th century. 19th century The start of the 19th century saw an increase in missionary work and many of the major missionary societies were founded around this time (see Timeline of Christian missions). Both the Evangelical and high church movements sponsored missionaries. The Second Great Awakening (which actually began in 1790) was primarily an American revivalist movement and resulted in substantial growth of the Methodist and Baptist churches. Charles Grandison Finney was an important preacher of this period. 
In Britain, in addition to stressing the traditional Wesleyan combination of "Bible, cross, conversion, and activism", the revivalist movement sought a universal appeal, hoping to include rich and poor, urban and rural, and men and women. Special efforts were made to attract children and to generate literature to spread the revivalist message. "Christian conscience" was used by the British Evangelical movement to promote social activism. Evangelicals believed activism in government and the social sphere was an essential method in reaching the goal of eliminating sin in a world drenched in wickedness. The Evangelicals in the Clapham Sect included figures such as William Wilberforce who successfully campaigned for the abolition of slavery. In the late 19th century, the revivalist Wesleyan-Holiness movement 
based on John Wesley's doctrine of "entire sanctification" came to the forefront, and while many adherents remained within mainline Methodism, others established new denominations, such as the Free Methodist Church and Wesleyan Methodist Church. In urban Britain the Holiness message was less exclusive and censorious. Keswickianism taught the doctrine of the second blessing in non-Methodist circles and came to influence evangelicals of the Calvinistic (Reformed) tradition, leading to the establishment of denominations such as the Christian and Missionary Alliance. John Nelson Darby of the Plymouth Brethren was a 19th-century Irish Anglican minister who devised modern dispensationalism, an innovative Protestant theological interpretation of the Bible that was incorporated in the development of modern Evangelicalism. Cyrus Scofield further promoted the influence of dispensationalism through the explanatory notes to his Scofield Reference Bible. According to scholar Mark S. Sweetnam, who takes a cultural studies perspective, dispensationalism can be defined in terms of its Evangelicalism, its insistence on the literal interpretation of Scripture, its recognition of stages in God's dealings with humanity, its expectation of the imminent return of Christ to rapture His saints, and its focus on both apocalypticism and premillennialism. During the 19th century, megachurches, churches with more than 2,000 people, began to develop. The first evangelical megachurch, the Metropolitan Tabernacle with its 6,000-seat auditorium, was inaugurated in 1861 in London by Charles Spurgeon. Dwight L. Moody founded the Illinois Street Church in Chicago. An advanced theological perspective came from the Princeton theologians from the 1850s to the 1920s, such as Charles Hodge, Archibald Alexander and B. B. Warfield. 20th century After 1910 the Fundamentalist movement dominated Evangelicalism in the early part of the 20th century; the Fundamentalists rejected liberal theology and emphasized the inerrancy of the Scriptures. 
Following the 1904–1905 Welsh revival, the Azusa Street Revival in 1906 began the spread of Pentecostalism in North America. The 20th century was also marked by the emergence of televangelism. Aimee Semple McPherson, who founded the megachurch Angelus Temple in Los Angeles, used radio in the 1920s to reach a wider audience. After the Scopes trial in 1925, Christian Century wrote of "Vanishing Fundamentalism." In 1929 Princeton Theological Seminary, once the bastion of conservative theology, added several modernists to its faculty, resulting in the departure of J. Gresham Machen and a split in the Presbyterian Church in the United States of America. Evangelicalism began to reassert itself in the second half of the 1930s. One factor was the advent of the radio as a means of mass communication. When Charles E. Fuller began his "Old Fashioned Revival Hour" on October 3, 1937, he sought to avoid the contentious issues that had caused fundamentalists to be characterized as narrow. One hundred forty-seven representatives from thirty-four denominations met from April 7 through 9, 1942, in St. Louis, Missouri, for a "National Conference for United Action among Evangelicals." The next year six hundred representatives in Chicago established the National Association of Evangelicals (NAE) with Harold Ockenga as its first president. The NAE was partly a reaction to the founding of the American Council of Christian Churches (ACCC) under the leadership of the fundamentalist Carl McIntire. The ACCC in turn had been founded to counter the influence of the Federal Council of Churches (later merged into the National Council of Churches), which fundamentalists saw as increasingly embracing modernism in its ecumenism. Those who established the NAE had come to view the name fundamentalist as "an embarrassment instead of a badge of honor." Evangelical revivalist radio preachers organized themselves in the National Religious Broadcasters in 1944 in order to regulate their activity. With the founding of the NAE, American Protestantism was divided into three large groups—the fundamentalists, the modernists, and the new evangelicals, who sought to position themselves between the other two. In 1947 Harold Ockenga coined the term neo-evangelicalism to identify a movement distinct from fundamentalism. The neo-evangelicals had three broad characteristics that distinguished them from the conservative fundamentalism of the ACCC, and each of these characteristics took concrete shape by the mid-1950s. In 1947 Carl F. H. Henry's book The Uneasy Conscience of Fundamentalism called on evangelicals to engage in addressing social concerns. In the same year Fuller Theological Seminary was established with Ockenga as its president and Henry as the head of its theology department. The strongest impetus, however, was the development of the work of Billy Graham. Graham had begun his career with the support of McIntire and fellow conservatives Bob Jones Sr. and John R. Rice. However, in broadening the reach of his London crusade of 1954, he accepted the support of denominations that those men disapproved of. When he went even further in his 1957 New York crusade, conservatives strongly condemned him and withdrew their support. According to William Martin, a fourth development—the founding of Christianity Today (CT) with Henry as its first editor—was strategic in giving neo-evangelicals a platform to promote their views and in positioning them between the fundamentalists and modernists.
In a letter to Harold Lindsell, Graham described the role he hoped CT would play. The post-war period also saw growth of the ecumenical movement and the founding of the World Council of Churches, which the Evangelical community generally regarded with suspicion. In the United Kingdom, John Stott (1921–2011) and Martyn Lloyd-Jones (1899–1981) emerged as key leaders in Evangelical Christianity. The charismatic movement began in the 1960s and resulted in the introduction of Pentecostal theology and practice into many mainline denominations. New charismatic groups such as the Association of Vineyard Churches and Newfrontiers trace their roots to this period (see also British New Church Movement). The closing years of the 20th century saw controversial postmodern influences entering some parts of Evangelicalism, particularly with the emerging church movement. Also controversial is the relationship between spiritualism and the contemporary military metaphors and practices that animate many branches of Christianity and that are especially relevant in the sphere of Evangelicalism. Spiritual warfare is the latest iteration in a long-standing partnership between religious organization and militarization, two spheres that are rarely considered together, although aggressive forms of prayer have long been used to further the aims of expanding Evangelical influence. Major moments of increased political militarization have occurred concurrently with the growth of prominence of militaristic imagery in evangelical communities. This paradigmatic language, paired with an increasing reliance on sociological and academic research to bolster militarized sensibility, serves to illustrate the violent ethos that effectively underscores militarized forms of evangelical prayer. 21st century In Nigeria, evangelical megachurches, such as Redeemed Christian Church of God and Living Faith Church Worldwide, have built autonomous cities with houses, supermarkets, banks, universities, and power plants. Evangelical Christian film production societies were founded in the early 2000s, such as Sherwood Pictures and Pure Flix. The growth of evangelical churches continues with the construction of new places of worship and the enlargement of existing ones in various regions of the world. Global statistics According to a 2011 Pew Forum study on global Christianity, 285,480,000, or 13.1 percent of all Christians, are Evangelicals. These figures do not include the Pentecostal and Charismatic movements. The study notes that the "Evangelical" category is not exclusive of the "Pentecostal and Charismatic" categories, since some believers consider themselves part of both movements where their church is affiliated with an Evangelical association. As of 2015, the World Evangelical Alliance was "a network of churches in 129 nations that have each formed an Evangelical alliance and over 100 international organizations joining together to give a world-wide identity, voice, and platform to more than 600 million Evangelical Christians". The Alliance was formed in 1951 by Evangelicals from 21 countries and has worked to help its members cooperate globally. According to Sébastien Fath of CNRS, in 2016 there were 619 million Evangelicals in the world, one in four Christians; in 2017 there were about 630 million, an increase of 11 million, including Pentecostals. Operation World estimates the number of Evangelicals at 545.9 million, which makes for 7.9 percent of the world's population.
From 1960 to 2000, the number of reported Evangelicals worldwide grew at three times the world's population growth rate, and at twice the rate of Islam. According to Operation World, the Evangelical population's current annual growth rate is 2.6 percent, still more than twice the world's population growth rate. Africa In the 21st century, there are Evangelical churches active in Sudan, Angola, Mozambique, Zimbabwe, Malawi, Rwanda, Uganda, Ghana, Kenya, Zambia, South Africa, and Nigeria. They have grown especially since independence came in the 1960s; the strongest movements are based on Pentecostal-charismatic beliefs. There is a wide range of theology and organizations, including some sponsored by European missionaries and others that have emerged from African culture, such as the Apostolic and Zionist Churches, which enlist 40 percent of black South Africans, and their Aladura counterparts in western Africa. In Nigeria the Evangelical Church Winning All (formerly "Evangelical Church of West Africa") is the largest church organization, with five thousand congregations and over three million members. It sponsors two seminaries and eight Bible colleges, and 1,600 missionaries who serve in Nigeria and other countries with the Evangelical Missionary Society (EMS). There have been serious confrontations since 1999 between Muslims and Christians standing in opposition to the expansion of Sharia law in northern Nigeria. The confrontation has radicalized and politicized the Christians. Violence has been escalating. In Kenya, mainstream Evangelical denominations have taken the lead in promoting political activism and backing candidates, with the smaller Evangelical sects of less importance. Daniel arap Moi was president from 1978 to 2002 and claimed to be an Evangelical; he proved intolerant of dissent, pluralism, and decentralization of power. The Berlin Missionary Society (BMS) was one of four German Protestant mission societies active in South Africa before 1914. It emerged from the German tradition of Pietism after 1815 and sent its first missionaries to South Africa in 1834. There were few positive reports in the early years, but it was especially active 1859–1914. It was especially strong in the Boer republics. The First World War cut off contact with Germany, but the missions continued at a reduced pace. After 1945 the missionaries had to deal with decolonization across Africa and especially with the apartheid government. At all times the BMS emphasized spiritual inwardness and values such as morality, hard work and self-discipline. It proved unable to speak and act decisively against injustice and racial discrimination and was disbanded in 1972. Since 1974, young professionals have been the active proselytizers of Evangelicalism in the cities of Malawi. In Mozambique, Evangelical Protestant Christianity emerged around 1900 from black migrants who had previously converted in South Africa. They were assisted by European missionaries, but, as industrial workers, they paid for their own churches and proselytizing. They prepared southern Mozambique for the spread of Evangelical Protestantism. During its time as a colonial power in Mozambique, the Catholic Portuguese government tried to counter the spread of Evangelical Protestantism.
East African Revival The East African Revival was a renewal movement within Evangelical churches in East Africa during the late 1920s and 1930s. It began at a Church Missionary Society mission station in the Belgian territory of Ruanda-Urundi in 1929 and spread to Uganda, Tanzania and Kenya during the 1930s and 1940s, contributing to the significant growth of the church in East Africa through the 1970s and having a visible influence on Western missionaries who were observer-participants in the movement. Latin America In modern Latin America, the term "Evangelical" is often simply a synonym for "Protestant". Brazil Protestantism in Brazil largely originated with German immigrants and British and American missionaries in the 19th century, following up on efforts that began in the 1820s. In the late nineteenth century, while the vast majority of Brazilians were nominal Catholics, the nation was underserved by priests, and for large numbers their religion was only nominal. The Catholic Church in Brazil was de-established in 1890, and responded by increasing the number of dioceses and the efficiency of its clergy. Many Protestants came from a large German immigrant community, but they were seldom engaged in proselytism and grew mostly by natural increase. Methodists were active along with Presbyterians and Baptists. The Scottish missionary Dr. Robert Reid Kalley, with support from the Free Church of Scotland, moved to Brazil in 1855, founding the first Evangelical church among the Portuguese-speaking population there in 1856. It was organized according to the Congregational polity as the Igreja Evangélica Fluminense; it became the mother church of Congregationalism in Brazil. The Seventh-day Adventists arrived in 1894, and the YMCA was organized in 1896. The missionaries promoted schools, colleges and seminaries, including a liberal arts college in São Paulo, later known as Mackenzie, and an agricultural school in Lavras. The Presbyterian schools in particular later became the nucleus of the governmental school system. In 1887 Protestants in Rio de Janeiro founded a hospital. The missionaries largely reached a working-class audience, as the Brazilian upper class was wedded either to Catholicism or to secularism. By 1914, Protestant churches founded by American missionaries had 47,000 communicants, served by 282 missionaries. In general, these missionaries were more successful than they had been in Mexico, Argentina or elsewhere in Latin America. There were 700,000 Protestants by 1930, and increasingly they were in charge of their own affairs. In 1930, the Methodist Church of Brazil became independent of the missionary societies and elected its own bishop. Protestants were largely working-class, but their religious networks helped speed their upward social mobility. Protestants accounted for fewer than 5 percent of the population until the 1960s, but grew exponentially by proselytizing, and by 2000 they made up over 15 percent of Brazilians affiliated with a church.
The euphonium is a medium-sized, 3 or 4-valve, often compensating, conical-bore, tenor-voiced brass instrument that derives its name from the Ancient Greek word euphōnos, meaning "well-sounding" or "sweet-voiced" (eu means "well" or "good" and phōnē means "sound", hence "of good sound"). The euphonium is a valved instrument. Nearly all current models have piston valves, though some models with rotary valves do exist. The euphonium may be played in bass clef as a non-transposing instrument or in treble clef as a transposing instrument. In British brass bands, it is typically treated as a treble-clef instrument, while in American band music, parts may be written in either treble clef or bass clef, or both. Name The euphonium is in the family of brass instruments, more particularly low-brass instruments with many relatives. It is extremely similar to a baritone horn. The difference is that the bore size of the baritone horn is typically smaller than that of the euphonium, and the baritone has a primarily cylindrical bore, whereas the euphonium has a predominantly conical bore. It is controversial whether this is sufficient to make them two different instruments.
In the trombone family, large- and small-bore trombones are both called trombones, while the cylindrical trumpet and the conical flugelhorn are given different names. As with the trumpet and flugelhorn, the two instruments are easily doubled by one player, with some modification of breath and embouchure, since the two have identical range and essentially identical fingering. The cylindrical baritone offers a brighter sound and the conical euphonium offers a more mellow sound. The American baritone, featuring three valves on the front of the instrument and a curved, forward-pointing bell, was dominant in American school bands throughout most of the 20th century, its weight, shape, and configuration conforming to the needs of the marching band. While this instrument is a conical-cylindrical bore hybrid, somewhere between the classic baritone horn and euphonium, it was almost universally labelled a "baritone" by both band directors and composers, thus contributing to the confusion of terminology in the United States. Several late 19th century music catalogs (such as Pepper and Lyon & Healy) sold a euphonium-like instrument called the "B♭ bass" (to distinguish it from the E♭ and BB♭ bass). In these catalog drawings, the B♭ bass had thicker tubing than the baritone; both had three valves. Along the same lines, drum and bugle corps introduced the "bass-baritone", and distinguished it from the baritone. The thicker tubing of the three-valve B♭ bass allowed for production of strong false tones, providing chromatic access to the pedal register. Ferdinand Sommer's original name for the instrument was the euphonion. It is sometimes called the tenor tuba in B♭, although this can also refer to other varieties of tuba. Names in other languages, as included in scores, can be ambiguous as well. They include French basse, saxhorn basse, and tuba basse; German Baryton, Tenorbass, and Tenorbasshorn; Italian baritono, bombardino, eufonio, and flicorno basso. The most common German name, Baryton, may have influenced Americans to adopt the name "baritone" for the instrument, due to the influx of German musicians to the United States in the nineteenth century. History and development As a baritone-voiced brass instrument, the euphonium traces its ancestry to the ophicleide and ultimately back to the serpent. The search for a satisfactory foundational wind instrument that could support massed sound above its pitch took many years. While the serpent was used for over two centuries dating back to the late Renaissance, it was notoriously difficult to control its pitch and tone quality due to its disproportionately small open finger holes. The ophicleide, which was used in bands and orchestras for a few decades in the early to mid-19th century, used a system of keys and was an improvement over the serpent but was still unreliable, especially in the high register. With the invention of the piston valve system in 1818, the construction of brass instruments with an even sound and facility of playing in all registers became possible. The euphonium is said to have been invented, as a "wide-bore, valved bugle of baritone range", by Ferdinand Sommer of Weimar in 1843, though Carl Moritz in 1838 and Adolphe Sax in 1843 have also been credited. While Sax's family of saxhorns were invented at about the same time and the bass saxhorn is very similar to a euphonium, there are also differences—such as the bass saxhorn being narrower throughout the length of the instrument.
The "British-style" compensating euphonium was developed in 1874 by David Blaikley, of Boosey & Co, and has been in use in Britain since then, with the basic construction little changed. Modern-day euphonium makers have been working to further enhance the construction of the instrument. Companies such as Adams and Besson have been leading the way in that respect. Adams euphoniums have developed an adjustable lead-pipe receiver, which allows players to change the timbre of the instrument to whatever they find preferable. Besson has been credited with the introducing an adjustable main tuning-slide trigger, which allows players more flexibility with intonation. Construction and general characteristics The euphonium, like the tenor trombone, is pitched in concert B. For a valved brass instrument like the euphonium, this means that when no valves are in use the instrument will produce partials of the B harmonic series. It is generally orchestrated as a non-transposing instrument like the trombone, written at concert pitch in the bass clef with higher passages in the tenor clef. Treble clef euphonium parts transposing down a major ninth are included in much concert band music: in the British-style brass band tradition, euphonium music is always written this way. In continental European band music, parts for the euphonium may be written in the bass clef as a B transposing instrument sounding a major second lower than written. Professional models have three top-action valves, played with the first three fingers of the right hand, plus a "compensating" fourth valve, generally found midway down the right side of the instrument, played with the left index finger; such an instrument is shown at the top of this page. Beginner models often have only the three top-action valves, while some intermediate "student" models may have a fourth top-action valve, played with the fourth finger of the right hand. Compensating systems are expensive to build, and there is in general a substantial difference in price between compensating and non-compensating models. For a thorough discussion of the valves and the compensation system, see the article on brass instruments. The euphonium has an extensive range, comfortably from E2 to about E4 for intermediate players (using scientific pitch notation). In professional hands this may extend from B0 to as high as B5. The lowest notes obtainable depend on the valve set-up of the instrument. All instruments are chromatic down to E2, but four-valved instruments extend that down to at least C2. Non-compensating four-valved instruments suffer from intonation problems from E2 down to C2 and cannot produce the low B1; compensating instruments do not have such intonation problems and can play the low B1. From B1 down lies the "pedal range", i.e., the fundamentals of the instrument's harmonic series. They are easily produced on the euphonium as compared to other brass instruments, and the extent of the range depends on the make of the instrument in exactly the same way as just described. Thus, on a compensating four-valved instrument, the lowest note possible is B0, sometimes called double pedal B, which is six ledger lines below the bass clef. 
As with the other conical-bore instruments, the cornet, flugelhorn, horn, and tuba, the euphonium's tubing (excepting the tubing in the valve section, which is necessarily cylindrical) gradually increases in diameter throughout its length, resulting in a softer, gentler tone compared to cylindrical-bore instruments such as the trumpet, trombone, sudrophone, and baritone horn. While a truly characteristic euphonium sound is rather hard to define precisely, most players would agree that an ideal sound is dark, rich, warm, and velvety, with virtually no hardness to it. This also has to do with the different models preferred by British and American players. Though the euphonium's fingerings are no different from those of the trumpet or tuba, beginning euphoniumists will likely experience significant problems with intonation, response and range compared to other beginning brass players. Types Compensating The compensating euphonium is common among professionals. It utilizes a three-plus-one-valve system with three upright valves and one side valve. The compensating valve system uses extra tubing, usually coming off the back of the three upright valves, in order to achieve proper intonation in the lower range of the instrument, from E2 down to B1. Not all four-valve and three-plus-one-valve euphoniums are compensating. Only those designed with extra tubing are compensating. There were, at one time, three-valve compensating euphoniums available. This configuration utilized extra tubing, just as the three-plus-one compensating models did, in order to bring the notes C2 and B1 in tune. This three-valve compensating configuration is still available in British-style baritone horns, usually on professional models. Double-bell A creation unique to the United States was the double-bell euphonium, featuring a second smaller bell in addition to the main one; the player could switch bells for certain passages or even for individual notes by use of an additional valve, operated with the left hand. Ostensibly, the smaller bell was intended to emulate the sound of a trombone.
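A rough sketch of why compensation is needed at all (the unit tube length and the code are illustrative, not measurements of any real instrument): each valve slide is cut to lower the open horn by a fixed interval, but the length required by a valve combination grows multiplicatively, so the combined slides always come up short and the pitch runs sharp. Compensating loops route the air through extra tubing to make up the difference.

    # Tubing provided by valves 1-2-3 together vs. what the combined
    # six-semitone drop actually requires (open-horn length = 1.0 unit).
    SEMITONE = 2 ** (1 / 12)

    def extra_tubing(semitones: int, base: float = 1.0) -> float:
        """Added length needed to lower a tube of length `base` by `semitones`."""
        return base * (SEMITONE ** semitones - 1)

    # Valves 1, 2 and 3 lower the pitch by 2, 1 and 3 semitones respectively.
    provided = extra_tubing(2) + extra_tubing(1) + extra_tubing(3)
    required = extra_tubing(2 + 1 + 3)
    print(f"provided {provided:.3f} units, required {required:.3f} units")
    # provided (0.371) < required (0.414): the 1-2-3 combination plays sharp.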
One can take a suitable branch of the logarithm of an entire function that never hits 0; this branch is itself entire (according to the Weierstrass factorization theorem). The logarithm hits every complex number except possibly one number, which implies that the first function will hit any value other than 0 an infinite number of times. Similarly, a non-constant, entire function that does not hit a particular value will hit every other value an infinite number of times. Liouville's theorem is a special case of the following statement: an entire function $f$ satisfying the inequality $|f(z)|\le M|z|^n$ for some constant $M$ and all $z$ with $|z|$ sufficiently large is necessarily a polynomial of degree at most $n$. Growth Entire functions may grow as fast as any increasing function: for any increasing function $g:[0,\infty)\to[0,\infty)$ there exists an entire function $f$ such that $f(x)>g(|x|)$ for all real $x$. Such a function may easily be found of the form $$f(z)=c+\sum_{k=1}^{\infty}\left(\frac{z}{k}\right)^{n_k}$$ for a constant $c$ and a strictly increasing sequence of positive integers $n_k$. Any such sequence defines an entire function $f(z)$, and if the powers are chosen appropriately we may satisfy the inequality $f(x)>g(|x|)$ for all real $x$. (For instance, it certainly holds if one chooses $c:=g(2)$ and, for any integer $k\ge 1$, one chooses an even exponent $n_k$ such that $\left(\frac{k+1}{k}\right)^{n_k}\ge g(k+2)$.) Order and type The order (at infinity) of an entire function $f(z)$ is defined using the limit superior as $$\rho=\limsup_{r\to\infty}\frac{\ln\ln\|f\|_{\infty,B_r}}{\ln r},$$ where $B_r$ is the disk of radius $r$ and $\|f\|_{\infty,B_r}$ denotes the supremum norm of $f(z)$ on $B_r$. The order is a non-negative real number or infinity (except when $f(z)=0$ for all $z$). In other words, the order of $f(z)$ is the infimum of all $m$ such that $f(z)=O(\exp(|z|^m))$ as $z\to\infty$. The example of $f(z)=\exp(2z^2)$ shows that this does not mean $f(z)=O(\exp(|z|^m))$ if $f$ is of order $m$. If $0<\rho<\infty$, one can also define the type: $$\sigma=\limsup_{r\to\infty}\frac{\ln\|f\|_{\infty,B_r}}{r^{\rho}}.$$ If the order is 1 and the type is $\sigma$, the function is said to be "of exponential type $\sigma$". If it is of order less than 1 it is said to be of exponential type 0. If $f(z)=\sum_{n=0}^{\infty}a_n z^n$, then the order and type can be found by the formulas $$\rho=\limsup_{n\to\infty}\frac{n\ln n}{-\ln|a_n|},\qquad(\sigma e\rho)^{1/\rho}=\limsup_{n\to\infty}n^{1/\rho}|a_n|^{1/n}.$$ Let $f^{(n)}$ denote the $n$-th derivative of $f$; then we may restate these formulas in terms of the derivatives at any arbitrary point $z_0$, for instance $$\rho=\limsup_{n\to\infty}\frac{n\ln n}{n\ln n-\ln\left|f^{(n)}(z_0)\right|},$$ with an analogous formula for the type. The type may be infinite, as in the case of the reciprocal gamma function, or zero (see the examples below). Examples Here are some examples of functions of various orders. Order $\rho$: for arbitrary positive numbers $\rho$ and $\sigma$ one can construct an example of an entire function of order $\rho$ and type $\sigma$ using $$f(z)=\sum_{n=1}^{\infty}\left(\frac{e\rho\sigma}{n}\right)^{n/\rho}z^n.$$ Order 0: non-zero polynomials. Order 1/4: $f(\sqrt[4]{z})$ where $f(u)=\cos u+\cosh u$. Order 1/3: $f(\sqrt[3]{z})$ where $f(u)=e^{u}+e^{\omega u}+e^{\omega^{2}u}$, with $\omega$ a complex cube root of 1. Order 1/2: $\cos(a\sqrt{z})$ with $a\neq 0$ (for which the type is given by $\sigma=|a|$). Order 1: $\exp(az)$ with $a\neq 0$ ($\sigma=|a|$), the Bessel function $J_0(z)$, and the reciprocal gamma function $1/\Gamma(z)$ ($\sigma$ is infinite). Order 3/2: the Airy function $\operatorname{Ai}(z)$. Order 2: $\exp(az^{2})$ with $a\neq 0$ ($\sigma=|a|$), and the Barnes G-function ($\sigma$ is infinite). Order infinity: $\exp(\exp(z))$. Genus Entire functions of finite order have Hadamard's canonical representation $$f(z)=z^{m}e^{P(z)}\prod_{n=1}^{\infty}\left(1-\frac{z}{z_{n}}\right)\exp\left(\frac{z}{z_{n}}+\frac{1}{2}\left(\frac{z}{z_{n}}\right)^{2}+\cdots+\frac{1}{p}\left(\frac{z}{z_{n}}\right)^{p}\right),$$ where the $z_{n}$ are those roots of $f$ that are not zero ($z_{n}\neq 0$), $m$ is the order of the zero of $f$ at $z=0$ (the case $m=0$ being taken to mean $f(0)\neq 0$), $P$ is a polynomial (whose degree we shall call $q$), and $p$ is the smallest non-negative integer such that the series $$\sum_{n=1}^{\infty}\frac{1}{|z_{n}|^{p+1}}$$ converges. The non-negative integer $g=\max\{p,q\}$ is called the genus of the entire function $f$. If the order $\rho$ is not an integer, then $g$ is the integer part of $\rho$. If the order is a positive integer, then there are two possibilities: $g=\rho-1$ or $g=\rho$. For example, $\sin$, $\cos$ and $\exp$ are entire functions of genus 1. Other examples According to J. E. Littlewood, the Weierstrass sigma function is a 'typical' entire function. This statement can be made precise in the theory of random entire functions: the asymptotic behavior of almost all entire functions is similar to that of the sigma function. Other examples include the Fresnel integrals, the Jacobi theta function, and the reciprocal Gamma function. The exponential function and the error function are special cases of the Mittag-Leffler function. According to the fundamental theorem of Paley and Wiener, Fourier transforms of functions (or distributions) with bounded support are entire functions of order 1 and finite type.
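As a quick numerical check of the coefficient formula for the order (a sketch under our own naming, not from the source text), the estimates $n\ln n/(-\ln|a_n|)$ for the exponential function, whose Taylor coefficients are $a_n=1/n!$, drift toward its order 1:

    import math

    # Estimate the order of exp(z) from its Taylor coefficients a_n = 1/n!
    # via  rho = limsup_n  n ln n / (-ln |a_n|).
    def order_estimate(n: int) -> float:
        log_abs_an = -math.lgamma(n + 1)   # ln|a_n| = -ln(n!)
        return n * math.log(n) / -log_abs_an

    for n in (10, 100, 1000, 10000):
        print(n, round(order_estimate(n), 3))
    # 1.524, 1.266, 1.168, 1.122, ... slowly decreasing toward rho = 1.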
Other examples are solutions of linear differential equations with polynomial coefficients. If the coefficient at the highest derivative is constant, then all solutions of such equations are entire functions. If the real part of an entire function is known in a neighborhood of a point, then the function is determined up to an imaginary constant. (For instance, if the real part is known on part of the unit circle, then it is known on the whole unit circle by analytic extension, and then the coefficients of the infinite series are determined from the coefficients of the Fourier series for the real part on the unit circle.) Note however that an entire function is not determined by its real part on all curves. In particular, if the real part is given on any curve in the complex plane where the real part of some other entire function is zero, then any multiple of that function can be added to the function we are trying to determine. For example, if the curve where the real part is known is the real line, then we can add $i$ times any self-conjugate function. If the curve forms a loop, then the function is determined by its real part on the loop, since the only functions whose real part is zero on the curve are those that are everywhere equal to some imaginary number. The Weierstrass factorization theorem asserts that any entire function can be represented by a product involving its zeroes (or "roots"). The entire functions on the complex plane form an integral domain (in fact a Prüfer domain). They also form a commutative unital associative algebra over the complex numbers. Liouville's theorem states that any bounded entire function must be constant. Liouville's theorem may be used to elegantly prove the fundamental theorem of algebra. As a consequence of Liouville's theorem, any function that is entire on the whole Riemann sphere (complex plane and the point at infinity) is constant. Thus any non-constant entire function must have a singularity at the complex point at infinity, either a pole for a polynomial or an essential singularity for a transcendental entire function. Specifically, by the Casorati–Weierstrass theorem, for any transcendental entire function $f$ and any complex $w$ there is a sequence $(z_n)$ such that $|z_n|\to\infty$ and $f(z_n)\to w$. Picard's little theorem is a much stronger result: any non-constant entire function takes on every complex number as value, possibly with a single exception. When an exception exists, it is called a lacunary value of the function. The possibility of a lacunary value is illustrated by the exponential function, which never takes on the value 0. One can take a suitable branch of the logarithm of an entire function that never hits 0, so that this will also be an entire function (according to the Weierstrass factorization theorem).
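The parenthetical recipe above, reading the series coefficients off the Fourier series of the real part on the unit circle, can be verified numerically. A minimal sketch, assuming NumPy, with a degree-7 polynomial standing in for the entire function (the names and the sample count 256 are ours): for $n\ge 1$ the $n$-th Fourier coefficient of $\operatorname{Re}f(e^{it})$ equals $c_n/2$.

    import numpy as np

    rng = np.random.default_rng(0)
    c = np.zeros(8, dtype=complex)
    c[0] = 1.5                          # c_0 must be taken real: the imaginary
    c[1:] = rng.normal(size=7) + 1j * rng.normal(size=7)  # part is undetermined

    t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    f_on_circle = sum(cn * np.exp(1j * n * t) for n, cn in enumerate(c))
    re_f = f_on_circle.real             # the only data we keep

    fourier = np.fft.fft(re_f) / t.size   # coefficient of e^{+int} at index n
    assert np.isclose(fourier[0].real, c[0])
    assert np.allclose(2 * fourier[1:8], c[1:])  # c_n recovered for n >= 1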
Aldous Huxley describes the essay as "a literary device for saying almost everything about almost anything", and adds that "by tradition, almost by definition, the essay is a short piece". Furthermore, Huxley argues that "essays belong to a literary species whose extreme variability can be studied most effectively within a three-poled frame of reference". These three poles (or worlds in which the essay may exist) are: The personal and the autobiographical: The essayists that feel most comfortable in this pole "write fragments of reflective autobiography and look at the world through the keyhole of anecdote and description". The objective, the factual, and the concrete particular: The essayists that write from this pole "do not speak directly of themselves, but turn their attention outward to some literary or scientific or political theme. Their art consists of setting forth, passing judgment upon, and drawing general conclusions from the relevant data". The abstract-universal: In this pole "we find those essayists who do their work in the world of high abstractions", who are never personal and who seldom mention the particular facts of experience. Huxley adds that the most satisfying essays "...make the best not of one, not of two, but of all the three worlds in which it is possible for the essay to exist." History Montaigne Montaigne's "attempts" grew out of his commonplacing. Inspired in particular by the works of Plutarch, a translation of whose Œuvres Morales (Moral works) into French had just been published by Jacques Amyot, Montaigne began to compose his essays in 1572; the first edition, entitled Essais, was published in two volumes in 1580. For the rest of his life, he continued revising previously published essays and composing new ones. A third volume was published posthumously; together, their over 100 examples are widely regarded as the predecessor of the modern essay. Europe While Montaigne's philosophy was admired and copied in France, none of his most immediate disciples tried to write essays. But Montaigne, who liked to fancy that his family (the Eyquem line) was of English extraction, had spoken of the English people as his "cousins", and he was early read in England, notably by Francis Bacon. Bacon's essays, published in book form in 1597 (only five years after the death of Montaigne; this first edition contained the first ten of his essays), 1612, and 1625, were the first works in English that described themselves as essays. Ben Jonson first used the word essayist in 1609, according to the Oxford English Dictionary. Other English essayists included Sir William Cornwallis, who published essays in 1600 and 1617 that were popular at the time, Robert Burton (1577–1641) and Sir Thomas Browne (1605–1682). In Italy, Baldassare Castiglione wrote about courtly manners in his essay Il Cortigiano. In the 17th century, the Spanish Jesuit Baltasar Gracián wrote about the theme of wisdom. In England, during the Age of Enlightenment, essays were a favored tool of polemicists who aimed at convincing readers of their position; they also featured heavily in the rise of periodical literature, as seen in the works of Joseph Addison, Richard Steele and Samuel Johnson. Addison and Steele used the journal Tatler (founded in 1709 by Steele) and its successors as storehouses of their work, and they became the most celebrated eighteenth-century essayists in England. Johnson's essays appear during the 1750s in various similar publications.
As a result of the focus on journals, the term also acquired a meaning synonymous with "article", although the content may not fit the strict definition. On the other hand, Locke's An Essay Concerning Human Understanding is not an essay at all, or a cluster of essays, in the technical sense, but its title still refers to the experimental and tentative nature of the inquiry which the philosopher was undertaking. In the 18th and 19th centuries, Edmund Burke and Samuel Taylor Coleridge wrote essays for the general public. The early 19th century, in particular, saw a proliferation of great essayists in English—William Hazlitt, Charles Lamb, Leigh Hunt and Thomas de Quincey all penned numerous essays on diverse subjects, reviving the earlier graceful style. Later in the century, Robert Louis Stevenson also raised the form's literary level. In the 20th century, a number of essayists, such as T.S. Eliot, tried to explain the new movements in art and culture by using essays. Virginia Woolf, Edmund Wilson, and Charles du Bos wrote literary criticism essays. In France, several writers produced longer works with the title of essai that were not true examples of the form. However, by the mid-19th century, the Causeries du lundi, newspaper columns by the critic Sainte-Beuve, were literary essays in the original sense. Other French writers followed suit, including Théophile Gautier, Anatole France, Jules Lemaître and Émile Faguet. Japan As with the novel, essays existed in Japan several centuries before they developed in Europe, with a genre of essays known as zuihitsu—loosely connected essays and fragmented ideas. Zuihitsu have existed since almost the beginnings of Japanese literature. Many of the most noted early works of Japanese literature are in this genre. Notable examples include The Pillow Book (c. 1000), by court lady Sei Shōnagon, and Tsurezuregusa (1330), by the particularly renowned Japanese Buddhist monk Yoshida Kenkō. Kenkō described his short writings similarly to Montaigne, referring to them as "nonsensical thoughts" written in "idle hours". Another noteworthy difference from Europe is that women have traditionally written in Japan, though the more formal, Chinese-influenced writings of male writers were more prized at the time. China The eight-legged essay (Chinese: 八股文; pinyin: bāgǔwén; lit. 'eight bone text') was a style of essay in imperial examinations during the Ming and Qing dynasties in China. The eight-legged essay was required of test takers in these civil service tests to show their merits for government service, often focusing on Confucian thought and knowledge of the Four Books and Five Classics, in relation to governmental ideals. Test takers could not write in innovative or creative ways, but needed to conform to the standards of the eight-legged essay. Various skills were examined, including the ability to write coherently and to display basic logic. At certain times, candidates were expected to spontaneously compose poetry upon a set theme, a practice whose value was sometimes questioned and which was at times eliminated from the test material. This was a major argument in favor of the eight-legged essay: that it was better to eliminate creative art in favor of prosaic literacy. In the history of Chinese literature, the eight-legged essay is often said to have caused China's "cultural stagnation and economic backwardness" in the 19th century. Forms and styles This section describes the different forms and styles of essay writing.
These are used by an array of authors, including university students and professional essayists. Cause and effect The defining features of a "cause and effect" essay are causal chains that connect from a cause to an effect, careful language, and chronological or emphatic order. A writer using this rhetorical method must consider the subject, determine the purpose, consider the audience, think critically about different causes or consequences, consider a thesis statement, arrange the parts, consider the language, and decide on a conclusion. Classification and division Classification is the categorization of objects into a larger whole, while division is the breaking of a larger whole into smaller parts. Compare and contrast Compare and contrast essays are characterized by a basis for comparison, points of comparison, and analogies. The essay is grouped by object (chunking) or by point (sequential). Comparison highlights the similarities between two or more similar objects, while contrasting highlights the differences between two or more objects. When writing a compare/contrast essay, writers need to determine their purpose, consider their audience, consider the basis and points of comparison, consider their thesis statement, arrange and develop the comparison, and reach a conclusion. Compare and contrast is arranged emphatically. Expository An expository essay is used to inform, describe or explain a topic, using important facts to teach the reader. Mostly written in the third person, using "it", "he", "she", or "they", the expository essay uses formal language to discuss someone or something. Examples of expository essay topics are: a medical or biological condition, a social or technological process, or the life or character of a famous person. The writing of an expository essay often consists of the following steps: organizing thoughts (brainstorming), researching a topic, developing a thesis statement, writing the introduction, writing the body of the essay, and writing the conclusion. Expository essays are often assigned as a part of SAT and other standardized testing or as homework for high school and college students. Descriptive Descriptive writing is characterized by sensory details, which appeal to the physical senses, and details that appeal to a reader's emotional, physical, or intellectual sensibilities. Determining the purpose, considering the audience, creating a dominant impression, using descriptive language, and organizing the description are the rhetorical choices to consider when using a description. A description is usually arranged spatially but can also be chronological or emphatic. The focus of a description is the scene. Description uses tools such as denotative language, connotative language, figurative language, metaphor, and simile to arrive at a dominant impression. One university essay guide states that "descriptive writing says what happened or what another author has discussed; it provides an account of the topic". Lyric essays are an important form of descriptive essays. Dialectic In the dialectic form of the essay, which is commonly used in philosophy, the writer makes a thesis and argument, then objects to their own argument (with a counterargument), but then counters the counterargument with a final and novel argument. This form benefits from presenting a broader perspective while countering a possible flaw that some may present. This type is sometimes called an ethics paper.
Exemplification An exemplification essay is characterized by a generalization and relevant, representative, and believable examples, including anecdotes; this supporting text makes it clear to the reader why the argument or claim is as such. Narrative A narrative uses tools such as flashbacks, flash-forwards, and transitions that often build to a climax. The focus of a narrative is the plot. When creating a narrative, authors must determine their purpose, consider their audience, establish their point of view, use dialogue, and organize the narrative. A narrative is usually arranged chronologically. Argumentative An argumentative essay is a critical piece of writing, aimed at presenting objective analysis of the subject matter, narrowed down to a single topic. The main idea of all the criticism is to provide an opinion of either positive or negative implication. As such, a critical essay requires research and analysis, strong internal logic and sharp structure. Its structure normally builds around an introduction with a topic's relevance and a thesis statement, body paragraphs with arguments linking back to the main thesis, and a conclusion. In addition, an argumentative essay may include a refutation section where conflicting ideas are acknowledged, described, and criticized. Each argument of an argumentative essay should be supported with sufficient evidence, relevant to the point. Process A process essay is used for an explanation of making or breaking something. Often, it is written in chronological order or numerical order to show step-by-step processes. It has all the qualities of a technical document, with the only difference being that it is often written in the descriptive mood, while a technical document is mostly in the imperative mood. Economic An economic essay can start with a thesis, or it can start with a theme. It can take a narrative course and a descriptive course. It can even become an argumentative essay if the author feels the need. After the introduction, the author has to do his/her best to expose the economic matter at hand, to analyze it, evaluate it, and draw a conclusion. If the essay takes more of a narrative form, then the author has to expose each aspect of the economic puzzle in a way that makes it clear and understandable for the reader. Reflective A reflective essay is an analytical piece of writing in which the writer describes a real or imaginary scene, event, interaction, passing thought, memory, or form—adding a personal reflection on the meaning of the topic in the author's life. Thus, the focus is not merely descriptive. The writer doesn't just describe the situation, but revisits the scene with more detail and emotion to examine what went well, or reveal a need for additional learning—and may relate what transpired to the rest of the author's life. Other logical structures The logical progression and organizational structure of an essay can take many forms. Understanding how the movement of thought is managed through an essay has a profound impact on its overall cogency and ability to impress. A number of alternative logical structures for essays have been visualized as diagrams, making them easy to implement or adapt in the construction of an argument. Academic In countries like the United States and the United Kingdom, essays have become a major part of a formal education in the form of free response questions. Secondary students in these countries are taught structured essay formats to improve their writing skills, and essays are often used by universities in these countries in selecting applicants (see admissions essay).
In both secondary and tertiary education, essays are used to judge the mastery and comprehension of the material. Students are asked to explain, comment on, or assess a topic of study in the form of an essay. In some courses, university students must complete one or more essays over several weeks or months. In addition, in fields such as the humanities and social sciences, mid-term and end-of-term examinations often require students to write a short essay in two or three hours. In these countries, so-called academic essays, also called papers, are usually more formal than literary ones. They may still allow the presentation of the writer's own views, but this is done in a logical and factual manner, with the use of the first person often discouraged. Longer academic essays (with a word limit of between 2,000 and 5,000 words) are often more discursive. They sometimes begin with a short summary analysis of what has previously been written on a topic, which is often called a literature review. Longer essays may also contain an introductory page that defines words and phrases of the essay's topic. Most academic institutions require that all substantial facts, quotations, and other supporting material in an essay be referenced in a bibliography or works cited page at the end of the text. This scholarly convention helps others (whether teachers or fellow scholars) to understand the basis of facts and quotations the author uses to support the essay's argument. The bibliography also helps readers evaluate to what extent the argument is supported by evidence and to evaluate the quality of that evidence. The academic essay tests the student's ability to present their thoughts in an organized way and is designed to test their intellectual capabilities. One of the challenges facing universities is that in some cases, students may submit essays purchased from an essay mill (or "paper mill") as their own work. An "essay mill" is a ghostwriting service that sells pre-written essays to university and college students. Since plagiarism is a form of academic dishonesty or academic fraud, universities and colleges may investigate papers they suspect are from an essay mill by using plagiarism detection software, which compares essays against a database of known mill essays, and by orally testing students on the contents of their papers. Magazine or newspaper Essays often appear in magazines, especially magazines with an intellectual bent, such as The Atlantic and Harper's. Magazine and newspaper essays use many of the essay types described in the section on forms and styles (e.g., descriptive essays, narrative essays, etc.). Some newspapers also print essays in the op-ed section. Employment Employment essays detailing experience in a certain occupational field are required when applying for some jobs, especially government jobs in the United States. Essays known as Knowledge Skills and Executive Core Qualifications are required when applying to certain US federal government positions. A KSA, or "Knowledge, Skills, and Abilities," is a series of narrative statements that are required when applying to Federal government job openings in the United States. KSAs are used along with resumes to determine who the best applicants are when several candidates qualify for a job. The knowledge, skills, and abilities necessary for the successful performance of a position are contained in each job vacancy announcement.
KSAs are brief and focused essays about one's career and educational background that presumably qualify one to perform the duties of the position being applied for. An Executive Core Qualification, or ECQ, is a narrative statement that is required when applying to Senior Executive Service positions within the US Federal government. Like the KSAs, ECQs are used along with resumes to determine who the best applicants are when several candidates qualify for a job. The Office of Personnel Management has established five executive core qualifications that all applicants seeking to enter the Senior Executive Service must demonstrate. Non-literary types Film A film essay (or "cinematic essay") consists of the evolution of a theme or an idea rather than a plot per se, or the film literally being a cinematic accompaniment to a narrator reading an essay. From another perspective, an essay film could be defined as a documentary film's visual basis combined with a form of commentary that contains elements of self-portrait (rather than autobiography), where the signature (rather than the life story) of the filmmaker is apparent. The cinematic essay often blends documentary, fiction, and experimental filmmaking using tones and editing styles. The genre is not well-defined but might include propaganda works of early Soviet filmmakers like Dziga Vertov, and present-day filmmakers including Chris Marker, Michael Moore (Roger & Me, Bowling for Columbine and Fahrenheit 9/11), Errol Morris (The Thin Blue Line), Morgan Spurlock (Super Size Me) and Agnès Varda. Jean-Luc Godard describes his recent work as "film-essays". Two filmmakers whose work was the antecedent to the cinematic essay include Georges Méliès and Bertolt Brecht. Méliès made a short film (The Coronation of Edward VII (1902)) about the 1902 coronation of King Edward VII, which mixes actual footage with shots of a recreation of the event. Brecht was a playwright who experimented with film and incorporated film projections into some of his plays. Orson Welles made an essay film in his own pioneering style, released in 1974, called F for Fake, which dealt specifically with art forger Elmyr de Hory and with the themes of deception, "fakery," and authenticity in general. Such essay films are often published online on video hosting services. David Winks Gray's article "The essay film in action" states that the "essay film became an identifiable form of filmmaking in the 1950s and '60s". He states that since that time, essay films have tended to be "on the margins" of the filmmaking world. Essay films have a "peculiar searching, questioning tone ... between documentary and fiction" but without "fitting comfortably" into either genre. Gray notes that just like written essays, essay films "tend to marry the personal voice of a guiding narrator (often the director) with a wide swath of other voices". The University of Wisconsin Cinematheque website echoes some of Gray's comments; it calls a film essay an "intimate and allusive" genre that "catches filmmakers in a pensive mood, ruminating on the margins between fiction and documentary" in a manner that is "refreshingly inventive, playful, and idiosyncratic". Music In the realm of music, composer Samuel Barber wrote a set of "Essays for Orchestra," relying on the form and content of the music to guide the listener's ear, rather than any extra-musical plot or story. Photography A photographic essay strives to cover a topic with a linked series of photographs.
A cyclic redundancy check (CRC) is a non-secure hash function designed to detect accidental changes to raw data; it is not suitable for detecting maliciously introduced errors. It is characterized by specification of a generator polynomial, which is used as the divisor in a polynomial long division over a finite field, taking the input data as the dividend. The remainder becomes the result. A CRC has properties that make it well suited for detecting burst errors. CRCs are particularly easy to implement in hardware and are therefore commonly used in computer networks and storage devices such as hard disk drives. The parity bit can be seen as a special-case 1-bit CRC. Cryptographic hash function The output of a cryptographic hash function, also known as a message digest, can provide strong assurances about data integrity, whether changes of the data are accidental (e.g., due to transmission errors) or maliciously introduced. Any modification to the data will likely be detected through a mismatching hash value. Furthermore, given some hash value, it is typically infeasible to find some input data (other than the one given) that will yield the same hash value. If an attacker can change not only the message but also the hash value, then a keyed hash or message authentication code (MAC) can be used for additional security. Without knowing the key, it is not possible for the attacker to easily or conveniently calculate the correct keyed hash value for a modified message. Error correction code Any error-correcting code can be used for error detection. A code with minimum Hamming distance, d, can detect up to d − 1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of errors to be detected is desired. Codes with minimum Hamming distance d = 2 are degenerate cases of error-correcting codes, and can be used to detect single errors. The parity bit is an example of a single-error-detecting code. Applications Applications that require low latency (such as telephone conversations) cannot use automatic repeat request (ARQ); they must use forward error correction (FEC). By the time an ARQ system discovers an error and re-transmits it, the re-sent data will arrive too late to be usable. Applications where the transmitter immediately forgets the information as soon as it is sent (such as most television cameras) cannot use ARQ; they must use FEC because when an error occurs, the original data is no longer available. Applications that use ARQ must have a return channel; applications having no return channel cannot use ARQ. Applications that require extremely low error rates (such as digital money transfers) must use ARQ due to the possibility of uncorrectable errors with FEC. Reliability and inspection engineering also make use of the theory of error-correcting codes. Internet In a typical TCP/IP stack, error control is performed at multiple levels: Each Ethernet frame uses CRC-32 error detection. Frames with detected errors are discarded by the receiver hardware. The IPv4 header contains a checksum protecting the contents of the header. Packets with incorrect checksums are dropped within the network or at the receiver. The checksum was omitted from the IPv6 header in order to minimize processing costs in network routing and because current link layer technology is assumed to provide sufficient error detection (see also RFC 3819). UDP has an optional checksum covering the payload and addressing information in the UDP and IP headers. Packets with incorrect checksums are discarded by the network stack.
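To make the CRC division concrete, here is a minimal bit-by-bit Python sketch of the reflected CRC-32 used by Ethernet, checked against the standard library's zlib implementation (real implementations are table-driven for speed; the function name is ours):

    import zlib

    def crc32(data: bytes) -> int:
        """Bit-by-bit CRC-32 (IEEE 802.3): reflected polynomial 0xEDB88320."""
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                # Divide by the generator polynomial one bit at a time.
                crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    message = b"hello world"
    assert crc32(message) == zlib.crc32(message)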
The checksum is optional under IPv4, and required under IPv6. When omitted, it is assumed the data-link layer provides the desired level of error protection. TCP provides a checksum for protecting the payload and addressing information in the TCP and IP headers. Packets with incorrect checksums are discarded by the network stack, and eventually get retransmitted using ARQ, either explicitly (such as through three-way handshake) or implicitly due to a timeout. Deep-space telecommunications The development of error-correction codes was tightly coupled with the history of deep-space missions due to the extreme dilution of signal power over interplanetary distances, and the limited power availability aboard space probes. Whereas early missions sent their data uncoded, starting in 1968, digital error correction was implemented in the form of (sub-optimally decoded) convolutional codes and Reed–Muller codes. The Reed–Muller code was well suited to the noise the spacecraft was subject to (approximately matching a bell curve), and was implemented for the Mariner spacecraft and used on missions between 1969 and 1977. The Voyager 1 and Voyager 2 missions, which started in 1977, were designed to deliver color imaging and scientific information from Jupiter and Saturn. This resulted in increased coding requirements, and thus, the spacecraft were supported by (optimally Viterbi-decoded) convolutional codes that could be concatenated with an outer Golay (24,12,8) code. The Voyager 2 craft additionally supported an implementation of a Reed–Solomon code. The concatenated Reed–Solomon–Viterbi (RSV) code allowed for very powerful error correction, and enabled the spacecraft's extended journey to Uranus and Neptune. After ECC system upgrades in 1989, both crafts used V2 RSV coding. The Consultative Committee for Space Data Systems currently recommends usage of error correction codes with performance similar to the Voyager 2 RSV code as a minimum. Concatenated codes are increasingly falling out of favor with space missions, and are replaced by more powerful codes such as Turbo codes or LDPC codes. The different kinds of deep space and orbital missions that are conducted suggest that trying to find a one-size-fits-all error correction system will be an ongoing problem. For missions close to Earth, the nature of the noise in the communication channel is different from that which a spacecraft on an interplanetary mission experiences. Additionally, as a spacecraft increases its distance from Earth, the problem of correcting for noise becomes more difficult. Satellite broadcasting The demand for satellite transponder bandwidth continues to grow, fueled by the desire to deliver television (including new channels and high-definition television) and IP data. Transponder availability and bandwidth constraints have limited this growth. Transponder capacity is determined by the selected modulation scheme and the proportion of capacity consumed by FEC. Data storage Error detection and correction codes are often | with a certain probability, and dynamic models where errors occur primarily in bursts. Consequently, error-detecting and correcting codes can be generally distinguished between random-error-detecting/correcting and burst-error-detecting/correcting. Some codes can also be suitable for a mixture of random errors and burst errors. If the channel characteristics cannot be determined, or are highly variable, an error-detection scheme may be combined with a system for retransmissions of erroneous data. 
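Such a retransmission loop can be sketched in a few lines. This toy Python simulation simply resends each frame until it gets through; the loss probability and retry limit are arbitrary illustration values, not taken from any standard.

    import random

    def send_with_retransmission(frames, loss_prob=0.3, max_tries=5):
        """Toy model: resend each frame until the (simulated) channel delivers it."""
        delivered = []
        for seq, frame in enumerate(frames):
            for attempt in range(max_tries):
                if random.random() > loss_prob:   # frame and its acknowledgment got through
                    delivered.append(frame)
                    break                          # acknowledged; move on to the next frame
                # no acknowledgment before the timeout: retransmit
            else:
                raise RuntimeError(f"frame {seq} lost {max_tries} times in a row")
        return delivered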
This is known as automatic repeat request (ARQ), and is most notably used in the Internet. An alternate approach for error control is hybrid automatic repeat request (HARQ), which is a combination of ARQ and error-correction coding. Types of error correction There are three major types of error correction. Automatic repeat request (ARQ) Automatic Repeat reQuest (ARQ) is an error control method for data transmission that makes use of error-detection codes, acknowledgment and/or negative acknowledgment messages, and timeouts to achieve reliable data transmission. An acknowledgment is a message sent by the receiver to indicate that it has correctly received a data frame. Usually, when the transmitter does not receive the acknowledgment before the timeout occurs (i.e., within a reasonable amount of time after sending the data frame), it retransmits the frame until it is either correctly received or the error persists beyond a predetermined number of retransmissions. Three types of ARQ protocols are Stop-and-wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ. ARQ is appropriate if the communication channel has varying or unknown capacity, such as is the case on the Internet. However, ARQ requires the availability of a back channel, results in possibly increased latency due to retransmissions, and requires the maintenance of buffers and timers for retransmissions, which in the case of network congestion can put a strain on the server and overall network capacity. For example, ARQ is used on shortwave radio data links in the form of ARQ-E, or combined with multiplexing as ARQ-M. Forward error correction Forward error correction (FEC) is a process of adding redundant data such as an error-correcting code (ECC) to a message so that it can be recovered by a receiver even when a number of errors (up to the capability of the code being used) were introduced, either during the process of transmission, or on storage. Since the receiver does not have to ask the sender for retransmission of the data, a backchannel is not required in forward error correction, and it is therefore suitable for simplex communication such as broadcasting. Error-correcting codes are frequently used in lower-layer communication, as well as for reliable storage in media such as CDs, DVDs, hard disks, and RAM. Error-correcting codes are usually distinguished between convolutional codes and block codes: Convolutional codes are processed on a bit-by-bit basis. They are particularly suitable for implementation in hardware, and the Viterbi decoder allows optimal decoding. Block codes are processed on a block-by-block basis. Early examples of block codes are repetition codes, Hamming codes and multidimensional parity-check codes. They were followed by a number of efficient codes, Reed–Solomon codes being the most notable due to their current widespread use. Turbo codes and low-density parity-check codes (LDPC) are relatively new constructions that can provide almost optimal efficiency. Shannon's theorem is an important theorem in forward error correction, and describes the maximum information rate at which reliable communication is possible over a channel that has a certain error probability or signal-to-noise ratio (SNR). This strict upper limit is expressed in terms of the channel capacity. 
More specifically, the theorem says that there exist codes such that with increasing encoding length the probability of error on a discrete memoryless channel can be made arbitrarily small, provided that the code rate is smaller than the channel capacity. The code rate is defined as the fraction k/n of k source symbols and n encoded symbols. The actual maximum code rate allowed depends on the error-correcting code used, and may be lower. This is because Shannon's proof was only of existential nature, and did not show how to construct codes which are both optimal and have efficient encoding and decoding algorithms. Hybrid schemes Hybrid ARQ is a combination of ARQ and forward error correction. There are two basic approaches: Messages are always transmitted with FEC parity data (and error-detection redundancy). A receiver decodes a message using the parity information, and requests retransmission using ARQ only if the parity data was not sufficient for successful decoding (identified through a failed integrity check). Messages are transmitted without parity data (only with error-detection information). If a receiver detects an error, it requests FEC information from the transmitter using ARQ, and uses it to reconstruct the original message. The latter approach is particularly attractive on an erasure channel when using a rateless erasure code. Error detection schemes Error detection is most commonly realized using a suitable hash function (or specifically, a checksum, cyclic redundancy check or other algorithm). A hash function adds a fixed-length tag to a message, which enables receivers to verify the delivered message by recomputing the tag and comparing it with the one provided. There exists a vast variety of different hash function designs. However, some are of particularly widespread use because of either their simplicity or their suitability for detecting certain kinds of errors (e.g., the cyclic redundancy check's performance in detecting burst errors). Minimum distance coding A random-error-correcting code based on minimum distance coding can provide a strict guarantee on the number of detectable errors, but it may not protect against a preimage attack. Repetition codes A repetition code is a coding scheme that repeats the bits across a channel to achieve error-free communication. Given a stream of data to be transmitted, the data are divided into blocks of bits. Each block is transmitted some predetermined number of times. For example, to send the bit pattern "1011", the four-bit block can be repeated three times, thus producing "1011 1011 1011". If this twelve-bit pattern was received as "1010 1011 1011" – where the first block is unlike the other two – an error has occurred. A repetition code is very inefficient, and can be susceptible to problems if the error occurs in exactly the same place for each group (e.g., "1010 1010 1010" in the previous example would be detected as correct). The advantage of repetition codes is that they are extremely simple, and are in fact used in some transmissions of numbers stations. Parity bit A parity bit is a bit that is added to a group of source bits to ensure that the number of set bits (i.e., bits with value 1) in the outcome is even or odd. It is a very simple scheme that can be used to detect single or any other odd number (i.e., three, five, etc.) of errors in the output. An even number of flipped bits will make the parity bit appear correct even though the data is erroneous. 
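The two simple schemes just described are easy to make concrete. Below is a minimal Python sketch of a threefold repetition code (decoded here by per-position majority vote, which also corrects a single flipped bit) and of an even parity bit; the function names are ours.

    def repeat3(block):
        """Transmit the block three times, e.g. [1,0,1,1] -> 1011 1011 1011."""
        return block * 3

    def majority_decode(received, n):
        """Recover an n-bit block by majority vote across its three copies."""
        copies = [received[i * n:(i + 1) * n] for i in range(3)]
        return [1 if sum(bits) >= 2 else 0 for bits in zip(*copies)]

    def even_parity_bit(block):
        """Extra bit making the total number of 1s even."""
        return sum(block) % 2

    word = [1, 0, 1, 1]
    sent = repeat3(word)
    sent[2] ^= 1                           # a single transmission error
    assert majority_decode(sent, 4) == word
    assert even_parity_bit(word) == 1      # 1011 has three 1s, so the parity bit is 1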
Parity bits added to each "word" sent are called transverse redundancy checks, while those added at the end of a stream of "words" are called longitudinal redundancy checks. For example, if each of a series of m-bit "words" has a parity bit added, showing whether there were an odd or even number of ones in that word, any word with a single error in it will be detected. It will not be known where in the word the error is, however. If, in addition, after each stream of n words a parity sum is sent, each bit of which shows whether there were an odd or even number of ones at that bit-position sent in the most recent group, the exact position of the error can be determined and the error corrected. This method is only guaranteed to be effective, however, if there is no more than one error in every group of n words. With more error correction bits, more errors can be detected and in some cases corrected. There are also other bit-grouping techniques. Checksum A checksum of a message is a modular arithmetic sum of message code words of a fixed word length (e.g., byte values). The sum may be negated by means of a ones'-complement operation prior to transmission to detect unintentional all-zero messages. Checksum schemes include parity bits, check digits, and longitudinal redundancy checks. Some checksum schemes, such as the Damm algorithm, the Luhn algorithm, and the Verhoeff algorithm, are specifically designed to detect errors commonly introduced by humans in writing down or remembering identification numbers. Cyclic redundancy check A cyclic redundancy check (CRC) is a non-secure hash function designed to detect accidental changes to digital data in computer networks. It is not suitable for detecting maliciously introduced errors. It is characterized by specification of a generator polynomial, which is used as the divisor in a polynomial long division over a finite field, taking the input data as the dividend. The remainder becomes the result. A CRC has properties that make it well suited for detecting burst errors. CRCs are particularly easy to implement in hardware and are therefore commonly used in computer networks and storage devices such as hard disk drives. The parity bit can be seen as a special-case 1-bit CRC. Cryptographic hash function The output of a cryptographic hash function, also known as a message digest, |
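The CRC's polynomial long division over GF(2) reduces to shifts and XORs. A minimal Python sketch, using the 8-bit generator x⁸ + x² + x + 1 (one common choice, encoded with its leading term as 0x107) purely for illustration:

    def crc8(data: bytes, poly: int = 0x107) -> int:
        """Remainder of the input, read as a GF(2) polynomial, divided by poly."""
        rem = 0
        for byte in data:
            rem ^= byte                      # bring in the next 8 message bits
            for _ in range(8):
                if rem & 0x80:               # leading bit set: subtract (XOR) the divisor
                    rem = ((rem << 1) ^ poly) & 0xFF
                else:
                    rem = (rem << 1) & 0xFF
        return rem

    msg = b"123456789"
    check = crc8(msg)
    # Appending the remainder makes the whole codeword divisible by the generator.
    assert crc8(msg + bytes([check])) == 0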
power of X occurring in P. In particular, for two nonzero power series P and Q, f(P) ≤ f(Q) if and only if P divides Q. Any discrete valuation ring. Define f(x) to be the highest power of the maximal ideal M containing x. Equivalently, let g be a generator of M, and v be the unique integer such that g^v is an associate of x, then define f(x) = v. The previous example is a special case of this. A Dedekind domain with finitely many nonzero prime ideals P1, ..., Pr. Define f(x) = v1(x) + ... + vr(x), where vi is the discrete valuation corresponding to the ideal Pi. Examples of domains that are not Euclidean domains include: Every domain that is not a principal ideal domain, such as the ring of polynomials in at least two indeterminates over a field, or the ring of univariate polynomials with integer coefficients, or the number ring Z[√−5]. The ring of integers of Q(√−19), consisting of the numbers (a + b√−19)/2 where a and b are integers and both even or both odd. It is a principal ideal domain that is not Euclidean. The ring R[X, Y]/(X² + Y² + 1) is also a principal ideal domain that is not Euclidean. To see that it is not a Euclidean domain, it suffices to show that for every non-zero prime p, the map induced by the quotient map is not surjective. Properties Let R be a domain and f a Euclidean function on R. Then: R is a principal ideal domain (PID). In fact, if I is a nonzero ideal of R then any element a of I\{0} with minimal value (on that set) of f(a) is a generator of I. As a consequence R is also a unique factorization domain and a Noetherian ring. With respect to general principal ideal domains, the existence of factorizations (i.e., that R is an atomic domain) is particularly easy to prove in Euclidean domains: choosing a Euclidean function f satisfying (EF2), x cannot have any decomposition into more than f(x) nonunit factors, so starting with x and repeatedly decomposing reducible factors is bound to produce a factorization into irreducible elements. Any element of R at which f takes its globally minimal value is invertible in R. If an f satisfying (EF2) is chosen, then the converse also holds, and f takes its minimal value exactly at the invertible elements of R. If the Euclidean property is algorithmic, i.e., if there is a division algorithm that for given a and nonzero b produces a quotient q and remainder r with a = qb + r and either r = 0 or f(r) < f(b), then an extended Euclidean algorithm can be defined in terms of this division operation. If a Euclidean domain is not a field then it has an element a with the following property: any element x not divisible by a can be written as x = ay + u for some unit u and some element y. This follows by taking a to be a non-unit with f(a) as small as possible. This strange property can be used to show that some principal ideal domains are not Euclidean domains, as not all PIDs have this property. For example, for d = −19, −43, −67, −163, the ring of integers of Q(√d) is a PID which is not Euclidean, but the cases d = −1, −2, −3, −7, −11 are Euclidean. However, in many finite extensions of Q with trivial class | nonzero x. Z, the ring of integers. Define f(n) = |n|, the absolute value of n. Z[i], the ring of Gaussian integers. Define f(a + bi) = a² + b², the norm of the Gaussian integer a + bi. Z[ω] (where ω is a primitive (non-real) cube root of unity), the ring of Eisenstein integers. Define f(a + bω) = a² − ab + b², the norm of the Eisenstein integer a + bω. K[X], the ring of polynomials over a field K. For each nonzero polynomial P, define f(P) to be the degree of P. K[[X]], the ring of formal power series over the field K. For each nonzero power series P, define f(P) as the order of P, that is the degree of the smallest power of X occurring in P. In particular, for two nonzero power series P and Q, f(P) ≤ f(Q) if and only if P divides Q.
Any discrete valuation ring. Define f(x) to be the highest power of the maximal ideal M containing x. Equivalently, let g be a generator of M, and v be the unique integer such that g^v is an associate of x, then define f(x) = v. The previous example is a special case of this. A Dedekind domain with finitely many nonzero prime ideals P1, ..., Pr. Define f(x) = v1(x) + ... + vr(x), where vi is the discrete valuation corresponding to the ideal Pi. Examples of domains that are not Euclidean domains include: Every domain that is not a principal ideal domain, such as the ring of polynomials in at least two indeterminates over a field, or the ring of univariate polynomials with integer coefficients, or the number ring Z[√−5]. The ring of integers of Q(√−19), consisting of the numbers (a + b√−19)/2 where a and b are integers and both even or both odd. It is a principal ideal domain that is not Euclidean. The ring R[X, Y]/(X² + Y² + 1) is also a principal ideal domain that is not Euclidean. To see that it is not a Euclidean domain, it suffices to show that for every non-zero prime p, the map induced by the quotient map is not surjective. Properties Let R be a domain and f a Euclidean function on R. Then: R is a principal ideal domain (PID). In fact, if I is a nonzero ideal of R then any element a of I\{0} with minimal value (on that set) of f(a) is a generator of I. As a consequence R is also a unique factorization domain and a Noetherian ring. With respect to general principal ideal domains, the existence of factorizations (i.e., that R is an atomic domain) is particularly easy to prove in Euclidean domains: choosing a Euclidean function f satisfying (EF2), x cannot have any decomposition into more than f(x) nonunit factors, so starting with x and repeatedly decomposing reducible factors is bound to produce a factorization into irreducible elements. Any element of R at which f takes its globally minimal value is invertible in R. If an f satisfying (EF2) is chosen, then the converse also holds, and f takes its minimal value exactly at the invertible elements of R. If the Euclidean property is algorithmic, i.e., if there is a division algorithm that for given a and nonzero b produces a quotient q and remainder r with a = qb + r and either r = 0 or f(r) < f(b), then an extended Euclidean algorithm can be defined in terms of this division operation. If a Euclidean domain is not a field then it has an element a with |
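In algorithmic terms, the division property stated above is all the Euclidean algorithm needs. A minimal Python sketch of this abstraction (the function name and its integer instantiation via the built-in divmod are ours; other domains work if a suitable division function is supplied):

    def euclid(a, b, division=divmod):
        """GCD in an abstract Euclidean domain: division(a, b) must return (q, r)
        with a = q*b + r and r either zero or of smaller Euclidean value, f(r) < f(b)."""
        while b:
            _, r = division(a, b)
            a, b = b, r          # f strictly decreases, so the loop terminates
        return a

    print(euclid(1071, 462))     # -> 21, using ordinary integer division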
algorithm for fast integer multiplication can be used to speed this up, leading to quasilinear algorithms for the GCD. Number of steps The number of steps to calculate the GCD of two natural numbers, a and b, may be denoted by T(a, b). If g is the GCD of a and b, then a = mg and b = ng for two coprime numbers m and n. Then T(a, b) = T(m, n), as may be seen by dividing all the steps in the Euclidean algorithm by g. By the same argument, the number of steps remains the same if a and b are multiplied by a common factor w: T(a, b) = T(wa, wb). Therefore, the number of steps T may vary dramatically between neighboring pairs of numbers, such as T(a, b) and T(a, b + 1), depending on the size of the two GCDs. The recursive nature of the Euclidean algorithm gives another equation, T(a, b) = 1 + T(b, r0) = 2 + T(r0, r1) = … = N + T(rN−2, rN−1) = N + 1, where T(x, 0) = 0 by assumption. Worst-case If the Euclidean algorithm requires N steps for a pair of natural numbers a > b > 0, the smallest values of a and b for which this is true are the Fibonacci numbers FN+2 and FN+1, respectively. More precisely, if the Euclidean algorithm requires N steps for the pair a > b, then one has a ≥ FN+2 and b ≥ FN+1. This can be shown by induction. If N = 1, b divides a with no remainder; the smallest natural numbers for which this is true are b = 1 and a = 2, which are F2 and F3, respectively. Now assume that the result holds for all values of N up to M − 1. The first step of the M-step algorithm is a = q0b + r0, and the Euclidean algorithm requires M − 1 steps for the pair b > r0. By induction hypothesis, one has b ≥ FM+1 and r0 ≥ FM. Therefore, a = q0b + r0 ≥ b + r0 ≥ FM+1 + FM = FM+2, which is the desired inequality. This proof, published by Gabriel Lamé in 1844, represents the beginning of computational complexity theory, and also the first practical application of the Fibonacci numbers. This result suffices to show that the number of steps in Euclid's algorithm can never be more than five times the number of its digits (base 10). For if the algorithm requires N steps, then b is greater than or equal to FN+1 which in turn is greater than or equal to φN−1, where φ is the golden ratio. Since b ≥ φN−1, then N − 1 ≤ logφb. Since log10φ > 1/5, (N − 1)/5 < log10φ logφb = log10b. Thus, N ≤ 5 log10b. Thus, the Euclidean algorithm always needs less than O(h) divisions, where h is the number of digits in the smaller number b. Average The average number of steps taken by the Euclidean algorithm has been defined in three different ways. The first definition is the average time T(a) required to calculate the GCD of a given number a and a smaller natural number b chosen with equal probability from the integers 0 to a − 1, that is, T(a) = (1/a) Σ T(a, b), with the sum running over 0 ≤ b < a. However, since T(a, b) fluctuates dramatically with the GCD of the two numbers, the averaged function T(a) is likewise "noisy". To reduce this noise, a second average τ(a) is taken over all numbers coprime with a, τ(a) = (1/φ(a)) Σ T(a, b), the sum now running over the residues b coprime to a. There are φ(a) coprime integers less than a, where φ is Euler's totient function. This tau average grows smoothly with a, τ(a) ≈ (12/π²) ln 2 ln a + C, with the residual error being of order a^(−1/6+ε), where ε is infinitesimal. The constant C (Porter's constant) in this formula can be written in closed form in terms of γ, the Euler–Mascheroni constant, and ζ′, the derivative of the Riemann zeta function. The leading coefficient (12/π²) ln 2 was determined by two independent methods. Since the first average can be calculated from the tau average by summing over the divisors d of a, it can be approximated by a formula involving the Mangoldt function Λ(d).
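Returning to the worst case for a moment: the Fibonacci bound is easy to check empirically. A small Python sketch counting the division steps T(a, b) and confirming both the worst case and Lamé's five-digit bound for small N:

    def steps(a, b):
        """Number of division steps the Euclidean algorithm takes on (a, b)."""
        n = 0
        while b:
            a, b = b, a % b
            n += 1
        return n

    # Consecutive Fibonacci numbers F(N+2), F(N+1) are the smallest N-step inputs.
    fib = [1, 1]                                # fib[k] = F(k+1)
    while len(fib) < 30:
        fib.append(fib[-1] + fib[-2])
    for N in range(1, 25):
        a, b = fib[N + 1], fib[N]               # a = F(N+2), b = F(N+1)
        assert steps(a, b) == N
        assert steps(a, b) <= 5 * len(str(b))   # Lame: at most 5 * digits of b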
A third average Y(n) is defined as the mean number of steps required when both a and b are chosen randomly (with uniform distribution) from 1 to n. Substituting the approximate formula for T(a) into this equation yields an estimate for Y(n). Computational expense per step In each step k of the Euclidean algorithm, the quotient qk and remainder rk are computed for a given pair of integers rk−2 and rk−1: rk−2 = qk rk−1 + rk. The computational expense per step is associated chiefly with finding qk, since the remainder rk can be calculated quickly from rk−2, rk−1, and qk: rk = rk−2 − qk rk−1. The computational expense of dividing h-bit numbers scales as O(h(ℓ+1)), where ℓ is the length of the quotient. For comparison, Euclid's original subtraction-based algorithm can be much slower. A single integer division is equivalent to the quotient q number of subtractions. If the ratio of a and b is very large, the quotient is large and many subtractions will be required. On the other hand, it has been shown that the quotients are very likely to be small integers. The probability of a given quotient q is approximately log2|u/(u − 1)| where u = (q + 1)². For illustration, the probability of a quotient of 1, 2, 3, or 4 is roughly 41.5%, 17.0%, 9.3%, and 5.9%, respectively. Since the operation of subtraction is faster than division, particularly for large numbers, the subtraction-based Euclid's algorithm is competitive with the division-based version. This is exploited in the binary version of Euclid's algorithm. Combining the estimated number of steps with the estimated computational expense per step shows that Euclid's algorithm grows quadratically (h²) with the average number of digits h in the initial two numbers a and b. Let h0, h1, ..., hN−1 represent the number of digits in the successive remainders r0, r1, ..., rN−1. Since the number of steps N grows linearly with h, the running time is bounded by O(h²). Alternative methods Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity. For comparison, the efficiency of alternatives to Euclid's algorithm may be determined. One inefficient approach to finding the GCD of two natural numbers a and b is to calculate all their common divisors; the GCD is then the largest common divisor. The common divisors can be found by dividing both numbers by successive integers from 2 to the smaller number b. The number of steps of this approach grows linearly with b, or exponentially in the number of digits. Another inefficient approach is to find the prime factors of one or both numbers. As noted above, the GCD equals the product of the prime factors shared by the two numbers a and b. Present methods for prime factorization are also inefficient; many modern cryptography systems even rely on that inefficiency. The binary GCD algorithm is an efficient alternative that substitutes division with faster operations by exploiting the binary representation used by computers. However, this alternative also scales like O(h²). It is generally faster than the Euclidean algorithm on real computers, even though it scales in the same way. Additional efficiency can be gleaned by examining only the leading digits of the two numbers a and b. The binary algorithm can be extended to other bases (k-ary algorithms), with up to fivefold increases in speed. Lehmer's GCD algorithm uses the same general principle as the binary algorithm to speed up GCD computations in arbitrary bases.
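For comparison with the division-based version, here is a minimal Python sketch of the binary GCD (Stein's) algorithm mentioned above, which replaces division by shifts and subtractions:

    def binary_gcd(a, b):
        """Binary GCD: only shifts, comparisons and subtractions are used."""
        if a == 0:
            return b
        if b == 0:
            return a
        shift = 0
        while (a | b) & 1 == 0:      # factor out powers of two common to a and b
            a >>= 1
            b >>= 1
            shift += 1
        while a & 1 == 0:            # remaining twos in a are not common factors
            a >>= 1
        while b:
            while b & 1 == 0:
                b >>= 1
            if a > b:
                a, b = b, a          # keep a <= b
            b -= a                   # difference of two odd numbers is even
        return a << shift

    assert binary_gcd(1071, 462) == 21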
A recursive approach for very large integers (with more than 25,000 digits) leads to quasilinear integer GCD algorithms, such as those of Schönhage, and Stehlé and Zimmermann. These algorithms exploit the 2×2 matrix form of the Euclidean algorithm given above. These quasilinear methods generally scale as O(h (log h)² log log h). Generalizations Although the Euclidean algorithm is used to find the greatest common divisor of two natural numbers (positive integers), it may be generalized to the real numbers, and to other mathematical objects, such as polynomials, quadratic integers and Hurwitz quaternions. In the latter cases, the Euclidean algorithm is used to demonstrate the crucial property of unique factorization, i.e., that such numbers can be factored uniquely into irreducible elements, the counterparts of prime numbers. Unique factorization is essential to many proofs of number theory. Rational and real numbers Euclid's algorithm can be applied to real numbers, as described by Euclid in Book 10 of his Elements. The goal of the algorithm is to identify a real number g such that two given real numbers, a and b, are integer multiples of it: a = mg and b = ng, where m and n are integers. This identification is equivalent to finding an integer relation among the real numbers a and b; that is, it determines integers s and t such that sa + tb = 0. Euclid uses this algorithm to treat the question of incommensurable lengths. The real-number Euclidean algorithm differs from its integer counterpart in two respects. First, the remainders are real numbers, although the quotients are integers as before. Second, the algorithm is not guaranteed to end in a finite number of steps. If it does, the fraction a/b is a rational number, i.e., the ratio of two integers, and can be written as a finite continued fraction [q0; q1, q2, ..., qN]. If the algorithm does not stop, the fraction a/b is an irrational number and can be described by an infinite continued fraction [q0; q1, q2, ...]. Examples of infinite continued fractions are the golden ratio φ = [1; 1, 1, ...] and the square root of two, √2 = [1; 2, 2, ...]. The algorithm is unlikely to stop, since almost all ratios a/b of two real numbers are irrational. An infinite continued fraction may be truncated at a step k, [q0; q1, q2, ..., qk], to yield an approximation to a/b that improves as k is increased. The approximation is described by convergents mk/nk; the numerator and denominators are coprime and obey the recurrence relation mk = qk mk−1 + mk−2 and nk = qk nk−1 + nk−2, where m−1 = n−2 = 1 and m−2 = n−1 = 0 are the initial values of the recursion. The convergent mk/nk is the best rational number approximation to a/b with denominator nk: |a/b − mk/nk| < 1/nk². Polynomials Polynomials in a single variable x can be added, multiplied and factored into irreducible polynomials, which are the analogs of the prime numbers for integers. The greatest common divisor polynomial of two polynomials a(x) and b(x) is defined as the product of their shared irreducible polynomials, which can be identified using the Euclidean algorithm. The basic procedure is similar to that for integers. At each step k, a quotient polynomial qk(x) and a remainder polynomial rk(x) are identified to satisfy the recursive equation rk−2(x) = qk(x) rk−1(x) + rk(x), where r−2(x) = a(x) and r−1(x) = b(x). Each quotient polynomial is chosen such that each remainder is either zero or has a degree that is smaller than the degree of its predecessor: deg[rk(x)] < deg[rk−1(x)]. Since the degree is a nonnegative integer, and since it decreases with every step, the Euclidean algorithm concludes in a finite number of steps. The last nonzero remainder is the greatest common divisor of the original two polynomials, a(x) and b(x). For example, consider the following two quartic polynomials, which each factor into two quadratic polynomials: a(x) = x⁴ − 4x³ + 4x² − 3x + 14 = (x² − 5x + 7)(x² + x + 2) and b(x) = x⁴ + 8x³ + 12x² + 17x + 6 = (x² + 7x + 3)(x² + x + 2). Dividing a(x) by b(x) yields a remainder r0(x) = x³ + (2/3)x² + (5/3)x − (2/3).
In the next step, b(x) is divided by r0(x) yielding a remainder r1(x) = x² + x + 2. Finally, dividing r0(x) by r1(x) yields a zero remainder, indicating that r1(x) is the greatest common divisor polynomial of a(x) and b(x), consistent with their factorization. Many of the applications described above for integers carry over to polynomials. The Euclidean algorithm can be used to solve linear Diophantine equations and Chinese remainder problems for polynomials; continued fractions of polynomials can also be defined. The polynomial Euclidean algorithm has other applications, such as Sturm chains, a method for counting the zeros of a polynomial that lie inside a given real interval. This in turn has applications in several areas, such as the Routh–Hurwitz stability criterion in control theory. Finally, the coefficients of the polynomials need not be drawn from integers, real numbers or even the complex numbers. For example, the coefficients may be drawn from a general field, such as the finite fields described above. The corresponding conclusions about the Euclidean algorithm and its applications hold even for such polynomials. Gaussian integers The Gaussian integers are complex numbers of the form α = u + vi, where u and v are ordinary integers and i is the square root of negative one. By defining an analog of the Euclidean algorithm, Gaussian integers can be shown to be uniquely factorizable, by the argument above. This unique factorization is helpful in many applications, such as deriving all Pythagorean triples or proving Fermat's theorem on sums of two squares. In general, the Euclidean algorithm is convenient in such applications, but not essential; for example, the theorems can often be proven by other arguments. The Euclidean algorithm developed for two Gaussian integers α and β is nearly the same as that for ordinary integers, but differs in two respects. As before, the task at each step k is to identify a quotient qk and a remainder rk such that rk = rk−2 − qk rk−1, where rk−2 = α, where rk−1 = β, and where every remainder is strictly smaller than its predecessor. The first difference is that the quotients and remainders are themselves Gaussian integers, and thus are complex numbers. The quotients qk are generally found by rounding the real and imaginary parts of the exact ratio (such as the complex number α/β) to the nearest integers. The second difference lies in the necessity of defining how one complex remainder can be "smaller" than another. To do this, a norm function f(u + vi) = u² + v² is defined, which converts every Gaussian integer u + vi into an ordinary integer. After each step of the Euclidean algorithm, the norm of the remainder f(rk) is smaller than the norm of the preceding remainder, f(rk−1). Since the norm is a nonnegative integer and decreases with every step, the Euclidean algorithm for Gaussian integers ends in a finite number of steps. The final nonzero remainder is gcd(α, β), the Gaussian integer of largest norm that divides both α and β; it is unique up to multiplication by a unit, ±1 or ±i. Many of the other applications of the Euclidean algorithm carry over to Gaussian integers. For example, it can be used to solve linear Diophantine equations and Chinese remainder problems for Gaussian integers; continued fractions of Gaussian integers can also be defined. Euclidean domains A set of elements under two binary operations, denoted as addition and multiplication, is called a Euclidean domain if it forms a commutative ring and, roughly speaking, if a generalized Euclidean algorithm can be performed on them.
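Returning to the Gaussian integers for a moment, the rounding-based division just described fits in a few lines of Python. This sketch uses floating-point complex division, which is adequate for small illustrative inputs but would need exact integer arithmetic for large ones:

    def gaussian_gcd(alpha: complex, beta: complex) -> complex:
        """Euclid for Gaussian integers; the quotient is the exact ratio
        with real and imaginary parts rounded to the nearest integers."""
        while beta != 0:
            ratio = alpha / beta
            q = complex(round(ratio.real), round(ratio.imag))
            alpha, beta = beta, alpha - q * beta   # norm of the remainder strictly drops
        return alpha

    # gcd(2, 1+i) is 1+i up to the units ±1, ±i, since 2 = (1+i)(1−i).
    assert gaussian_gcd(2 + 0j, 1 + 1j) == 1 + 1j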
The two operations of such a ring need not be the addition and multiplication of ordinary arithmetic; rather, they can be more general, such as the operations of a mathematical group or monoid. Nevertheless, these general operations should respect many of the laws governing ordinary arithmetic, such as commutativity, associativity and distributivity. The generalized Euclidean algorithm requires a Euclidean function, i.e., a mapping f from the ring R into the set of nonnegative integers such that, for any two nonzero elements a and b in R, there exist q and r in R such that a = qb + r and f(r) < f(b). Examples of such mappings are the absolute value for integers, the degree for univariate polynomials, and the norm for Gaussian integers above. The basic principle is that each step of the algorithm reduces f inexorably; hence, if f can be reduced only a finite number of times, the algorithm must stop in a finite number of steps. This principle relies on the well-ordering property of the non-negative integers, which asserts that every non-empty set of non-negative integers has a smallest member. The fundamental theorem of arithmetic applies to any Euclidean domain: Any number from a Euclidean domain can be factored uniquely into irreducible elements. Any Euclidean domain is a unique factorization domain (UFD), although the converse is not true. The Euclidean domains and the UFD's are subclasses of the GCD domains, domains in which a greatest common divisor of two numbers always exists. In other words, a greatest common divisor may exist (for all pairs of elements in a domain), although it may not be possible to find it using a Euclidean algorithm. A Euclidean domain is always a principal ideal domain (PID), an integral domain in which every ideal is a principal ideal. Again, the converse is not true: not every PID is a Euclidean domain. The unique factorization of Euclidean domains is useful in many applications. For example, the unique factorization of the Gaussian integers is convenient in deriving formulae for all Pythagorean triples and in proving Fermat's theorem on sums of two squares. Unique factorization was also a key element in an attempted proof of Fermat's Last Theorem published in 1847 by Gabriel Lamé, the same mathematician who analyzed the efficiency of Euclid's algorithm, based on a suggestion of Joseph Liouville. Lamé's approach required the unique factorization of numbers of the form x + ωy, where x and y are integers, and ω is an nth root of 1, that is, ωⁿ = 1. Although this approach succeeds for some values of n (such as n = 3, the Eisenstein integers), in general such numbers do not factor uniquely. This failure of unique factorization in some cyclotomic fields led Ernst Kummer to the concept of ideal numbers and, later, Richard Dedekind to ideals. Unique factorization of quadratic integers The quadratic integer rings are helpful to illustrate Euclidean domains. Quadratic integers are generalizations of the Gaussian integers in which the imaginary unit i is replaced by a number ω. Thus, they have the form u + vω, where u and v are integers and ω has one of two forms, depending on a parameter D. If D does not equal a multiple of four plus one, then ω = √D. If, however, D does equal a multiple of four plus one, then ω = (1 + √D)/2. If the function f corresponds to a norm function, such as that used to order the Gaussian integers above, then the domain is known as norm-Euclidean. The norm-Euclidean rings of quadratic integers are exactly those where D is one of the values −11, −7, −3, −2, −1, 2, 3, 5, 6, 7, 11, 13, 17, 19, 21, 29, 33, 37, 41, 57, or 73.
The cases D = −1 and D = −3 yield the Gaussian integers and Eisenstein integers, respectively. If f is allowed to be any Euclidean function, then the list of possible values of D for which the domain is Euclidean is not yet known. The first example of a Euclidean domain that was not norm-Euclidean (with D = 69) was published in 1994. In 1973, Weinberger proved that a quadratic integer ring with D > 0 is Euclidean if, and only if, it is a principal ideal domain, provided that the generalized Riemann hypothesis holds. Noncommutative rings The Euclidean algorithm may be applied to some noncommutative rings such as the set of Hurwitz quaternions. Let α and β represent two elements from such a ring. They have a common right divisor δ if α = ξδ and β = ηδ for some choice of ξ and η in the ring. Similarly, they have a common left divisor if α = δξ and β = δη for some choice of ξ and η in the ring. Since multiplication is not commutative, there are two versions of the Euclidean algorithm, one for right divisors and one for left divisors. Choosing the right divisors, the first step in finding the gcd(α, β) by the Euclidean algorithm can be written ρ0 = α − ψ0β, where ψ0 represents the quotient and ρ0 the remainder. This equation shows that any common right divisor of α and β is likewise a common divisor of the remainder ρ0. The analogous equation for the left divisors would be ρ0 = α − βψ0. With either choice, the process is repeated as above until the greatest common right or left divisor is identified. As in the Euclidean domain, the "size" of the remainder ρ0 (formally, its norm) must be strictly smaller than that of β, and there must be only a finite number of possible sizes for ρ0, so that the algorithm is guaranteed to terminate. Most of the results for the GCD carry over to noncommutative numbers. For example, | into a grid of 12-by-12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5). The GCD of two numbers a and b is the product of the prime factors shared by the two numbers, where the same prime factor can be used multiple times, but only as long as the product of these factors divides both a and b. For example, since 1386 can be factored into 2 × 3 × 3 × 7 × 11, and 3213 can be factored into 3 × 3 × 3 × 7 × 17, the greatest common divisor of 1386 and 3213 equals 63 = 3 × 3 × 7, the product of their shared prime factors. If two numbers have no prime factors in common, their greatest common divisor is 1 (obtained here as an instance of the empty product), in other words they are coprime. A key advantage of the Euclidean algorithm is that it can find the GCD efficiently without having to compute the prime factors. Factorization of large integers is believed to be a computationally very difficult problem, and the security of many widely used cryptographic protocols is based upon its infeasibility. Another definition of the GCD is helpful in advanced mathematics, particularly ring theory. The greatest common divisor g of two nonzero numbers a and b is also their smallest positive integral linear combination, that is, the smallest positive number of the form ua + vb where u and v are integers. The set of all integral linear combinations of a and b is actually the same as the set of all multiples of g (mg, where m is an integer). In modern mathematical language, the ideal generated by a and b is the ideal generated by g alone (an ideal generated by a single element is called a principal ideal, and all ideals of the integers are principal ideals).
Some properties of the GCD are in fact easier to see with this description, for instance the fact that any common divisor of a and b also divides the GCD (it divides both terms of ua + vb). The equivalence of this GCD definition with the other definitions is described below. The GCD of three or more numbers equals the product of the prime factors common to all the numbers, but it can also be calculated by repeatedly taking the GCDs of pairs of numbers. For example, gcd(a, b, c) = gcd(a, gcd(b, c)) = gcd(gcd(a, b), c) = gcd(gcd(a, c), b). Thus, Euclid's algorithm, which computes the GCD of two integers, suffices to calculate the GCD of arbitrarily many integers. Description Procedure The Euclidean algorithm proceeds in a series of steps such that the output of each step is used as an input for the next one. Let k be an integer that counts the steps of the algorithm, starting with zero. Thus, the initial step corresponds to k = 0, the next step corresponds to k = 1, and so on. Each step begins with two nonnegative remainders rk−1 and rk−2. Since the algorithm ensures that the remainders decrease steadily with every step, rk−1 is less than its predecessor rk−2. The goal of the kth step is to find a quotient qk and remainder rk that satisfy the equation rk−2 = qk rk−1 + rk and that have 0 ≤ rk < rk−1. In other words, multiples of the smaller number rk−1 are subtracted from the larger number rk−2 until the remainder rk is smaller than rk−1. In the initial step (k = 0), the remainders r−2 and r−1 equal a and b, the numbers for which the GCD is sought. In the next step (k = 1), the remainders equal b and the remainder r0 of the initial step, and so on. Thus, the algorithm can be written as a sequence of equations: a = q0b + r0, b = q1r0 + r1, r0 = q2r1 + r2, r1 = q3r2 + r3, and so on. If a is smaller than b, the first step of the algorithm swaps the numbers. For example, if a < b, the initial quotient q0 equals zero, and the remainder r0 is a. Thus, rk is smaller than its predecessor rk−1 for all k ≥ 0. Since the remainders decrease with every step but can never be negative, a remainder rN must eventually equal zero, at which point the algorithm stops. The final nonzero remainder rN−1 is the greatest common divisor of a and b. The number N cannot be infinite because there are only a finite number of nonnegative integers between the initial remainder r0 and zero. Proof of validity The validity of the Euclidean algorithm can be proven by a two-step argument. In the first step, the final nonzero remainder rN−1 is shown to divide both a and b. Since it is a common divisor, it must be less than or equal to the greatest common divisor g. In the second step, it is shown that any common divisor of a and b, including g, must divide rN−1; therefore, g must be less than or equal to rN−1. These two conclusions are inconsistent unless rN−1 = g. To demonstrate that rN−1 divides both a and b (the first step), rN−1 divides its predecessor rN−2 since the final remainder rN is zero. rN−1 also divides its next predecessor rN−3 because it divides both terms on the right-hand side of the equation rN−3 = qN−1rN−2 + rN−1. Iterating the same argument, rN−1 divides all the preceding remainders, including a and b. None of the preceding remainders rN−2, rN−3, etc. divide a and b, since they leave a remainder. Since rN−1 is a common divisor of a and b, rN−1 ≤ g. In the second step, any natural number c that divides both a and b (in other words, any common divisor of a and b) divides the remainders rk. By definition, a and b can be written as multiples of c: a = mc and b = nc, where m and n are natural numbers. Therefore, c divides the initial remainder r0, since r0 = a − q0b = mc − q0nc = (m − q0n)c.
An analogous argument shows that c also divides the subsequent remainders r1, r2, etc. Therefore, the greatest common divisor g must divide rN−1, which implies that g ≤ rN−1. Since the first part of the argument showed the reverse (rN−1 ≤ g), it follows that g = rN−1. Thus, g is the greatest common divisor of all the succeeding pairs: g = gcd(a, b) = gcd(b, r0) = gcd(r0, r1) = … = gcd(rN−2, rN−1) = rN−1. Worked example For illustration, the Euclidean algorithm can be used to find the greatest common divisor of a = 1071 and b = 462. To begin, multiples of 462 are subtracted from 1071 until the remainder is less than 462. Two such multiples can be subtracted (q0 = 2), leaving a remainder of 147: 1071 = 2 × 462 + 147. Then multiples of 147 are subtracted from 462 until the remainder is less than 147. Three multiples can be subtracted (q1 = 3), leaving a remainder of 21: 462 = 3 × 147 + 21. Then multiples of 21 are subtracted from 147 until the remainder is less than 21. Seven multiples can be subtracted (q2 = 7), leaving no remainder: 147 = 7 × 21 + 0. Since the last remainder is zero, the algorithm ends with 21 as the greatest common divisor of 1071 and 462. This agrees with the gcd(1071, 462) found by prime factorization above. In tabular form, the steps are: step k = 0: 1071 = 2 × 462 + 147; step k = 1: 462 = 3 × 147 + 21; step k = 2: 147 = 7 × 21 + 0. Visualization The Euclidean algorithm can be visualized in terms of the tiling analogy given above for the greatest common divisor. Assume that we wish to cover an a-by-b rectangle with square tiles exactly, where a is the larger of the two numbers. We first attempt to tile the rectangle using b-by-b square tiles; however, this leaves an r0-by-b residual rectangle untiled, where r0 < b. We then attempt to tile the residual rectangle with r0-by-r0 square tiles. This leaves a second residual rectangle r1-by-r0, which we attempt to tile using r1-by-r1 square tiles, and so on. The sequence ends when there is no residual rectangle, i.e., when the square tiles cover the previous residual rectangle exactly. The length of the sides of the smallest square tile is the GCD of the dimensions of the original rectangle. For example, the smallest square tile in the adjacent figure is 21-by-21 (shown in red), and 21 is the GCD of 1071 and 462, the dimensions of the original rectangle (shown in green). Euclidean division At every step k, the Euclidean algorithm computes a quotient qk and remainder rk from two numbers rk−1 and rk−2: rk−2 = qk rk−1 + rk, where the rk is non-negative and is strictly less than the absolute value of rk−1. The theorem which underlies the definition of the Euclidean division ensures that such a quotient and remainder always exist and are unique. In Euclid's original version of the algorithm, the quotient and remainder are found by repeated subtraction; that is, rk−1 is subtracted from rk−2 repeatedly until the remainder rk is smaller than rk−1. After that rk and rk−1 are exchanged and the process is iterated. Euclidean division reduces all the steps between two exchanges into a single step, which is thus more efficient. Moreover, the quotients are not needed, thus one may replace Euclidean division by the modulo operation, which gives only the remainder. Thus the iteration of the Euclidean algorithm becomes simply rk = rk−2 mod rk−1. Implementations Implementations of the algorithm may be expressed in pseudocode. For example, the division-based version may be programmed as

function gcd(a, b)
    while b ≠ 0
        t := b
        b := a mod b
        a := t
    return a

At the beginning of the kth iteration, the variable b holds the latest remainder rk−1, whereas the variable a holds its predecessor, rk−2. The step b := a mod b is equivalent to the above recursion formula rk ≡ rk−2 mod rk−1.
The temporary variable t holds the value of rk−1 while the next remainder rk is being calculated. At the end of the loop iteration, the variable b holds the remainder rk, whereas the variable a holds its predecessor, rk−1. (If negative inputs are allowed, or if the mod function may return negative values, the last line must be changed into return max(a, −a).) In the subtraction-based version, which was Euclid's original version, the remainder calculation (b := a mod b) is replaced by repeated subtraction. Contrary to the division-based version, which works with arbitrary integers as input, the subtraction-based version supposes that the input consists of positive integers and stops when a = b:

function gcd(a, b)
    while a ≠ b
        if a > b
            a := a − b
        else
            b := b − a
    return a

The variables a and b alternate holding the previous remainders rk−1 and rk−2. Assume that a is larger than b at the beginning of an iteration; then a equals rk−2, since rk−2 > rk−1. During the loop iteration, a is reduced by multiples of the previous remainder b until a is smaller than b. Then a is the next remainder rk. Then b is reduced by multiples of a until it is again smaller than a, giving the next remainder rk+1, and so on. The recursive version is based on the equality of the GCDs of successive remainders and the stopping condition gcd(rN−1, 0) = rN−1.

function gcd(a, b)
    if b = 0
        return a
    else
        return gcd(b, a mod b)

(As above, if negative inputs are allowed, or if the mod function may return negative values, the instruction "return a" must be changed into "return max(a, −a)".) For illustration, the gcd(1071, 462) is calculated from the equivalent gcd(462, 1071 mod 462) = gcd(462, 147). The latter GCD is calculated from the gcd(147, 462 mod 147) = gcd(147, 21), which in turn is calculated from the gcd(21, 147 mod 21) = gcd(21, 0) = 21. Method of least absolute remainders In another version of Euclid's algorithm, the quotient at each step is increased by one if the resulting negative remainder is smaller in magnitude than the typical positive remainder. Previously, the equation rk−2 = qk rk−1 + rk assumed that rk−1 > rk > 0. However, an alternative negative remainder ek can be computed: rk−2 = (qk + 1) rk−1 + ek if rk−1 > 0, or rk−2 = (qk − 1) rk−1 + ek if rk−1 < 0. If rk is replaced by ek when |ek| < |rk|, then one gets a variant of Euclidean algorithm such that |rk| ≤ |rk−1| / 2 at each step. Leopold Kronecker has shown that this version requires the fewest steps of any version of Euclid's algorithm. More generally, it has been proven that, for every input numbers a and b, the number of steps is minimal if and only if qk is chosen in order that |rk+1 / rk| < 1/φ ≈ 0.618, where φ is the golden ratio. Historical development The Euclidean algorithm is one of the oldest algorithms in common use. It appears in Euclid's Elements (c. 300 BC), specifically in Book 7 (Propositions 1–2) and Book 10 (Propositions 2–3). In Book 7, the algorithm is formulated for integers, whereas in Book 10, it is formulated for lengths of line segments. (In modern usage, one would say it was formulated there for real numbers. But lengths, areas, and volumes, represented as real numbers in modern usage, are not measured in the same units and there is no natural unit of length, area, or volume; the concept of real numbers was unknown at that time.) The latter algorithm is geometrical. The GCD of two lengths a and b corresponds to the greatest length g that measures a and b evenly; in other words, the lengths a and b are both integer multiples of the length g. The algorithm was probably not discovered by Euclid, who compiled results from earlier mathematicians in his Elements. The mathematician and historian B. L.
van der Waerden suggests that Book VII derives from a textbook on number theory written by mathematicians in the school of Pythagoras. The algorithm was probably known by Eudoxus of Cnidus (about 375 BC). The algorithm may even pre-date Eudoxus, judging from the use of the technical term ἀνθυφαίρεσις (anthyphairesis, reciprocal subtraction) in works by Euclid and Aristotle. Centuries later, Euclid's algorithm was discovered independently both in India and in China, primarily to solve Diophantine equations that arose in astronomy and in making accurate calendars. In the late 5th century, the Indian mathematician and astronomer Aryabhata described the algorithm as the "pulverizer", perhaps because of its effectiveness in solving Diophantine equations. Although a special case of the Chinese remainder theorem had already been described in the Chinese book Sunzi Suanjing, the general solution was published by Qin Jiushao in his 1247 book Shushu Jiuzhang (數書九章 Mathematical Treatise in Nine Sections). The Euclidean algorithm was first described numerically and popularized in Europe in the second edition of Bachet's Problèmes plaisants et délectables (Pleasant and enjoyable problems, 1624). In Europe, it was likewise used to solve Diophantine equations and in developing continued fractions. The extended Euclidean algorithm was published by the English mathematician Nicholas Saunderson, who attributed it to Roger Cotes as a method for computing continued fractions efficiently. In the 19th century, the Euclidean algorithm led to the development of new number systems, such as Gaussian integers and Eisenstein integers. In 1815, Carl Gauss used the Euclidean algorithm to demonstrate unique factorization of Gaussian integers, although his work was first published in 1832. Gauss mentioned the algorithm in his Disquisitiones Arithmeticae (published 1801), but only as a method for continued fractions. Peter Gustav Lejeune Dirichlet seems to have been the first to describe the Euclidean algorithm as the basis for much of number theory. Lejeune Dirichlet noted that many results of number theory, such as unique factorization, would hold true for any other system of numbers to which the Euclidean algorithm could be applied. Lejeune Dirichlet's lectures on number theory were edited and extended by Richard Dedekind, who used Euclid's algorithm to study algebraic integers, a new general type of number. For example, Dedekind was the first to prove Fermat's two-square theorem using the unique factorization of Gaussian integers. Dedekind also defined the concept of a Euclidean domain, a number system in which a generalized version of the Euclidean algorithm can be defined (as described below). In the closing decades of the 19th century, the Euclidean algorithm gradually became eclipsed by Dedekind's more general theory of ideals. Other applications of Euclid's algorithm were developed in the 19th century. In 1829, Charles Sturm showed that the algorithm was useful in the Sturm chain method for counting the real roots of polynomials in any given interval. The Euclidean algorithm was the first integer relation algorithm, which is a method for finding integer relations between commensurate real numbers. Several novel integer relation algorithms have been developed, such as the algorithm of Helaman Ferguson and R.W. Forcade (1979) and the LLL algorithm. In 1969, Cole and Davie developed a two-player game based on the Euclidean algorithm, called The Game of Euclid, which has an optimal strategy.
The players begin with two piles of a and b stones. The players take turns removing m multiples of the smaller pile from the larger. Thus, if the two piles consist of x and y stones, where x is larger than y, the next player can reduce the larger pile from x stones to x − my stones, as long as the latter is a nonnegative integer. The winner is the first player to reduce one pile to zero stones. Mathematical applications Bézout's identity Bézout's identity states that the greatest common divisor g of two integers a and b can be represented as a linear sum of the original two numbers a and b. In other words, it is always possible to find integers s and t such that g = sa + tb. The integers s and t can be calculated from the quotients q0, q1, etc. by reversing the order of equations in Euclid's algorithm. Beginning with the next-to-last equation, g can be expressed in terms of the quotient qN−1 and the two preceding remainders, rN−2 and rN−3: g = rN−1 = rN−3 − qN−1rN−2. Those two remainders can be likewise expressed in terms of their quotients and preceding remainders, rN−2 = rN−4 − qN−2rN−3 and rN−3 = rN−5 − qN−3rN−4. Substituting these formulae for rN−2 and rN−3 into the first equation yields g as a linear sum of the remainders rN−4 and rN−5. The process of substituting remainders by formulae involving their predecessors can be continued until the original numbers a and b are reached: r1 = b − q1r0, r0 = a − q0b. After all the remainders r0, r1, etc. have been substituted, the final equation expresses g as a linear sum of a and b: g = sa + tb. Bézout's |
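The back-substitution just described is usually folded into a single forward pass, the extended Euclidean algorithm. A standard Python sketch:

    def extended_gcd(a, b):
        """Return (g, s, t) with g = gcd(a, b) = s*a + t*b (Bezout's identity)."""
        s_prev, s = 1, 0                   # running coefficients of a
        t_prev, t = 0, 1                   # running coefficients of b
        while b:
            q, r = divmod(a, b)
            a, b = b, r
            s_prev, s = s, s_prev - q * s  # carry the Bezout coefficients along
            t_prev, t = t, t_prev - q * t
        return a, s_prev, t_prev

    g, s, t = extended_gcd(1071, 462)
    assert (g, s * 1071 + t * 462) == (21, 21)   # 21 = (−3)·1071 + 7·462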
and for interacting parts of the Earth-system. It uses numerical weather prediction methods to prepare forecasts and their initial conditions, and it contributes to monitoring the relevant parts of the Earth system. Work and projects Forecasting Numerical weather prediction (NWP) requires input of meteorological data, collected by satellites and earth observation systems such as automatic and manned stations, aircraft, ships and weather balloons. Assimilation of this data is used to produce an initial state of a computer model of the atmosphere, from which an atmospheric model is used to forecast the weather. These forecasts are typically: medium-range forecasts, predicting the weather up to 15 days ahead monthly forecasts, predicting the weather on a weekly basis 30 days ahead seasonal forecasts up to 12 months ahead. Over the past three decades ECMWF's wide-ranging programme of research has played a major role in developing such assimilation and modelling systems. This improves the accuracy and reliability of weather forecasting by about a day per decade, so that a seven-day forecast now (2015) is as accurate as a three-day forecast was four decades ago (1975). Monthly and seasonal forecasts ECMWF's monthly and seasonal forecasts provide early predictions of events such as heat waves, cold spells and droughts, as well as their impacts on sectors such as agriculture, energy and health. Since ECMWF runs a wave model, there are also predictions of coastal waves and storm surges in European waters which can be used to provide warnings. Early warning of severe weather events Forecasts of severe weather events allow appropriate mitigating action to be taken and contingency plans to be put into place by the authorities and the public. The increased time gained by issuing accurate warnings can save lives, for instance by evacuating people from a storm surge area. Authorities and businesses can plan to maintain services around threats such as high winds, floods or snow. In October 2012 the ECMWF model suggested seven days in advance that Hurricane Sandy was likely to make landfall on the East Coast of the United States. It also predicted the intensity and track of the November 2012 nor'easter, which impacted the east coast a week after Sandy. ECMWF's Extreme Forecast Index (EFI) was developed as a tool to identify where the EPS (Ensemble Prediction System) forecast distribution differs substantially from that of the model climate. It contains information regarding variability of weather parameters, in location and time and can highlight an abnormality of a weather situation without having to define specific space- and time-dependent thresholds. Satellite data ECMWF, through its partnerships with EUMETSAT, ESA, the EU and others, exploits satellite data for operational numerical weather prediction and operational seasonal forecasting with coupled atmosphere–ocean–land models. The increasing amount of satellite data and the development of more sophisticated ways of extracting information from that data have made a major contribution to improving the accuracy and utility of NWP forecasts. ECMWF continuously endeavours to improve the use of satellite observations for NWP. Reanalysis ECMWF supports research on climate variability using an approach known as reanalysis. This involves feeding weather observations collected over decades into a NWP system to recreate past atmospheric, sea- and land-surface conditions over specific time periods to obtain a clearer picture of how the climate has changed. 
Reanalysis provides a four-dimensional picture of the atmosphere and effectively allows monitoring of the variability and change of global climate, thereby contributing also to the understanding and attribution of climate change. To date, and with support from Europe's National Meteorological Services and the European Commission, ECMWF has conducted several major reanalyses of the global atmosphere.
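At the heart of both forecasting and reanalysis is the data assimilation step mentioned above: blending a model's prior ("background") state with observations, weighted by their respective uncertainties. ECMWF's operational systems use far more sophisticated variational methods; the scalar sketch below is only a toy illustration of the principle, with names and numbers of my own choosing:

```python
def analysis_update(background: float, bg_var: float,
                    obs: float, obs_var: float) -> tuple[float, float]:
    """One optimal-interpolation update for a single scalar state variable.

    Weighs the model background against an observation in inverse
    proportion to their error variances (the scalar Kalman gain).
    """
    gain = bg_var / (bg_var + obs_var)            # 0 = trust model, 1 = trust observation
    analysis = background + gain * (obs - background)
    analysis_var = (1.0 - gain) * bg_var          # the analysis is more certain than either input
    return analysis, analysis_var

# Model first guess: 18.0 C with variance 4.0; a thermometer reads 20.5 C, variance 1.0
state, var = analysis_update(18.0, 4.0, 20.5, 1.0)
print(state, var)   # 20.0, 0.8 -- pulled strongly toward the more certain observation
```

Reanalysis amounts to repeating updates of this flavour, in vastly higher dimension, over decades of archived observations to reconstruct a consistent historical state of the atmosphere, sea and land surface.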
The use of open standards (such as MPEG-2, DAB, DVB, etc.) ensures interoperability between products from different vendors, as well as facilitating the exchange of programme material between EBU members and promoting "horizontal markets" for the benefit of all consumers. EBU members and the EBU Technical Department have long played an important role in the development of many systems used in radio and television broadcasting, such as: the AES/EBU digital audio interface, formally known as AES3; serial and parallel interfaces for digital video (ITU-R Recommendations 601 and 656); RDS, the radio data system used on FM broadcasting; and the EBU Loudness Recommendation R 128 and 'EBU Mode' meters (EBU Tech 3341). The EBU has also actively encouraged the development and implementation of: digital radio (DAB) through Eureka Project 147 and the WorldDAB Forum; Digital Video Broadcasting (DVB) through the DVB Project and DigiTAG; digital radio in the bands currently used for AM broadcasting through DRM (Digital Radio Mondiale); standardisation of PVR systems through the TV-Anytime Forum; and development of other content distribution networks on the internet through P2PTV (EBU Project Group D/P2P, from November 2007 to April 2008, ran a trial of selected member channels using Octoshape's distribution platform). The EBU is also part of the European P2P-Next project. Controversies Greek state broadcaster (2013) On 11 June 2013, the Greek government shut down the state broadcaster ERT at short notice, citing government spending concerns related to the Euro crisis. In response, the European Broadcasting Union set up a makeshift studio on the same day, near the former ERT offices in Athens, in order to continue providing EBU members with the news-gathering and broadcast relay services which had formerly been provided by ERT. The EBU put out a statement expressing its "profound dismay" at the shutdown, urged the Greek Prime Minister "to use all his powers to immediately reverse this decision" and offered the "advice, assistance and expertise necessary for ERT to be preserved". Starting on 4 May 2014, the New Hellenic Radio, Internet and Television (NERIT) broadcaster began nationwide transmissions, taking over ERT's vacant active membership slot in the EBU. On 11 June 2015, two years after ERT's closure, NERIT SA was renamed ERT SA, which reopened with comprehensive programming across all its radio stations (nineteen regional, two international and five pan-Hellenic stations) and its three TV channels, ERT1, ERT2 and ERT3. Belarusian state broadcaster (2021) The Belarusian Television and Radio Company (BTRC) has been accused of repressing its own employees, having fired more than 100 people since a wave of anti-Lukashenko protests in 2020 following alleged election fraud; many of them have also been jailed. Many objected to the participation of Belarus and the BTRC in the otherwise non-political Eurovision Song Contest in 2021, arguing that by endorsing Belarus the EBU would in effect be making a political statement that democracy and basic human rights such as freedom of speech are unimportant. On 28 May 2021, the EBU suspended the BTRC's membership, stating that it had been "particularly alarmed by the broadcast of interviews apparently obtained under duress". BTRC was given two weeks to respond before the suspension came into effect, but did not do so publicly. The broadcaster was expelled from the EBU on 1 July 2021 for a period of three years.
Russian broadcasters (2022) The three Russian members of the EBU, Channel One Russia, VGTRK, and Radio Dom Ostankino, are all controlled by the Russian government. In February 2022, the Russian government recognized the independence of the Donetsk and Luhansk People's Republics, disputed territories that are internationally recognized as part of Ukraine. Ukraine's public broadcaster Suspilne called on the EBU to terminate the membership of Channel One Russia and VGTRK, and to consider suspending Russia from the 2022 Eurovision Song Contest, citing the Russian government's use of both outlets to spread disinformation about the Russia–Ukraine conflict. Following the 2022 Russian invasion of Ukraine, several other public broadcasters joined Suspilne in calling for Russia's exclusion from the 2022 contest; Finland's Yle stated that it would not send a representative if Russia was allowed to participate. After initially stating that both Russia and Ukraine would be allowed to compete, the EBU announced that it would bar Russia from participating in the contest. The three Russian broadcasters stated that they would leave the EBU on 26 February, citing increased politicisation of the organisation. The EBU released a statement saying that it was aware of the reports, but that it had not received any formal confirmation. Members The member list comprises the following 66 broadcasting companies from 55 countries. Current members Suspended members Past members Associate members Any group or organisation from an International Telecommunication Union (ITU) member country that provides a radio or television service outside of the European Broadcasting Area is permitted to apply to the EBU for Associate Membership. The EBU also notes that associate members do not gain access to Eurovision events; the notable exceptions, all individually invited, are Australia, which has participated in the Eurovision Song Contest and the Junior Eurovision Song Contest since 2015, Canada, which took part in Eurovision Young Dancers between 1987 and 1989, and Kazakhstan, which has participated in Junior Eurovision since 2018. The list of Associate Members of the EBU comprises the following 31 broadcasting companies from 20 countries. Past associate members The list of past associate members of the EBU comprises the following 29 broadcasting companies from 18 countries and 1 autonomous territory. Approved participant members Any group or organisation from a country with International Telecommunication Union (ITU) membership that does not qualify for either the EBU's Active or Associate memberships, but still provides a broadcasting activity for the EBU, may be granted a distinct Approved Participant membership, which lasts approximately five years. An application for this status may be submitted to the EBU at any time, provided an annual fee is paid. The following seven EBU broadcast members had status as Approved Participants in May 2016. The following members previously had status as Approved Participants. Organised events The EBU, in co-operation with the respective host broadcaster, organises competitions and events in which its members can participate if they wish to do so. These include: Eurovision Song Contest The Eurovision Song Contest is an annual international song competition between EBU members, first held in Lugano, Switzerland, on 24 May 1956. Seven countries participated – each submitting two songs, for a total of 14.
This was the only contest in which more than one song per country was performed: since 1957 all contests have allowed one entry per country. The 1956 contest was won by the host nation, Switzerland. The most recent host city was Rotterdam, the Netherlands, where Italy won the competition. Let the Peoples Sing Let the Peoples Sing is a biennial choir competition, the participants of which are chosen from radio recordings entered by EBU radio members. The final, encompassing three categories and around ten choirs, is offered as a live broadcast to all EBU members. The overall winner is awarded the Silver Rose Bowl. Jeux sans frontières Jeux sans frontières (Games Without Borders) was a Europe-wide television game show. In its original conception, it was broadcast from 1965 to 1999 under the auspices of the EBU. The original series run ended in 1982 but was revived in 1988 with a different mix of nations and was hosted by smaller broadcasters. Eurovision Young Musicians Eurovision Young Musicians is a competition for European musicians aged between 12 and 21. It is organised by the EBU and is a member of EMCY. The first competition was held in Manchester, United Kingdom, on 11 May 1982. The televised competition is held every two years, with some countries holding national heats. Since its foundation in 1982, the Eurovision Young Musicians competition has become one of the most important music competitions on an international level. Eurovision Young Dancers The Eurovision Young Dancers is a biennial dance showcase broadcast on television throughout Europe. The first competition was held in Reggio Emilia, Italy, on 16 June 1985. It uses a format similar to the Eurovision Song Contest: every country that is a member of the EBU has had the opportunity to send a dance act to compete for the title of "Eurovision Young Dancer". The competition is for solo dancers; all contestants must be between 16 and 21 years old and not professionally engaged. Euroclassic Notturno Euroclassic Notturno is a six-hour sequence of classical music recordings assembled by BBC Radio from material supplied by members of the EBU and streamed back to EBU members.