limit, and Jupiter's tidal forces had acted to pull apart the comet. The comet was later observed as a series of fragments ranging up to in diameter. These fragments collided with Jupiter's southern hemisphere between July 16 and 22, 1994, at a speed of approximately (Jupiter's escape velocity) or . The prominent scars from the impacts were more easily visible than the Great Red Spot and persisted for many months.

Discovery

While conducting a program of observations designed to uncover near-Earth objects, the Shoemakers and Levy discovered Comet Shoemaker–Levy 9 on the night of March 24, 1993, in a photograph taken with the Schmidt telescope at the Palomar Observatory in California. The comet was thus a serendipitous discovery, but one that quickly overshadowed the results from their main observing program. Comet Shoemaker–Levy 9 was the ninth periodic comet (a comet whose orbital period is 200 years or less) discovered by the Shoemakers and Levy, hence its name. It was their eleventh comet discovery overall, including their discovery of two non-periodic comets, which use a different nomenclature. The discovery was announced in IAU Circular 5725 on March 26, 1993. The discovery image gave the first hint that Comet Shoemaker–Levy 9 was an unusual comet, as it appeared to show multiple nuclei in an elongated region about 50 arcseconds long and 10 arcseconds wide. Brian G. Marsden of the Central Bureau for Astronomical Telegrams noted that the comet lay only about 4 degrees from Jupiter as seen from Earth, and that although this could be a line-of-sight effect, its apparent motion in the sky suggested that the comet was physically close to the planet.

Comet with a Jovian orbit

Orbital studies of the new comet soon revealed that it was orbiting Jupiter rather than the Sun, unlike all other comets known at the time. Its orbit around Jupiter was very loosely bound, with a period of about 2 years and an apoapsis (the point in the orbit farthest from the planet) of .
Its orbit around the planet was highly eccentric (e = 0.9986). Tracing back the comet's orbital motion revealed that it had been orbiting Jupiter for some time. It is likely that it was captured from a solar orbit in the early 1970s, although the capture may have occurred as early as the mid-1960s. Several other observers found images of the comet in precovery images obtained before March 24, including Kin Endate from a photograph exposed on March 15, S. Otomo on March 17, and a team led by Eleanor Helin from images on March 19. An image of the comet on a Schmidt photographic plate taken on March 19 was identified on March 21 by M. Lindgren, in a project searching for comets near Jupiter. However, as his team were expecting comets to be inactive or at best exhibit a weak dust coma, and SL9 had a peculiar morphology, its true nature was not recognised until the official announcement 5 days later. No precovery images dating back to earlier than March 1993 have been found. Before the comet was captured by Jupiter, it was probably a short-period comet with an aphelion just inside Jupiter's orbit, and a perihelion interior to the asteroid belt. The volume of space within which an object can be said to orbit Jupiter is defined by Jupiter's Hill sphere. When the comet passed Jupiter in the late 1960s or early 1970s, it happened to be near its aphelion, and found itself slightly within Jupiter's Hill sphere. Jupiter's gravity nudged the comet towards it. Because the comet's motion with respect to Jupiter was very small, it fell almost straight toward Jupiter, which is why it ended up on a Jove-centric orbit of very high eccentricity—that is to say, the ellipse was nearly flattened out. 
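The capture mechanism described above turns on Jupiter's Hill sphere. As a rough check on the scale involved, this sketch computes Jupiter's Hill radius from the standard formula r_H = a(m/3M)^(1/3); the constants are textbook values, not figures taken from this article.

```python
# Estimate Jupiter's Hill-sphere radius: r_H = a * (m / (3 M))**(1/3),
# where a is Jupiter's semi-major axis, m its mass, M the Sun's mass.
A_JUPITER = 7.785e11   # semi-major axis, m
M_JUPITER = 1.898e27   # Jupiter mass, kg
M_SUN     = 1.989e30   # solar mass, kg
AU        = 1.496e11   # astronomical unit, m

r_hill = A_JUPITER * (M_JUPITER / (3 * M_SUN)) ** (1 / 3)
print(f"Hill radius ~ {r_hill:.2e} m ~ {r_hill / AU:.2f} AU")
# Roughly 0.35 AU: a comet that drifts past Jupiter inside this distance
# with only a small relative velocity can be temporarily captured,
# as SL9 evidently was.
```

The comet's reported apoapsis around Jupiter sits just inside this radius, which is consistent with a very loosely bound, temporarily captured orbit.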
The comet had apparently passed extremely close to Jupiter on July 7, 1992, just over above its cloud tops—a smaller distance than Jupiter's radius of , and well within the orbit of Jupiter's innermost moon Metis and the planet's Roche limit, inside which tidal forces are strong enough to disrupt a body held together only by gravity. Although the comet had approached Jupiter closely before, the July 7 encounter seemed to be by far the closest, and the fragmentation of the comet is thought to have occurred at this time. Each fragment of the comet was denoted by a letter of the alphabet, from "fragment A" through to "fragment W", a practice already established from previously observed fragmented comets. More exciting for planetary astronomers was that the best orbital calculations suggested that the comet would pass within of the center of Jupiter, a distance smaller than the planet's radius, meaning that there was an extremely high probability that SL9 would collide with Jupiter in July 1994. Studies suggested that the train of nuclei would plow into Jupiter's atmosphere over a period of about five days.

Predictions for the collision

The discovery that the comet was likely to collide with Jupiter caused great excitement within the astronomical community and beyond, as astronomers had never before seen two significant Solar System bodies collide. Intense studies of the comet were undertaken, and as its orbit became more accurately established, the possibility of a collision became a certainty. The collision would provide a unique opportunity for scientists to look inside Jupiter's atmosphere, as the collisions were expected to cause eruptions of material from the layers normally hidden beneath the clouds.
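The Roche-limit disruption described earlier can be put in rough numbers. This sketch uses the rigid-body approximation d ≈ R_P(2·ρ_P/ρ_c)^(1/3); the comet density is an assumed ~0.5 g/cm³, in line with post-impact estimates for SL9, not a value quoted in this passage.

```python
# Rough rigid-body Roche limit for Jupiter and a weakly bound comet:
# d = R_p * (2 * rho_planet / rho_comet) ** (1/3)
R_JUPITER   = 7.149e7    # equatorial radius, m
RHO_JUPITER = 1326.0     # mean density, kg/m^3
RHO_COMET   = 500.0      # assumed SL9 bulk density, kg/m^3

d_roche = R_JUPITER * (2 * RHO_JUPITER / RHO_COMET) ** (1 / 3)
print(f"Roche limit ~ {d_roche:.2e} m ~ {d_roche / R_JUPITER:.2f} Jupiter radii")
# About 1.7-1.8 Jupiter radii from the planet's center, so a pass
# skimming the cloud tops, as in July 1992, falls well inside it and a
# rubble-pile comet would be expected to break apart.
```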
Astronomers estimated that the visible fragments of SL9 ranged in size from a few hundred metres (around ) to across, suggesting that the original comet may have had a nucleus up to across—somewhat larger than Comet Hyakutake, which became very bright when it passed close to the Earth in 1996. One of the great debates in advance of the impact was whether the effects of the impact of such small bodies would be noticeable from Earth, apart from a flash as they disintegrated like giant meteors. The most optimistic prediction was that large, asymmetric ballistic fireballs would rise above the limb of Jupiter and into sunlight to be visible from Earth. Other suggested effects of the impacts were seismic waves travelling across the planet, an increase in stratospheric haze on the planet due to dust from the impacts, and an increase in the mass of the Jovian ring system. However, given that observing such a collision was completely unprecedented, astronomers were cautious with their predictions of what the event might reveal.

Impacts

Anticipation grew as the predicted date for the collisions approached, and astronomers trained terrestrial telescopes on Jupiter. Several space observatories did the same, including the Hubble Space Telescope, the ROSAT X-ray-observing satellite, the W. M. Keck Observatory, and the Galileo spacecraft, then on its way to a rendezvous with Jupiter scheduled for 1995. Although the impacts took place on the side of Jupiter hidden from Earth, Galileo, then at a distance of from the planet, was able to see the impacts as they occurred. Jupiter's rapid rotation brought the impact sites into view for terrestrial observers a few minutes after the collisions.
Two other space probes made observations at the time of the impact: the Ulysses spacecraft, primarily designed for solar observations, was pointed towards Jupiter from its location away, and the distant Voyager 2 probe, some from Jupiter and on its way out of the Solar System following its encounter with Neptune in 1989, was programmed to look for radio emission in the 1–390 kHz range and make observations with its ultraviolet spectrometer. Astronomer Ian Morison described the impacts as follows: The first impact occurred at 20:13 UTC on July 16, 1994, when fragment A of the [comet's] nucleus slammed into Jupiter's southern hemisphere at about . Instruments on Galileo detected a fireball that reached a peak temperature of about , compared to the typical Jovian cloud-top temperature of about . It then expanded and cooled rapidly to about . The plume from the fireball quickly reached a height of over and was observed by the HST. A few minutes after the impact fireball was detected, Galileo measured renewed heating, probably due to ejected material falling back onto the planet. Earth-based observers detected the fireball rising over the limb of the planet shortly after the initial impact. Despite published predictions, astronomers had not expected to see the fireballs from the impacts and did not have any idea how visible the other atmospheric effects of the impacts would be from Earth. Observers soon saw a huge dark spot after the first impact; the spot was visible from Earth. This and subsequent dark spots were thought to have been caused by debris from the impacts, and were markedly asymmetric, forming crescent shapes in front of the direction of impact. Over the next six days, 21 distinct impacts were observed, with the largest coming on July 18 at 07:33 UTC when fragment G struck Jupiter.
This impact created a giant dark spot over (almost one Earth diameter) across, and was estimated to have released an energy equivalent to 6,000,000 megatons of TNT (600 times the world's nuclear arsenal). Two impacts 12 hours apart on July 19 created impact marks of similar size to that caused by fragment G, and impacts continued until July 22, when fragment W struck the planet.

Observations and discoveries

Chemical studies

Observers hoped that the impacts would give them a first glimpse of Jupiter beneath the cloud tops, as lower material was exposed by the comet fragments punching through the upper atmosphere. Spectroscopic studies revealed absorption lines in the Jovian spectrum due to diatomic sulfur (S2) and carbon disulfide (CS2), the first detection of either in Jupiter, and only the second detection of S2 in any astronomical object. Other molecules detected included ammonia (NH3) and hydrogen sulfide (H2S). The amount of sulfur implied by the quantities of these compounds was much greater than the amount that would be expected in a small cometary nucleus, showing that material from within Jupiter was being revealed. Oxygen-bearing molecules such as sulfur dioxide were not detected, to the surprise of astronomers.
As well as these molecules, emission from heavy atoms such as iron, magnesium and silicon was detected, with abundances consistent with what would be found in a cometary nucleus. Although a substantial amount of water was detected spectroscopically, it was not as much as predicted, meaning that either the water layer thought to exist below the clouds was thinner than predicted, or that the cometary fragments did not penetrate deeply enough.

Waves

As predicted, the collisions generated enormous waves that swept across Jupiter at speeds of and were observed for over two hours after the largest impacts. The waves were thought to be travelling within a stable layer acting as a waveguide, and some scientists thought the stable layer must lie within the hypothesised tropospheric water cloud. However, other evidence seemed to indicate that the cometary fragments had not reached the water layer, and the waves were instead propagating within the stratosphere.

Other observations

Radio observations revealed a sharp increase in continuum emission at a wavelength of after the largest impacts, which peaked at 120% of the normal emission from the planet. This was thought to be due to synchrotron radiation, caused by the injection of relativistic electrons—electrons with velocities near the speed of light—into the Jovian magnetosphere by the impacts. About an hour after fragment K entered Jupiter, observers recorded auroral emission near the impact region, as well as at the antipode of the impact site with respect to Jupiter's strong magnetic field. The cause of these emissions was difficult to establish due to a lack of knowledge of Jupiter's internal magnetic field and of the geometry of the impact sites. One possible explanation was that upwardly accelerating shock waves from the impact accelerated charged particles enough to cause auroral emission, a phenomenon more typically associated with fast-moving solar wind particles striking a planetary atmosphere near a magnetic pole.
Some astronomers had suggested that the impacts might have a noticeable effect on the Io torus, a torus of high-energy particles connecting Jupiter with the highly volcanic moon Io. High-resolution spectroscopic studies found that variations in the ion density, rotational velocity, and temperatures at the time of impact and afterwards were within the normal limits. Voyager 2 failed to detect anything, with calculations showing that the fireballs were just below the craft's limit of detection; no abnormal levels of UV radiation or radio signals were registered after the blast. Ulysses also failed to detect any abnormal radio frequencies.

Post-impact analysis

Several models were devised to compute the density and size of Shoemaker–Levy 9. Its average density was calculated to be about ; the breakup of a much less dense comet would not have resembled the observed string of objects. The size of the parent comet was calculated to be about in diameter. These predictions were among the few that were actually confirmed by subsequent observation. One of the surprises of the impacts was the small amount of water revealed compared to prior predictions. Before the impact, models of Jupiter's atmosphere had indicated that the break-up of the largest fragments would occur at atmospheric pressures of anywhere from 30 kilopascals to a few tens of megapascals (from 0.3 to a few hundred bar), with some predictions that the comet would penetrate a layer of water and create a bluish shroud over that region of Jupiter. Astronomers did not observe large amounts of water following the collisions, and later impact studies found that fragmentation and destruction of the cometary fragments in a meteor air burst probably occurred at much higher altitudes than previously expected, with even the largest fragments being destroyed when the pressure reached , well above the expected depth of the water layer. The smaller fragments were probably destroyed before they even reached the cloud layer.
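The energy figures quoted earlier for the largest impact can be sanity-checked with a simple kinetic-energy estimate. The fragment diameter, bulk density, and impact speed below are assumed round numbers consistent with commonly cited values, not figures taken from this text, so the result is an order-of-magnitude check only.

```python
import math

# Back-of-the-envelope energy for a large SL9 fragment: E = 1/2 m v^2
DIAMETER = 3.0e3        # assumed fragment diameter, m
DENSITY  = 500.0        # assumed bulk density, kg/m^3
SPEED    = 60.0e3       # roughly Jupiter's escape velocity, m/s
MT_TNT   = 4.184e15     # joules per megaton of TNT

volume = (4.0 / 3.0) * math.pi * (DIAMETER / 2) ** 3
mass   = DENSITY * volume
energy = 0.5 * mass * SPEED ** 2
print(f"E ~ {energy:.1e} J ~ {energy / MT_TNT:.1e} Mt TNT")
# Of order 10^6 megatons of TNT: the same order of magnitude as the
# ~6,000,000 Mt estimated for the fragment G impact.
```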
Longer-term effects

The visible scars from the impacts could be seen on Jupiter for many months. They were extremely prominent, and observers described them as more easily visible than the Great Red Spot. A search of historical observations revealed that the spots were probably the most prominent transient features ever seen on the planet, and that although the Great Red Spot is notable for its striking color, no spots of the size and darkness of those caused by the SL9 impacts had ever been recorded before, or since. Spectroscopic observers found that ammonia and carbon disulfide persisted in the atmosphere for at least fourteen months after the collisions, with a considerable amount of ammonia being present in the stratosphere as opposed to its normal location in the troposphere. Counterintuitively, the atmospheric temperature dropped to normal levels much more quickly at the larger impact sites than at the smaller sites: at the larger impact sites, temperatures were elevated over a region wide, but dropped back to normal levels within a week of the impact. At smaller sites, temperatures higher than the surroundings persisted
Ceres Brewery was a beer and soft drink producing facility in Århus, Denmark, that operated from 1856 until 2008. Although the brewery was closed by its owner, Royal Unibrew, the Ceres brand continues, with the product brewed at other facilities. The area where the brewery stood is being redeveloped for residential and commercial use and has been named CeresByen (Ceres City).

History

Ceres Brewery was founded in 1856 by Malthe Conrad Lottrup, a grocer, with chemists A. S. Aagard and Knud Redelien, as the city's seventh brewery. It was named Ceres, and its opening was announced in the local newspaper, Århus Stiftstidende. Lottrup expanded the brewery after ten years, adding a grand new building as his private residence. He was succeeded by his son-in-law, Laurits Christian Meulengracht, who ran the brewery for almost thirty years, expanding it further before selling it to Østjyske Bryggerier, another brewing firm. The Ceres brewery was named an official purveyor to the Royal Danish Court in 1914.
meeting to support his language and encourage the use of algebraic expressions, Grace Hopper sent a memo to the short-range committee reiterating Sperry Rand's efforts to create a language based on English. In 1980, Grace Hopper commented that "COBOL 60 is 95% FLOW-MATIC" and that COMTRAN had had an "extremely small" influence. Furthermore, she said that she would claim that work was influenced by both FLOW-MATIC and COMTRAN only to "keep other people happy [so they] wouldn't try to knock us out". Features from COMTRAN incorporated into COBOL included formulas, the clause, an improved IF statement, which obviated the need for GO TOs, and a more robust file management system. The usefulness of the committee's work was the subject of great debate. While some members thought the language had too many compromises and was the result of design by committee, others felt it was better than the three languages examined. Some felt the language was too complex; others, too simple. Controversial features included those some considered useless or too advanced for data processing users. Such features included boolean expressions, formulas and table (indices). Another point of controversy was whether to make keywords context-sensitive and the effect that would have on readability. Although context-sensitive keywords were rejected, the approach was later used in PL/I and partially in COBOL from 2002. Little consideration was given to interactivity, interaction with operating systems (few existed at that time) and functions (thought of as purely mathematical and of no use in data processing). The specifications were presented to the Executive Committee on 4 September. They fell short of expectations: Joseph Wegstein noted that "it contains rough spots and requires some additions", and Bob Bemer later described them as a "hodgepodge". The subcommittee was given until December to improve it. At a mid-September meeting, the committee discussed the new language's name.
Suggestions included "BUSY" (Business System), "INFOSYL" (Information System Language) and "COCOSYL" (Common Computer Systems Language). It is unclear who coined the name "COBOL", although Bob Bemer later claimed it had been his suggestion. In October, the intermediate-range committee received copies of the FACT language specification created by Roy Nutt. Its features impressed the committee so much that they passed a resolution to base COBOL on it. This was a blow to the short-range committee, who had made good progress on the specification. Despite being technically superior, FACT had not been created with portability in mind or through manufacturer and user consensus. It also lacked a demonstrable implementation, allowing supporters of a FLOW-MATIC-based COBOL to overturn the resolution. RCA representative Howard Bromberg also blocked FACT, so that RCA's work on a COBOL implementation would not go to waste. It soon became apparent that the committee was too large for any further progress to be made quickly. A frustrated Howard Bromberg bought a $15 tombstone with "COBOL" engraved on it and sent it to Charles Phillips to demonstrate his displeasure. A sub-committee was formed to analyze existing languages and was made up of six individuals: William Selden and Gertrude Tierney of IBM, Howard Bromberg and Howard Discount of RCA, Vernon Reeves and Jean E. Sammet of Sylvania Electric Products. The sub-committee did most of the work creating the specification, leaving the short-range committee to review and modify their work before producing the finished specification. The specifications were approved by the Executive Committee on 8 January 1960, and sent to the government printing office, which printed these as COBOL 60. The language's stated objectives were to allow efficient, portable programs to be easily written, to allow users to move to new systems with minimal effort and cost, and to be suitable for inexperienced programmers. 
The CODASYL Executive Committee later created the COBOL Maintenance Committee to answer questions from users and vendors and to improve and expand the specifications. During 1960, the list of manufacturers planning to build COBOL compilers grew. By September, five more manufacturers had joined CODASYL (Bendix, Control Data Corporation, General Electric (GE), National Cash Register and Philco), and all represented manufacturers had announced COBOL compilers. GE and IBM planned to integrate COBOL into their own languages, GECOM and COMTRAN, respectively. In contrast, International Computers and Tabulators planned to replace their language, CODEL, with COBOL. Meanwhile, RCA and Sperry Rand worked on creating COBOL compilers. The first COBOL program ran on 17 August on an RCA 501. On 6 and 7 December, the same COBOL program (albeit with minor changes) ran on an RCA computer and a Remington-Rand Univac computer, demonstrating that compatibility could be achieved. The relative influence of these languages is acknowledged to this day in the recommended advisory printed in all COBOL reference manuals:

COBOL-61 to COBOL-65

Many logical flaws were found in COBOL 60, leading GE's Charles Katz to warn that it could not be interpreted unambiguously. A reluctant short-term committee enacted a total cleanup and, by March 1963, it was reported that COBOL's syntax was as definable as ALGOL's, although semantic ambiguities remained. Early COBOL compilers were primitive and slow. A 1962 US Navy evaluation found compilation speeds of 3–11 statements per minute. By mid-1964, they had increased to 11–1000 statements per minute. It was observed that increasing memory would drastically increase speed and that compilation costs varied wildly: costs per statement were between $0.23 and $18.91. In late 1962, IBM announced that COBOL would be their primary development language and that development of COMTRAN would cease.
The COBOL specification was revised three times in the five years after its publication. COBOL-60 was replaced in 1961 by COBOL-61. This was then replaced by the COBOL-61 Extended specifications in 1963, which introduced the sort and report writer facilities. The added facilities corrected flaws identified by Honeywell in late 1959 in a letter to the short-range committee. COBOL Edition 1965 brought further clarifications to the specifications and introduced facilities for handling mass storage files and tables.

COBOL-68

Efforts began to standardize COBOL to overcome incompatibilities between versions. In late 1962, both ISO and the United States of America Standards Institute (now ANSI) formed groups to create standards. ANSI produced USA Standard COBOL X3.23 in August 1968, which became the cornerstone for later versions. This version was known as American National Standard (ANS) COBOL and was adopted by ISO in 1972.

COBOL-74

By 1970, COBOL had become the most widely used programming language in the world. Independently of the ANSI committee, the CODASYL Programming Language Committee was working on improving the language. They described new versions in 1968, 1969, 1970 and 1973, including changes such as new inter-program communication, debugging and file merging facilities as well as improved string-handling and library inclusion features. Although CODASYL was independent of the ANSI committee, the CODASYL Journal of Development was used by ANSI to identify features that were popular enough to warrant implementing. The Programming Language Committee also liaised with ECMA and the Japanese COBOL Standard committee. The Programming Language Committee was not well-known, however. The vice-president, William Rinehuls, complained that two-thirds of the COBOL community did not know of the committee's existence. It was also poor, lacking the funds to make public documents, such as minutes of meetings and change proposals, freely available.
In 1974, ANSI published a revised version of (ANS) COBOL, containing new features such as file organizations, the statement and the segmentation module. Deleted features included the statement, the statement (which was replaced by ) and the implementer-defined random access module (which was superseded by the new sequential and relative I/O modules). These made up 44 changes, which rendered existing statements incompatible with the new standard. The report writer was slated to be removed from COBOL, but was reinstated before the standard was published. ISO later adopted the updated standard in 1978.

COBOL-85

In June 1978, work began on revising COBOL-74. The proposed standard (commonly called COBOL-80) differed significantly from the previous one, causing concerns about incompatibility and conversion costs. In January 1981, Joseph T. Brophy, Senior Vice-President of Travelers Insurance, threatened to sue the standard committee because it was not upwards compatible with COBOL-74. Mr. Brophy described previous conversions of their 40-million-line code base as "non-productive" and a "complete waste of our programmer resources". Later that year, the Data Processing Management Association (DPMA) said it was "strongly opposed" to the new standard, citing "prohibitive" conversion costs and enhancements that were "forced on the user". During the first public review period, the committee received 2,200 responses, of which 1,700 were negative form letters. Other responses were detailed analyses of the effect COBOL-80 would have on their systems; conversion costs were predicted to be at least 50 cents per line of code. Fewer than a dozen of the responses were in favor of the proposed standard. In 1979, ISO TC97-SC5 established the international COBOL Experts Group on the initiative of Wim Ebbinkhuijsen. The group consisted of COBOL experts from many countries, including the United States.
Its goal was to achieve mutual understanding and respect between ANSI and the rest of the world with regard to the need for new COBOL features. After three years, ISO changed the status of the group to a formal Working Group: WG 4 COBOL. The group took primary ownership and development of the COBOL standard, with ANSI making most of the proposals. In 1983, the DPMA withdrew its opposition to the standard, citing the responsiveness of the committee to public concerns. In the same year, a National Bureau of Standards study concluded that the proposed standard would present few problems. A year later, DEC released a COBOL-80 compiler for VAX/VMS and noted that conversion of COBOL-74 programs posed few problems. The new EVALUATE statement and inline PERFORM were particularly well received and improved productivity, thanks to simplified control flow and debugging. The second public review drew another 1,000 (mainly negative) responses, while the last drew just 25, by which time many concerns had been addressed. In 1985, the ISO Working Group 4 accepted the then-version of the ANSI proposed standard, made several changes and set it as the new ISO standard COBOL 85. It was published in late 1985. Sixty features were changed or deprecated and many were added, such as:

- Scope terminators (END-IF, END-PERFORM, END-READ, etc.)
- Nested subprograms
- CONTINUE, a no-operation statement
- EVALUATE, a switch statement
- INITIALIZE, a statement that can set groups of data to their default values
- Inline PERFORM loop bodies – previously, loop bodies had to be specified in a separate procedure
- Reference modification, which allows access to substrings
- I/O status codes

The new standard was adopted by all national standard bodies, including ANSI. Two amendments followed in 1989 and 1993, the first introducing intrinsic functions and the other providing corrections.
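As an illustration of the headline COBOL-85 additions (a hypothetical fragment, not an example from the source), a minimal program using EVALUATE, an inline PERFORM, and explicit scope terminators might look like:

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. GRADE-DEMO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  SCORE      PIC 9(3) VALUE 72.
       01  IDX        PIC 9(2).
       PROCEDURE DIVISION.
      *    EVALUATE: a multi-way branch with an explicit terminator
           EVALUATE TRUE
               WHEN SCORE >= 90 DISPLAY "GRADE A"
               WHEN SCORE >= 70 DISPLAY "GRADE B"
               WHEN OTHER       DISPLAY "GRADE C"
           END-EVALUATE
      *    Inline PERFORM: the loop body no longer needs its own paragraph
           PERFORM VARYING IDX FROM 1 BY 1 UNTIL IDX > 3
               DISPLAY "PASS " IDX
           END-PERFORM
           STOP RUN.
```

Before COBOL-85, the loop body would have lived in a separate paragraph reached by PERFORM, and nested conditionals had to be closed with sentence-ending periods rather than END-IF-style terminators.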
COBOL 2002 and object-oriented COBOL

In 1997, Gartner Group estimated that there were a total of 200 billion lines of COBOL in existence, which ran 80% of all business programs. In the early 1990s, work began on adding object-orientation in the next full revision of COBOL. Object-oriented features were taken from C++ and Smalltalk. The initial estimate was to have this revision completed by 1997, and an ISO Committee Draft (CD) was available by 1997. Some vendors (including Micro Focus, Fujitsu, and IBM) introduced object-oriented syntax based on drafts of the full revision. The finished standard was approved and published in late 2002. Fujitsu/GTSoftware, Micro Focus and RainCode introduced object-oriented COBOL compilers targeting the .NET Framework. There were many other new features, many of which had been in the CODASYL COBOL Journal of Development since 1978 and had missed the opportunity to be included in COBOL-85. These other features included:

- Free-form code
- User-defined functions
- Recursion
- Locale-based processing
- Support for extended character sets such as Unicode
- Floating-point and binary data types (until then, binary items were truncated based on their declaration's base-10 specification)
- Portable arithmetic results
- Bit and boolean data types
- Pointers and syntax for getting and freeing storage
- The for text-based user interfaces
- The facility
- Improved interoperability with other programming languages and framework environments such as .NET and Java

Three corrigenda were published for the standard: two in 2006 and one in 2009.

COBOL 2014

Between 2003 and 2009, three technical reports were produced describing object finalization, XML processing and collection classes for COBOL. COBOL 2002 suffered from poor support: no compilers completely supported the standard. Micro Focus found that it was due to a lack of user demand for the new features and due to the abolition of the NIST test suite, which had been used to test compiler conformance.
The standardization process was also found to be slow and under-resourced. COBOL 2014 includes the following changes:

Portable arithmetic results have been replaced by IEEE 754 data types
Major features have been made optional, such as the VALIDATE facility, the report writer and the screen-handling facility
Method overloading
Dynamic capacity tables (a feature dropped from the draft of COBOL 2002)

Legacy COBOL programs are used globally in governments and businesses and run on diverse operating systems such as z/OS, z/VSE, VME, Unix, OpenVMS and Windows. In 1997, the Gartner Group reported that 80% of the world's business ran on COBOL, with over 200 billion lines of code and 5 billion lines more being written annually. Near the end of the 20th century, the year 2000 problem (Y2K) was the focus of significant COBOL programming effort, sometimes by the same programmers who had designed the systems decades before. The particular level of effort required to correct COBOL code has been attributed to the large amount of business-oriented COBOL, as business applications use dates heavily, and to fixed-length data fields. After the clean-up effort put into these programs for Y2K, a 2003 survey found that many remained in use. The authors said that the survey data suggest "a gradual decline in the importance of Cobol in application development over the [following] 10 years unless ... integration with other languages and technologies can be adopted". In 2006 and 2012, Computerworld surveys (of 352 readers) found that over 60% of organizations used COBOL (more than C++ and Visual Basic .NET) and that for half of those, COBOL was used for the majority of their internal software. 36% of managers said they planned to migrate from COBOL, and 25% said they would like to if it were cheaper.
Testimony before the House of Representatives in 2016 indicated that COBOL is still in use by many federal agencies. Reuters reported in 2017 that 43% of banking systems still used COBOL, with over 220 billion lines of COBOL code in use. By 2019, the number of COBOL programmers was shrinking fast due to retirements, leading to an impending skills gap in business and government organizations which still use mainframe systems for high-volume transaction processing. Efforts to rewrite systems in newer languages have proven expensive and problematic, as has the outsourcing of code maintenance; thus, proposals to train more people in COBOL have been advocated. During the COVID-19 pandemic and the ensuing surge of unemployment, several US states reported a shortage of skilled COBOL programmers to support the legacy systems used for unemployment benefit management. Many of these systems had been in the process of conversion to more modern programming languages prior to the pandemic, but the process had to be put on hold. Similarly, the US Internal Revenue Service rushed to patch its COBOL-based Individual Master File in order to disburse the tens of millions of payments mandated by the Coronavirus Aid, Relief, and Economic Security Act.

Features

Syntax

COBOL has an English-like syntax, which is used to describe nearly everything in a program. For example, a condition can be expressed as x IS GREATER THAN y or more concisely as x GREATER y or x > y. More complex conditions can be "abbreviated" by removing repeated conditions and variables. For example, a > b AND a > c OR a = d can be shortened to a > b AND c OR = d. To support this English-like syntax, COBOL has over 300 keywords. Some of the keywords are simple alternative or pluralized spellings of the same word, which provides for more English-like statements and clauses; e.g., the IN and OF keywords can be used interchangeably, as can IS and ARE, and VALUE and VALUES.
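As a sketch of this English-like style, the same comparison can be written in several equivalent forms; the data names here are invented for illustration:

```cobol
       IF account-balance IS GREATER THAN credit-limit
           DISPLAY "Over limit"
       END-IF
      *> Equivalent, more concise forms of the same condition:
       IF account-balance GREATER credit-limit DISPLAY "Over limit" END-IF
       IF account-balance > credit-limit DISPLAY "Over limit" END-IF
```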
Words include reserved words and user-defined identifiers. They are up to 31 characters long and may include letters, digits, hyphens and underscores. Literals include numerals (e.g. 12) and strings (e.g. "Hello!"). Separators include the space character and commas and semi-colons followed by a space. A COBOL program is split into four divisions: the identification division, the environment division, the data division and the procedure division. The identification division specifies the name and type of the source element and is where classes and interfaces are specified. The environment division specifies any program features that depend on the system running it, such as files and character sets. The data division is used to declare variables and parameters. The procedure division contains the program's statements. Each division is sub-divided into sections, which are made up of paragraphs.

Metalanguage

COBOL's syntax is usually described with a unique metalanguage using braces, brackets, bars and underlining. The metalanguage was developed for the original COBOL specifications. Although Backus–Naur form did exist at the time, the committee had not heard of it. As an example, consider the following simplified description of an ADD statement:

ADD {identifier-1 | literal-1} ... TO {identifier-2 [ROUNDED]} ...
    [[ON] SIZE ERROR imperative-statement-1]
    [NOT [ON] SIZE ERROR imperative-statement-2]
[END-ADD]

This description permits the following variants:

ADD 1 TO x
ADD 1, a, b TO x ROUNDED, y, z ROUNDED
ADD a, b TO c
    ON SIZE ERROR
        DISPLAY "Error"
END-ADD
ADD a TO b
    NOT SIZE ERROR
        DISPLAY "No error"
    ON SIZE ERROR
        DISPLAY "Error"

Code format

COBOL can be written in two formats: fixed (the default) or free. In fixed-format code, statements must be aligned to fit in certain areas (a hold-over from using punched cards). Until COBOL 2002, these were:

the sequence number area (columns 1–6), which is ignored by the compiler
the indicator area (column 7), used to mark comment, continuation and debugging lines
Area A (columns 8–11), where division, section and paragraph headers and level indicators must begin
Area B (columns 12–72), which holds the remaining code

In COBOL 2002, Areas A and B were merged to form the program-text area, which now ends at an implementor-defined column. COBOL 2002 also introduced free-format code. Free-format code can be placed in any column of the file, as in newer programming languages.
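A minimal fixed-format program illustrates these areas: columns 1–6 hold sequence numbers, column 7 the indicator (here an asterisk marks a comment line), and division headers begin in Area A:

```cobol
000100 IDENTIFICATION DIVISION.
000200 PROGRAM-ID. HELLO.
000300* This comment line is marked by an asterisk in column 7.
000400 PROCEDURE DIVISION.
000500     DISPLAY "Hello, world"
000600     STOP RUN.
```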
Comments are specified using *>, which can be placed anywhere and can also be used in fixed-format source code. Continuation lines are not present, and the >>PAGE directive replaces the / indicator.

Identification division

The identification division identifies the following code entity and contains the definition of a class or interface.

Object-oriented programming

Classes and interfaces have been in COBOL since 2002. Classes have factory objects, containing class methods and variables, and instance objects, containing instance methods and variables. Inheritance and interfaces provide polymorphism. Support for generic programming is provided through parameterized classes, which can be instantiated to use any class or interface. Objects are stored as references, which may be restricted to a certain type. There are two ways of calling a method: the INVOKE statement, which acts similarly to CALL, or through inline method invocation, which is analogous to using functions.

*> These are equivalent.
INVOKE my-class "foo" RETURNING var
MOVE my-class::"foo" TO var *> Inline method invocation

COBOL does not provide a way to hide methods. Class data can be hidden, however, by declaring it without a PROPERTY clause, which leaves the user with no way to access it. Method overloading was added in COBOL 2014.

Environment division

The environment division contains the configuration section and the input-output section. The configuration section is used to specify variable features such as currency signs, locales and character sets. The input-output section contains file-related information.

Files

COBOL supports three file formats, or organizations: sequential, indexed and relative. In sequential files, records are contiguous and must be traversed sequentially, similarly to a linked list. Indexed files have one or more indexes which allow records to be randomly accessed and on which records can be sorted. Each record must have a unique record key, but other, alternate, record keys need not be unique.
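In the file-control paragraph of the environment division, the three standard organizations might be declared as follows. This is a sketch; the file names and keys are invented for illustration:

```cobol
       SELECT log-file     ASSIGN TO "log.dat"
           ORGANIZATION IS SEQUENTIAL.
       SELECT account-file ASSIGN TO "accounts.dat"
           ORGANIZATION IS INDEXED
           ACCESS MODE IS DYNAMIC
           RECORD KEY IS account-id
           ALTERNATE RECORD KEY IS account-name WITH DUPLICATES.
       SELECT slot-file    ASSIGN TO "slots.dat"
           ORGANIZATION IS RELATIVE
           RELATIVE KEY IS slot-number.
```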
Implementations of indexed files vary between vendors, although common implementations, such as C-ISAM and VSAM, are based on IBM's ISAM. Relative files, like indexed files, have a unique record key, but they do not have alternate keys. A relative record's key is its ordinal position; for example, the 10th record has a key of 10. This means that creating a record with a key of 5 may require the creation of (empty) preceding records. Relative files also allow for both sequential and random access. A common non-standard extension is the line sequential organization, used to process text files. Records in such a file are terminated by a newline and may be of varying length.

Data division

The data division is split into six sections which declare different items: the file section, for file records; the working-storage section, for static variables; the local-storage section, for automatic variables; the linkage section, for parameters and the return value; the report section; and the screen section, for text-based user interfaces.

Aggregated data

Data items in COBOL are declared hierarchically through the use of level-numbers which indicate if a data item is part of another. An item with a higher level-number is subordinate to an item with a lower one. Top-level data items, with a level-number of 1, are called records. Items that have subordinate aggregate data are called group items; those that do not are called elementary items. Level-numbers used to describe standard data items are between 1 and 49.

01  some-record.                *> Aggregate group record item
    05  num       PIC 9(10).    *> Elementary item
    05  the-date.               *> Aggregate (sub)group record item
        10  the-year  PIC 9(4). *> Elementary item
        10  the-month PIC 99.   *> Elementary item
        10  the-day   PIC 99.   *> Elementary item

In the above example, elementary item num and group item the-date are subordinate to the record some-record, while elementary items the-year, the-month, and the-day are part of the group item the-date. Subordinate items can be disambiguated with the IN (or OF) keyword.
For example, consider the example code above along with the following example:

01  sale-date.
    05  the-year  PIC 9(4).
    05  the-month PIC 99.
    05  the-day   PIC 99.

The names the-year, the-month, and the-day are ambiguous by themselves, since more than one data item is defined with those names. To specify a particular data item, for instance one of the items contained within the sale-date group, the programmer would use the-year IN sale-date (or the equivalent the-year OF sale-date). (This syntax is similar to the "dot notation" supported by most contemporary languages.)

Other data levels

A level-number of 66 is used to declare a re-grouping of previously defined items, irrespective of how those items are structured. This data level, also referred to by the associated RENAMES clause, is rarely used and, circa 1988, was usually found in old programs. Its ability to ignore the hierarchical and logical structure of data meant its use was not recommended, and many installations forbade its use.

01  customer-record.
    05  cust-key PIC X(10).
    05  cust-name.
        10  cust-first-name PIC X(30).
        10  cust-last-name  PIC X(30).
    05  cust-dob     PIC 9(8).
    05  cust-balance PIC 9(7)V99.
66  cust-personal-details RENAMES cust-name THRU cust-dob.
66  cust-all-details      RENAMES cust-name THRU cust-balance.

A 77 level-number indicates the item is stand-alone, and in such situations is equivalent to the level-number 01. For example, the following code declares two 77-level data items, property-name and sales-region, which are non-group data items that are independent of (not subordinate to) any other data items:

77  property-name PIC X(80).
77  sales-region  PIC 9(5).

An 88 level-number declares a condition-name (a so-called 88-level) which is true when its parent data item contains one of the values specified in its VALUE clause. For example, the following code defines two 88-level condition-name items that are true or false depending on the current character data value of the wage-type data item.
When the wage-type data item contains a value of "H", the wage-is-hourly condition-name is true, whereas when it contains a value of "S" or "Y", the wage-is-yearly condition-name is true. If the data item contains some other value, both of the condition-names are false.

01  wage-type PIC X.
    88  wage-is-hourly VALUE "H".
    88  wage-is-yearly VALUE "S", "Y".

Data types

Standard COBOL provides the following data types: alphabetic (PIC A), alphanumeric (PIC X), boolean, index, national (PIC N), numeric (PIC 9), object reference and pointer. Type safety is variable in COBOL. Numeric data is converted between different representations and sizes silently, and alphanumeric data can be placed in any data item that can be stored as a string, including numeric and group data. In contrast, object references and pointers may only be assigned from items of the same type, and their values may be restricted to a certain type.

PICTURE clause

A PICTURE (or PIC) clause is a string of characters, each of which represents a portion of the data item and what it may contain. Some picture characters specify the type of the item and how many characters or digits it occupies in memory. For example, a 9 indicates a decimal digit, and an S indicates that the item is signed. Other picture characters (called insertion and editing characters) specify how an item should be formatted. For example, a series of + characters define character positions as well as how a leading sign character is to be positioned within the final character data; the rightmost non-numeric character will contain the item's sign, while other character positions corresponding to a + to the left of this position will contain a space. Repeated characters can be specified more concisely by specifying a number in parentheses after a picture character; for example, 9(7) is equivalent to 9999999. Picture specifications containing only digit (9) and sign (S) characters define purely numeric data items, while picture specifications containing alphabetic (A) or alphanumeric (X) characters define alphanumeric data items. The presence of other formatting characters defines edited numeric or edited alphanumeric data items.

USAGE clause

The USAGE clause declares the format in which data is stored.
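A few hypothetical declarations illustrate these picture characters; the data names are invented for the example:

```cobol
       01  quantity  PIC 9(4).      *> four decimal digits
       01  balance   PIC S9(7)V99.  *> signed, with an implied decimal point
       01  customer  PIC X(30).     *> 30 alphanumeric characters
       01  amount-ed PIC ++++9.99.  *> edited item with a floating leading sign
```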
Depending on the data type, it can either complement or be used instead of a PICTURE clause. While it can be used to declare pointers and object references, it is mostly geared towards specifying numeric types. These numeric formats are:

Binary, where a minimum size is either specified by the PICTURE clause or by a USAGE clause such as BINARY-LONG
USAGE COMPUTATIONAL, where data may be stored in whatever format the implementation provides; often equivalent to USAGE BINARY
USAGE DISPLAY, the default format, where data is stored as a string
Floating-point, in either an implementation-dependent format or according to IEEE 754
USAGE NATIONAL, where data is stored as a string using an extended character set
USAGE PACKED-DECIMAL, where data is stored in the smallest possible decimal format (typically packed binary-coded decimal)

Report writer

The report writer is a declarative facility for creating reports. The programmer need only specify the report layout and the data required to produce it, freeing them from having to write code to handle things like page breaks, data formatting, and headings and footings. Reports are associated with report files, which are files that may be written to only through report writer statements.

FD  report-out REPORT sales-report.

Each report is defined in the report section of the data division. A report is split into report groups which define the report's headings, footings and details. Reports work around hierarchical control breaks. Control breaks occur when a key variable changes its value; for example, when creating a report detailing customers' orders, a control break could occur when the program reaches a different customer's orders. Here is an example report description for a report which gives a salesperson's sales and which warns of any invalid records:

RD  sales-report
    PAGE LIMITS 60 LINES
    FIRST DETAIL 3
    CONTROLS seller-name.

01  TYPE PAGE HEADING.
    03  COL 1  VALUE "Sales Report".
    03  COL 74 VALUE "Page".
    03  COL 79 PIC Z9 SOURCE PAGE-COUNTER.

01  sales-on-day TYPE DETAIL, LINE + 1.
    03  COL 3  VALUE "Sales on".
    03  COL 12 PIC 99/99/9999 SOURCE sales-date.
    03  COL 21 VALUE "were".
    03  COL 26 PIC $$$$9.99 SOURCE sales-amount.

01  invalid-sales TYPE DETAIL, LINE + 1.
    03  COL 3  VALUE "INVALID RECORD:".
    03  COL 19 PIC X(34) SOURCE sales-record.

01  TYPE CONTROL HEADING seller-name, LINE + 2.
    03  COL 1 VALUE "Seller:".
    03  COL 9 PIC X(30) SOURCE seller-name.

The above report description describes the following layout:

Sales Report                                                             Page  1

Seller: Howard Bromberg
  Sales on 10/12/2008 were $1000.00
  Sales on 12/12/2008 were    $0.00
  Sales on 13/12/2008 were   $31.47
  INVALID RECORD: Howard Bromberg XXXXYY

Seller: Howard Discount
...
Sales Report                                                             Page 12

  Sales on 08/05/2014 were  $543.98
  INVALID RECORD: William Selden 12O52014FOOFOO
  Sales on 30/05/2014 were    $0.00

Four statements control the report writer: INITIATE, which prepares the report writer for printing; GENERATE, which prints a report group; SUPPRESS, which suppresses the printing of a report group; and TERMINATE, which terminates report processing. For the above sales report example, the procedure division might look like this:

OPEN INPUT sales, OUTPUT report-out
INITIATE sales-report
PERFORM UNTIL 1 <> 1
    READ sales
        AT END
            EXIT PERFORM
    END-READ
    VALIDATE sales-record
    IF valid-record
        GENERATE sales-on-day
    ELSE
        GENERATE invalid-sales
    END-IF
END-PERFORM
TERMINATE sales-report
CLOSE sales, report-out
.

Use of the Report Writer facility tended to vary considerably; some organizations used it extensively and some not at all. In addition, implementations of Report Writer ranged in quality, with those at the lower end sometimes using excessive amounts of memory at runtime.

Procedure division

Procedures

The sections and paragraphs in the procedure division (collectively called procedures) can be used as labels and as simple subroutines. Unlike in other divisions, paragraphs do not need to be in sections. Execution goes down through the procedures of a program until it is terminated. To use procedures as subroutines, the PERFORM verb is used.
A PERFORM statement somewhat resembles a procedure call in a modern language in the sense that execution returns to the code following the statement at the end of the called code; however, it does not provide any mechanism for parameter passing or for returning a result value. If a subroutine is invoked using a simple statement like PERFORM subroutine, then control returns at the end of the called procedure. However, PERFORM is unusual in that it may be used to call a range spanning a sequence of several adjacent procedures. This is done with the PERFORM ... THRU construct:

PROCEDURE so-and-so.
    PERFORM ALPHA
    PERFORM ALPHA THRU GAMMA
    STOP RUN.
ALPHA.
    DISPLAY 'A'.
BETA.
    DISPLAY 'B'.
GAMMA.
    DISPLAY 'C'.

The output of this program will be: "A A B C".

PERFORM also differs from conventional procedure calls in that there is, at least traditionally, no notion of a call stack. As a consequence, nested invocations are possible (a sequence of code being PERFORM'ed may execute a PERFORM statement itself), but they require extra care if parts of the same code are executed by both invocations. The problem arises when the code in the inner invocation reaches the exit point of the outer invocation. More formally, if control passes through the exit point of a PERFORM invocation that was called earlier but has not completed yet, the COBOL 2002 standard officially stipulates that the behaviour is undefined.

The reason is that COBOL, rather than a "return address", operates with what may be called a continuation address. When control flow reaches the end of any procedure, the continuation address is looked up and control is transferred to that address. Before the program runs, the continuation address for every procedure is initialised to the start address of the procedure that comes next in the program text so that, if no PERFORM statements happen, control flows from top to bottom through the program.
But when a PERFORM statement executes, it modifies the continuation address of the called procedure (or the last procedure of the called range, if PERFORM ... THRU was used), so that control will return to the call site at the end. The original value is saved and is restored afterwards, but there is only one storage position. If two nested invocations operate on overlapping code, they may interfere with each other's management of the continuation address in several ways. The following example illustrates the problem:

LABEL1.
    DISPLAY '1'
    PERFORM LABEL2 THRU LABEL3
    STOP RUN.
LABEL2.
    DISPLAY '2'
    PERFORM LABEL3 THRU LABEL4.
LABEL3.
    DISPLAY '3'.
LABEL4.
    DISPLAY '4'.

One might expect that the output of this program would be "1 2 3 4 3": after displaying "2", the second PERFORM causes "3" and "4" to be displayed, and then the first invocation continues on with "3". In traditional COBOL implementations, this is not the case. Rather, the first PERFORM statement sets the continuation address at the end of LABEL3 so that it will jump back to the call site inside LABEL1. The second PERFORM statement sets the return at the end of LABEL4 but does not modify the continuation address of LABEL3, expecting it to be the default continuation. Thus, when the inner invocation arrives at the end of LABEL3, it jumps back to the outer PERFORM statement, and the program stops having printed just "1 2 3". On the other hand, in some COBOL implementations like the open-source TinyCOBOL compiler, the two PERFORM statements do not interfere with each other and the output is indeed "1 2 3 4 3". Therefore, the behaviour in such cases is not only (perhaps) surprising, it is also not portable. A special consequence of this limitation is that PERFORM cannot be used to write recursive code. Another simple example illustrates this:

MOVE 1 TO A
PERFORM LABEL
STOP RUN.
LABEL.
    DISPLAY A
    IF A < 3
        ADD 1 TO A
        PERFORM LABEL
    END-IF
    DISPLAY 'END'.

One might expect that the output is "1 2 3 END END END", and in fact that is what some COBOL compilers will produce.
But some compilers, like IBM COBOL, will produce code that prints "1 2 3 END END END END ..." and so on, printing "END" over and over in an endless loop. Since there is limited space to store backup continuation addresses, the backups get overwritten in the course of recursive invocations, and all that can be restored is the jump back to DISPLAY 'END'. Statements COBOL 2014 has 47 statements (also called verbs), which can be grouped into the following broad categories: control flow, I/O, data manipulation and the report writer. The report writer statements are covered in the report writer section. Control flow COBOL's conditional statements are IF and EVALUATE. EVALUATE is a switch-like statement with the added capability of evaluating multiple values and conditions. This can be used to implement decision tables. For example, the following might be used to control a CNC lathe:

EVALUATE TRUE ALSO desired-speed ALSO current-speed
    WHEN lid-closed ALSO min-speed THRU max-speed ALSO LESS THAN desired-speed
        PERFORM speed-up-machine
    WHEN lid-closed ALSO min-speed THRU max-speed ALSO GREATER THAN desired-speed
        PERFORM slow-down-machine
    WHEN lid-open ALSO ANY ALSO NOT ZERO
        PERFORM emergency-stop
    WHEN OTHER
        CONTINUE
END-EVALUATE

The PERFORM statement is used to define loops which are executed until a condition is true (not while true, which is more common in other languages). It is also used to call procedures or ranges of procedures (see the procedures section for more details). CALL and INVOKE call subprograms and methods, respectively. The name of the subprogram/method is contained in a string which may be a literal or a data item. Parameters can be passed by reference, by content (where a copy is passed by reference) or by value (but only if a prototype is available). CANCEL unloads subprograms from memory. GO TO causes the program to jump to a specified procedure. The GOBACK statement is a return statement and the STOP statement stops the program.
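The EVALUATE decision table for the lathe translates naturally into a chain of guarded conditions. A hypothetical Python rendering (the speed limits and action names are invented placeholders):

```python
# Hypothetical Python rendering of the CNC-lathe decision table above.
# MIN_SPEED/MAX_SPEED and the returned action names are invented for illustration.

MIN_SPEED, MAX_SPEED = 100, 5000

def control(lid_closed, desired_speed, current_speed):
    # WHEN lid-closed ALSO min-speed THRU max-speed ALSO LESS THAN desired-speed
    if lid_closed and MIN_SPEED <= desired_speed <= MAX_SPEED \
            and current_speed < desired_speed:
        return "speed-up-machine"
    # WHEN lid-closed ALSO min-speed THRU max-speed ALSO GREATER THAN desired-speed
    if lid_closed and MIN_SPEED <= desired_speed <= MAX_SPEED \
            and current_speed > desired_speed:
        return "slow-down-machine"
    # WHEN lid-open ALSO ANY ALSO NOT ZERO
    if not lid_closed and current_speed != 0:
        return "emergency-stop"
    return "continue"                           # WHEN OTHER

print(control(True, 1000, 500))    # speed-up-machine
print(control(False, 1000, 300))   # emergency-stop
```

The three EVALUATE subjects (TRUE, desired-speed, current-speed) become the three clauses of each guard, which is why EVALUATE is a convenient notation for decision tables.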
The EXIT statement has six different formats: it can be used as a return statement, a break statement, a continue statement, an end marker or to leave a procedure. Exceptions are raised by a RAISE statement and caught with a handler, or declarative, defined in the DECLARATIVES portion of the procedure division. Declaratives are sections beginning with a USE statement which specify the errors to handle. Exceptions can be names or objects. RESUME is used in a declarative to jump to the statement after the one that raised the exception or to a procedure outside the DECLARATIVES. Unlike other languages, uncaught exceptions may not terminate the program and the program can proceed unaffected. I/O File I/O is handled by the self-describing OPEN, CLOSE, READ, and WRITE statements along with a further three: REWRITE, which updates a record; START, which selects subsequent records to access by finding a record with a certain key; and UNLOCK, which releases a lock on the last record accessed. User interaction is done using ACCEPT and DISPLAY. Data manipulation The following verbs manipulate data: INITIALIZE, which sets data items to their default values. MOVE, which assigns values to data items; MOVE CORRESPONDING assigns corresponding like-named fields. SET, which has 15 formats: it can modify indices, assign object references and alter table capacities, among other functions. ADD, SUBTRACT, MULTIPLY, DIVIDE, and COMPUTE, which handle arithmetic (with COMPUTE assigning the result of a formula to a variable). ALLOCATE and FREE, which handle dynamic memory. VALIDATE, which validates and distributes data as specified in an item's description in the data division. STRING and UNSTRING, which concatenate and split strings, respectively. INSPECT, which tallies or replaces instances of specified substrings within a string. SEARCH, which searches a table for the first entry satisfying a condition. Files and tables are sorted using SORT and the MERGE verb merges and sorts files. The RELEASE verb provides records to sort and RETURN retrieves sorted records in order. Scope termination Some statements, such as IF and READ, may themselves contain statements.
Such statements may be terminated in two ways: by a period (.), which terminates all unterminated statements contained, or by a scope terminator, which terminates the nearest matching open statement.

*> Terminator period ("implicit termination")
IF invalid-record
    IF no-more-records
        NEXT SENTENCE
    ELSE
        READ record-file
            AT END SET no-more-records TO TRUE.

*> Scope terminators ("explicit termination")
IF invalid-record
    IF no-more-records
        CONTINUE
    ELSE
        READ record-file
            AT END SET no-more-records TO TRUE
        END-READ
    END-IF
END-IF

Nested statements terminated with a period are a common source of bugs. For example, examine the following code:

IF x
    DISPLAY y.
DISPLAY z.

Here, the intent is to display y and z if condition x is true. However, z will be displayed whatever the value of x because the IF statement is terminated by an erroneous period after DISPLAY y. Another bug is
a body or a class of people who work at a common activity, generally in a structured or hierarchical organization. A location in which a crew works is called a crewyard or a workyard. The word has nautical resonances: the tasks involved in operating a ship, particularly a sailing ship, providing numerous specialities within a ship's crew, often organised with a chain of command. Traditional nautical usage strongly distinguishes officers from crew, though the two groups combined form the ship's company. Members of a crew are often referred to by the title Crewman. Crew also refers to the sport of rowing, where teams row competitively in racing shells. See also For a specific sporting usage, see rowing crew. For filmmaking usage, see film crew. For live music usage, see
response variable without a complete three-level factorial Complementary cumulative distribution function Continuous collision detection, especially in rigid-body dynamics Countercurrent distribution, used for separating mixtures Core complex die, an element of AMD Zen 3 microprocessors Medicine Canine compulsive disorder, a behavioral condition in dogs, similar to human obsessive-compulsive disorder (OCD) Caput-collum-diaphyseal angle, the angle between the neck and the shaft of the femur in the hip Cleidocranial dysostosis (also called cleidocranial dysplasia), a genetic abnormality in humans Central core disease, a rare neuromuscular disorder Congenital chloride diarrhea, a rare disorder in babies Continuity of Care Document, an XML-based markup standard for patient medical document exchange Cross-reactive carbohydrate determinants, protein-linked carbohydrate structures that have a role in the phenomenon of cross-reactivity in allergic patients Cortical collecting duct, a segment of the kidney Politics and government Census county division, a term used by the US Census Bureau Center City District, an economic development agency for the Center City area of Philadelphia Consular Consolidated Database, a database used for visa processing by the Bureau
will travel. Simon Sze details the advantages of a buried-channel device: This thin layer (= 0.2–0.3 micron) is fully depleted and the accumulated photogenerated charge is kept away from the surface. This structure has the advantages of higher transfer efficiency and lower dark current, from reduced surface recombination. The penalty is smaller charge capacity, by a factor of 2–3 compared to the surface-channel CCD. The gate oxide, i.e. the capacitor dielectric, is grown on top of the epitaxial layer and substrate. Later in the process, polysilicon gates are deposited by chemical vapor deposition, patterned with photolithography, and etched in such a way that the separately phased gates lie perpendicular to the channels. The channels are further defined by utilization of the LOCOS process to produce the channel stop region. Channel stops are thermally grown oxides that serve to isolate the charge packets in one column from those in another. These channel stops are produced before the polysilicon gates are, as the LOCOS process utilizes a high-temperature step that would destroy the gate material. The channel stops are parallel to, and exclusive of, the channel, or "charge carrying", regions. Channel stops often have a p+ doped region underlying them, providing a further barrier to the electrons in the charge packets (this discussion of the physics of CCD devices assumes an electron transfer device, though hole transfer is possible). The clocking of the gates, alternately high and low, will forward and reverse bias the diode that is provided by the buried channel (n-doped) and the epitaxial layer (p-doped). This will cause the CCD to deplete, near the p–n junction and will collect and move the charge packets beneath the gates—and within the channels—of the device. CCD manufacturing and operation can be optimized for different uses. The above process describes a frame transfer CCD. 
While CCDs may be manufactured on a heavily doped p++ wafer it is also possible to manufacture a device inside p-wells that have been placed on an n-wafer. This second method, reportedly, reduces smear, dark current, and infrared and red response. This method of manufacture is used in the construction of interline-transfer devices. Another version of CCD is called a peristaltic CCD. In a peristaltic charge-coupled device, the charge-packet transfer operation is analogous to the peristaltic contraction and dilation of the digestive system. The peristaltic CCD has an additional implant that keeps the charge away from the silicon/silicon dioxide interface and generates a large lateral electric field from one gate to the next. This provides an additional driving force to aid in transfer of the charge packets. Architecture The CCD image sensors can be implemented in several different architectures. The most common are full-frame, frame-transfer, and interline. The distinguishing characteristic of each of these architectures is their approach to the problem of shuttering. In a full-frame device, all of the image area is active, and there is no electronic shutter. A mechanical shutter must be added to this type of sensor or the image smears as the device is clocked or read out. With a frame-transfer CCD, half of the silicon area is covered by an opaque mask (typically aluminum). The image can be quickly transferred from the image area to the opaque area or storage region with acceptable smear of a few percent. That image can then be read out slowly from the storage region while a new image is integrating or exposing in the active area. Frame-transfer devices typically do not require a mechanical shutter and were a common architecture for early solid-state broadcast cameras. The downside to the frame-transfer architecture is that it requires twice the silicon real estate of an equivalent full-frame device; hence, it costs roughly twice as much. 
The interline architecture extends this concept one step further and masks every other column of the image sensor for storage. In this device, only one pixel shift has to occur to transfer from image area to storage area; thus, shutter times can be less than a microsecond and smear is essentially eliminated. The advantage is not free, however, as the imaging area is now covered by opaque strips, dropping the fill factor to approximately 50 percent and the effective quantum efficiency by an equivalent amount. Modern designs have addressed this deleterious characteristic by adding microlenses on the surface of the device to direct light away from the opaque regions and onto the active area. Microlenses can bring the fill factor back up to 90 percent or more depending on pixel size and the overall system's optical design. The choice of architecture comes down to one of utility. If the application cannot tolerate an expensive, failure-prone, power-intensive mechanical shutter, an interline device is the right choice. Consumer snap-shot cameras have used interline devices. On the other hand, for those applications that require the best possible light collection and issues of money, power and time are less important, the full-frame device is the right choice. Astronomers tend to prefer full-frame devices. The frame-transfer falls in between and was a common choice before the fill-factor issue of interline devices was addressed. Today, frame-transfer is usually chosen when an interline architecture is not available, such as in a back-illuminated device. CCDs containing grids of pixels are used in digital cameras, optical scanners, and video cameras as light-sensing devices. They commonly respond to 70 percent of the incident light (meaning a quantum efficiency of about 70 percent) making them far more efficient than photographic film, which captures only about 2 percent of the incident light.
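To put numbers on the fill-factor trade-off, a small sketch using the figures quoted above (the helper name is invented; effective QE is simply QE scaled by the light-sensitive fraction of each pixel):

```python
# Effective quantum efficiency from the figures in the text: an interline
# mask drops the fill factor to ~50%, and microlenses recover it to ~90%.

BASE_QE = 0.70          # typical CCD quantum efficiency (~70% of incident light)

def effective_qe(fill_factor, qe=BASE_QE):
    """QE scaled by the fraction of the pixel that is actually light-sensitive."""
    return qe * fill_factor

interline_bare = effective_qe(0.50)    # opaque strips only
interline_lens = effective_qe(0.90)    # with microlenses
film = 0.02                            # photographic film captures ~2%

print(round(interline_bare, 2), round(interline_lens, 2))  # 0.35 0.63
```

Even the bare interline sensor remains over an order of magnitude more efficient than film, which is why the architecture is acceptable for consumer cameras.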
Most common types of CCDs are sensitive to near-infrared light, which allows infrared photography, night-vision devices, and zero lux (or near zero lux) video-recording/photography. For normal silicon-based detectors, the sensitivity is limited to 1.1 μm. One other consequence of their sensitivity to infrared is that infrared from remote controls often appears on CCD-based digital cameras or camcorders if they do not have infrared blockers. Cooling reduces the array's dark current, improving the sensitivity of the CCD to low light intensities, even for ultraviolet and visible wavelengths. Professional observatories often cool their detectors with liquid nitrogen to reduce the dark current, and therefore the thermal noise, to negligible levels. Frame transfer CCD The frame transfer CCD imager was the first imaging structure proposed for CCD Imaging by Michael Tompsett at Bell Laboratories. A frame transfer CCD is a specialized CCD, often used in astronomy and some professional video cameras, designed for high exposure efficiency and correctness. The normal functioning of a CCD, astronomical or otherwise, can be divided into two phases: exposure and readout. During the first phase, the CCD passively collects incoming photons, storing electrons in its cells. After the exposure time is passed, the cells are read out one line at a time. During the readout phase, cells are shifted down the entire area of the CCD. While they are shifted, they continue to collect light. Thus, if the shifting is not fast enough, errors can result from light that falls on a cell holding charge during the transfer. These errors are referred to as "vertical smear" and cause a strong light source to create a vertical line above and below its exact location. In addition, the CCD cannot be used to collect light while it is being read out. 
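The vertical-smear mechanism just described can be illustrated with a toy numeric model (all numbers invented): cells keep collecting light from a fixed bright source while the column is shifted out during readout, leaving a streak.

```python
# Toy numeric model of vertical smear (all numbers invented). A bright source
# keeps illuminating the moving cells while a column is shifted out, so rows
# read after the bright row pick up a faint streak.

ROWS = 8
light = [0.0] * ROWS
light[3] = 100.0                 # flux per shift period from one bright spot

column = [0.0] * ROWS            # accumulated charge in one pixel column
for _ in range(50):              # exposure phase: 50 shift periods, no motion
    column[3] += light[3]

readout = []
for _ in range(ROWS):            # readout phase: shift one row per period
    readout.append(column.pop(0))     # row nearest the output is read out
    column.append(0.0)                # an empty row enters at the far end
    for r in range(len(column)):      # cells still collect light while moving
        column[r] += light[r]

print(readout)   # the 5000-count source is followed by a 100-count streak
```

In this simplified model the streak appears only on one side of the source, because only readout-phase illumination is simulated; in a real sensor charge also passes the source on the way in, producing smear above and below.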
Unfortunately, a faster shifting requires a faster readout, and a faster readout can introduce errors in the cell charge measurement, leading to a higher noise level. A frame transfer CCD solves both problems: it has a shielded, not light sensitive, area containing as many cells as the area exposed to light. Typically, this area is covered by a reflective material such as aluminium. When the exposure time is up, the cells are transferred very rapidly to the hidden area. Here, safe from any incoming light, cells can be read out at any speed one deems necessary to correctly measure the cells' charge. At the same time, the exposed part of the CCD is collecting light again, so no delay occurs between successive exposures. The disadvantage of such a CCD is the higher cost: the cell area is basically doubled, and more complex control electronics are needed. Intensified charge-coupled device An intensified charge-coupled device (ICCD) is a CCD that is optically connected to an image intensifier that is mounted in front of the CCD. An image intensifier includes three functional elements: a photocathode, a micro-channel plate (MCP) and a phosphor screen. These three elements are mounted one close behind the other in the mentioned sequence. The photons which are coming from the light source fall onto the photocathode, thereby generating photoelectrons. The photoelectrons are accelerated towards the MCP by an electrical control voltage, applied between photocathode and MCP. The electrons are multiplied inside of the MCP and thereafter accelerated towards the phosphor screen. The phosphor screen finally converts the multiplied electrons back to photons which are guided to the CCD by a fiber optic or a lens. An image intensifier inherently includes a shutter functionality: If the control voltage between the photocathode and the MCP is reversed, the emitted photoelectrons are not accelerated towards the MCP but return to the photocathode. 
Thus, no electrons are multiplied and emitted by the MCP, no electrons are going to the phosphor screen and no light is emitted from the image intensifier. In this case no light falls onto the CCD, which means that the shutter is closed.
The process of reversing the control voltage at the photocathode is called gating and therefore ICCDs are also called gateable CCD cameras. Besides the extremely high sensitivity of ICCD cameras, which enable single photon detection, the gateability is one of the major advantages of the ICCD over the EMCCD cameras. The highest performing ICCD cameras enable shutter times as short as 200 picoseconds. ICCD cameras are in general somewhat higher in price than EMCCD cameras because they need the expensive image intensifier.
On the other hand, EMCCD cameras need a cooling system to cool the EMCCD chip down to temperatures around . This cooling system adds additional costs to the EMCCD camera and often yields heavy condensation problems in the application. ICCDs are used in night vision devices and in various scientific applications. Electron-multiplying CCD An electron-multiplying CCD (EMCCD, also known as an L3Vision CCD, a product commercialized by e2v Ltd., GB, L3CCD or Impactron CCD, a now-discontinued product offered in the past by Texas Instruments) is a charge-coupled device in which a gain register is placed between the shift register and the output amplifier. The gain register is split up into a large number of stages. In each stage, the electrons are multiplied by impact ionization in a similar way to an avalanche diode. The gain probability at every stage of the register is small (P < 2%), but as the number of elements is large (N > 500), the overall gain can be very high (> 1000), with single input electrons giving many thousands of output electrons. Reading a signal from a CCD gives a noise background, typically a few electrons. In an EMCCD, this noise is superimposed on many thousands of electrons rather than a single electron; the devices' primary advantage is thus their negligible readout noise. The use of avalanche breakdown for amplification of photo charges had already been described by George E. Smith/Bell Telephone Laboratories in 1973. EMCCDs show a similar sensitivity to intensified CCDs (ICCDs). However, as with ICCDs, the gain that is applied in the gain register is stochastic and the exact gain that has been applied to a pixel's charge is impossible to know. At high gains (> 30), this uncertainty has the same effect on the signal-to-noise ratio (SNR) as halving the quantum efficiency (QE) with respect to operation with a gain of unity.
However, at very low light levels (where the quantum efficiency is most important), it can be assumed that a pixel either contains an electron—or not. This removes the noise associated with the stochastic multiplication at the risk of counting multiple electrons in the same pixel as a single electron. To avoid multiple counts in one pixel due to coincident photons in this mode of operation, high frame rates are essential. The dispersion in the gain is shown in the graph on the right. For multiplication registers with many elements and large gains it is well modelled by the equation: P(n) = (n − m + 1)^(m − 1) exp(−(n − m + 1)/g) / ((m − 1)! g^m) for n ≥ m, where P is the probability of getting n output electrons given m input electrons and a total mean multiplication register gain of g. Because of the lower costs and better resolution, EMCCDs are capable of replacing ICCDs in many applications. ICCDs still have the advantage that they can be gated very fast and thus are useful in applications like range-gated imaging. EMCCD cameras indispensably need a cooling system—using either thermoelectric cooling or liquid nitrogen—to cool the chip down to temperatures in the range of . This cooling system unfortunately adds additional costs to the EMCCD imaging system and may yield condensation problems in the application. However, high-end EMCCD cameras are equipped with a permanent hermetic vacuum system confining the chip to avoid condensation issues. The low-light capabilities of EMCCDs find use in astronomy and biomedical research, among other fields. In particular, their low noise at high readout speeds makes them very useful for a variety of astronomical applications involving low light sources and transient events such as lucky imaging of faint stars, high speed photon counting photometry, Fabry-Pérot spectroscopy and high-resolution spectroscopy.
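The stochastic nature of the gain register can be checked with a Monte-Carlo sketch. This assumes a simplified per-stage model in which every electron is duplicated independently with probability p, giving a mean overall gain of (1 + p)^N; the parameters are deliberately small toy values (real registers use N > 500 stages):

```python
import random

# Monte-Carlo sketch of an EMCCD gain register (parameters are illustrative toy
# values; real devices use N > 500 stages with even smaller probabilities).
# Each stage duplicates every electron independently with probability p, so the
# mean overall gain is (1 + p) ** N, but the realised gain of any one charge
# packet is stochastic -- the dispersion discussed in the text.

random.seed(1)
p, N = 0.02, 100
mean_gain = (1 + p) ** N            # ~7.24 for these toy parameters

def multiply(m):
    """Push m input electrons through all N stages of the gain register."""
    electrons = m
    for _ in range(N):
        electrons += sum(1 for _ in range(electrons) if random.random() < p)
    return electrons

trials = [multiply(1) for _ in range(2000)]
avg = sum(trials) / len(trials)
print(round(mean_gain, 2), round(avg, 2))  # sample mean fluctuates around model gain
```

Inspecting individual `trials` values also shows the wide spread of realised gains for a single input electron, which is why the exact gain applied to any one pixel cannot be known.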
More recently, these types of CCDs have broken into the field of biomedical research in low-light applications including small animal imaging, single-molecule imaging, Raman spectroscopy, super resolution microscopy as well as a wide variety of modern fluorescence microscopy techniques thanks to greater SNR in low-light conditions in comparison with traditional CCDs and ICCDs. In terms of noise, commercial EMCCD cameras typically have clock-induced charge (CIC) and dark current (dependent on the extent |
memory (DRAM) used for primary storage, and static random-access memory (SRAM) used for CPU cache. Most semiconductor memory is organized into memory cells each storing one bit (0 or 1). Flash memory organization includes both one bit per memory cell and multi-level cell capable of storing multiple bits per cell. The memory cells are grouped into words of fixed word length, for example, 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. History In the early 1940s, memory technology often permitted a capacity of a few bytes. The first electronic programmable digital computer, the ENIAC, using thousands of vacuum tubes, could perform simple calculations involving 20 numbers of ten decimal digits stored in the vacuum tubes. The next significant advance in computer memory came with acoustic delay-line memory, developed by J. Presper Eckert in the early 1940s. Through the construction of a glass tube filled with mercury and plugged at each end with a quartz crystal, delay lines could store bits of information in the form of sound waves propagating through the mercury, with the quartz crystals acting as transducers to read and write bits. Delay-line memory was limited to a capacity of up to a few thousand bits. Two alternatives to the delay line, the Williams tube and Selectron tube, originated in 1946, both using electron beams in glass tubes as means of storage. Using cathode ray tubes, Fred Williams invented the Williams tube, which was the first random-access computer memory. The Williams tube was able to store more information than the Selectron tube (the Selectron was limited to 256 bits, while the Williams tube could store thousands) and was less expensive. The Williams tube was nevertheless frustratingly sensitive to environmental disturbances. Efforts began in the late 1940s to find non-volatile memory. Magnetic-core memory allowed for recall of memory after power loss.
It was developed by Frederick W. Viehe and An Wang in the late 1940s, and improved by Jay Forrester and Jan A. Rajchman in the early 1950s, before being commercialised with the Whirlwind computer in 1953. Magnetic-core memory was the dominant form of memory until the development of MOS semiconductor memory in the 1960s. The first semiconductor memory was implemented as a flip-flop circuit in the early 1960s using bipolar transistors. Semiconductor memory made from discrete devices was first shipped by Texas Instruments to the United States Air Force in 1961. The same year, the concept of solid-state memory on an integrated circuit (IC) chip was proposed by applications engineer Bob Norman at Fairchild Semiconductor. The first bipolar semiconductor memory IC chip was the SP95 introduced by IBM in 1965. While semiconductor memory offered improved performance over magnetic-core memory, it remained larger and more expensive and did not displace magnetic-core memory until the late 1960s. MOS memory The invention of the metal–oxide–semiconductor field-effect transistor (MOSFET) enabled the practical use of metal–oxide–semiconductor (MOS) transistors as memory cell storage elements. MOS memory was developed by John Schmidt at Fairchild Semiconductor in 1964. In addition to higher performance, MOS semiconductor memory was cheaper and consumed less power than magnetic core memory. In 1965, J. Wood and R. Ball of the Royal Radar Establishment proposed digital storage systems that use CMOS (complementary MOS) memory cells, in addition to MOSFET power devices for the power supply, switched cross-coupling, switches and delay-line storage. The development of silicon-gate MOS integrated circuit (MOS IC) technology by Federico Faggin at Fairchild in 1968 enabled the production of MOS memory chips. NMOS memory was commercialized by IBM in the early 1970s. MOS memory overtook magnetic core memory as the dominant memory technology in the early 1970s.
The two main types of volatile random-access memory (RAM) are static random-access memory (SRAM) and dynamic random-access memory (DRAM). Bipolar SRAM was invented by Robert Norman at Fairchild Semiconductor in 1963, followed by the development of MOS SRAM by John Schmidt at Fairchild in 1964. SRAM became an alternative to magnetic-core memory, but requires six transistors for each bit of data. Commercial use of SRAM began in 1965, when IBM introduced their SP95 SRAM chip for the System/360 Model 95. Toshiba introduced bipolar DRAM memory cells for its Toscal BC-1411 electronic calculator in 1965. While it offered improved performance, bipolar DRAM could not compete with the lower price of the then dominant magnetic-core memory. MOS technology is the basis for modern DRAM. In 1966, Robert H. Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory. While examining the characteristics of MOS technology, he found it was possible to build capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell. In 1967, Dennard filed a patent for a single-transistor DRAM memory cell based on MOS technology. This led to the first commercial DRAM IC chip, the Intel 1103 in October 1970. Synchronous dynamic random-access memory (SDRAM) later debuted with the Samsung KM48SL2000 chip in 1992. The term memory is also often used to refer to non-volatile memory including read-only memory (ROM) through modern flash memory. Programmable read-only memory (PROM) was invented by Wen Tsing Chow in 1956, while working for the Arma Division of the American Bosch Arma Corporation. 
In 1967, Dawon Kahng and Simon Sze of Bell Labs proposed that the floating gate of a MOS semiconductor device could be used for the cell of a reprogrammable ROM, which led to Dov Frohman of Intel inventing EPROM (erasable PROM) in 1971. EEPROM (electrically erasable PROM) was developed by Yasuo Tarui, Yutaka Hayashi and Kiyoko Naga at the Electrotechnical Laboratory in 1972. Flash memory was invented by Fujio Masuoka at Toshiba in the early 1980s. Masuoka and colleagues presented the invention of NOR flash in 1984, and then NAND flash in 1987. Toshiba commercialized NAND flash memory in 1987. Developments in technology and economies of scale have made possible so-called very large memory (VLM) computers.

Volatile memory

Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). SRAM retains its contents as long as the power is connected and is simpler to interface, but uses six transistors per bit. Dynamic RAM is more complicated to interface and control, needing regular refresh cycles to prevent losing its contents, but uses only one transistor and one capacitor per bit, allowing it to reach much higher densities and much lower per-bit costs. SRAM is not worthwhile for desktop system memory, where DRAM dominates, but is used for cache memories. SRAM is commonplace in small embedded systems, which might only need tens of kilobytes or less. Volatile memory technologies that have attempted to compete with or replace SRAM and DRAM include Z-RAM and A-RAM.

Non-volatile memory

Non-volatile memory is computer memory that can retain the stored information even when not powered. Examples of non-volatile memory include read-only memory (ROM), flash memory, most types of magnetic computer storage devices (e.g. hard disk drives, floppy disks and magnetic tape), optical discs, and early computer storage methods such as paper tape and punched cards.
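The density consequence of the six-transistor SRAM cell versus the one-transistor-plus-capacitor DRAM cell can be sketched with simple arithmetic; the transistor budget below is an arbitrary illustrative number, not a real chip figure:

```python
# Back-of-the-envelope density comparison; the transistor budget is an
# arbitrary illustrative number, not a real chip figure.

TRANSISTOR_BUDGET = 6_000_000

sram_bits = TRANSISTOR_BUDGET // 6  # six transistors per SRAM bit
dram_bits = TRANSISTOR_BUDGET // 1  # one transistor (plus a capacitor) per DRAM bit

print(f"SRAM capacity: {sram_bits:,} bits")                  # 1,000,000 bits
print(f"DRAM capacity: {dram_bits:,} bits")                  # 6,000,000 bits
print(f"DRAM density advantage: {dram_bits // sram_bits}x")  # 6x
```

The roughly sixfold difference in cells per transistor is one reason DRAM dominates main memory while SRAM is reserved for caches.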
Forthcoming non-volatile memory technologies include FERAM, CBRAM, PRAM, STT-RAM, SONOS, RRAM, racetrack memory, NRAM, 3D XPoint, and millipede memory.

Semi-volatile memory

A third category of memory is "semi-volatile". The term describes a memory that retains data for some limited period after power is removed, but in which the data is ultimately lost. A typical goal when using a semi-volatile memory is to provide the high performance and durability associated with volatile memories while providing some benefits of a true non-volatile memory. For example, some non-volatile memory types can wear out, where a "worn" cell has increased volatility but otherwise continues to work. Data locations which are written frequently can thus be directed to use worn circuits. As long as the location is updated within some known retention time, the data stays valid. If the retention time "expires" without an update, then the value is copied to a less-worn circuit with longer retention. Writing first to the worn area allows a high write rate while avoiding wear on the not-worn circuits. As a second example, an STT-RAM can be made non-volatile by building large cells, but the cost per bit and write power go up, while the write speed goes down. Using small cells improves cost, power, and speed, but leads to semi-volatile behavior. In some applications the increased volatility can be managed to provide many benefits of a non-volatile memory, for example by removing power but forcing a wake-up before data is lost, or by caching read-only data and discarding the cached data if the power-off time exceeds the non-volatile threshold. The term semi-volatile is also used to describe semi-volatile behavior constructed from other memory types. For example, a volatile and a non-volatile memory may be combined, where an external signal copies data from the volatile memory to the non-volatile memory, but if power is removed without copying, the data is lost.
Or, a battery-backed volatile memory may be used: if external power is lost, there is some known period during which the battery can maintain the memory's contents before the data is lost.
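The wear-management scheme described above, in which frequently written data goes to worn, short-retention cells and is migrated to less-worn circuits when its retention window nears expiry, can be sketched as follows. The class, the retention times, and the two-pool structure are all illustrative assumptions, not a description of any real controller:

```python
# Hypothetical sketch of the wear-management policy described in the text:
# hot data lands in "worn" cells (fast writes, short retention); a periodic
# scrub migrates values whose retention is expiring to less-worn cells.
# All names and numbers are illustrative assumptions.

WORN_RETENTION = 5     # time units a worn cell holds data (assumed)
FRESH_RETENTION = 100  # retention of a less-worn cell (assumed)

class SemiVolatileStore:
    def __init__(self):
        self.worn = {}   # addr -> (value, expiry_time)
        self.fresh = {}  # addr -> (value, expiry_time)

    def write(self, addr, value, now):
        # Writes land in worn cells first: high write rate, and the
        # not-worn circuits are spared the wear.
        self.worn[addr] = (value, now + WORN_RETENTION)
        self.fresh.pop(addr, None)

    def scrub(self, now):
        # Periodic maintenance: values whose worn-cell retention has run
        # out are copied to less-worn cells with longer retention.
        for addr, (value, expiry) in list(self.worn.items()):
            if expiry <= now:
                del self.worn[addr]
                self.fresh[addr] = (value, now + FRESH_RETENTION)

    def read(self, addr, now):
        for pool in (self.worn, self.fresh):
            if addr in pool:
                value, expiry = pool[addr]
                if expiry > now:
                    return value
        return None  # retention expired without a scrub: data lost


store = SemiVolatileStore()
store.write(0x10, "hot", now=0)
store.scrub(now=5)                         # retention expired, value migrated
assert store.read(0x10, now=50) == "hot"   # survives in the less-worn pool
```

If the scrub never runs before the worn-cell retention expires, the read returns nothing, which is exactly the semi-volatile failure mode the text describes.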
central securities depository company
Control Data Corporation, former supercomputer company
CDC Software, a computer software company spun off from Control Data Corporation
ComfortDelGro Australia, a major Australian operator of buses formerly named ComfortDelGro Cabcharge
Construction Data Company, also known as CDC News and CDC Publishing, a commercial construction reporting service
Loong Air, by ICAO code

Other organizations
Cult of the Dead Cow (cDc), a computer hacker and DIY media organization

Places
Center Day Camp, North Windham, Maine, U.S.
, library in Quebec, Canada
Communicable Disease Centre, former hospital in Novena, Singapore
Cedar City Regional Airport, by IATA code

Science
Cholesterol-dependent cytolysin, exotoxins secreted by bacteria
Cell-division cycle in biology
cdc20
cdc25
Cdc42, cell-division cycle protein
Complement-dependent cytotoxicity
Conventional dendritic cell, cDC
Cross dehydrogenative coupling

Technology
Change data capture, to track changed data
Clock domain crossing of a signal
Connected Device Configuration, of required Java ME features
Communications daughter card for notebook computers
Carbide-derived carbon
USB communications device class

Other uses
CDC?, a children's book by William Steig
Combat Direction Center of an aircraft carrier
Cul de canard, duck feathers used in fly fishing
Continuous Discharge Certificate, seafarer's identity document

See also
C.DC., the Swiss botany author abbreviation of Anne Casimir de Candolle
Africa CDC (Centres for Disease
for Chronic Disease Prevention and Health Promotion
National Center for Environmental Health and Agency for Toxic Substances and Disease Registry
National Center for Injury Prevention and Control
Deputy Director – Infectious Diseases
National Center for Immunization and Respiratory Diseases
National Center for Emerging and Zoonotic Infectious Diseases (includes the Division of Global Migration and Quarantine, which issues quarantine orders)
National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention
National Institute for Occupational Safety and Health
Office of the Director
Chief of Staff
Chief Operating Officer
Human Resources Office
Office of Financial Resources
Office of Safety, Security, and Asset Management
Office of the Chief Information Officer
Chief Medical Officer
CDC Washington Office
Office of Equal Employment Opportunity
Associate Director – Communication
Associate Director – Laboratory Science and Safety
Associate Director – Policy and Strategy

The Office of Public Health Preparedness was created during the 2001 anthrax attacks, shortly after the terrorist attacks of September 11, 2001. Its purpose was to coordinate the government's response to a range of biological terrorism threats.

Locations

Most CDC centers are located in Atlanta. A few of the centers are based in or operate other domestic locations: The National Center for Emerging and Zoonotic Infectious Diseases' Division of Vector-Borne Diseases is based in Fort Collins, Colorado, with a branch in San Juan, Puerto Rico; its Arctic Investigations Program is based in Anchorage, Alaska. The National Center for Health Statistics is primarily located in Hyattsville, Maryland, with a branch in Research Triangle Park in North Carolina. The National Institute for Occupational Safety and Health's primary locations are Cincinnati, Ohio; Morgantown, West Virginia; Pittsburgh, Pennsylvania; Spokane, Washington; and Washington, D.C., with branches in Denver, Anchorage, and Atlanta.
The CDC Washington Office is based in Washington, D.C. Building 18, which opened in 2005 at the CDC's main Roybal campus (named in honor of the late Representative Edward R. Roybal), contains the premier BSL4 laboratory in the United States. In addition, CDC operates quarantine facilities in 20 cities in the U.S.

Budget

CDC's budget for fiscal year 2018 is $11.9 billion. The CDC offers grants that help many organizations each year advance health, safety and awareness at the community level throughout the United States. The CDC awards over 85 percent of its annual budget through these grants.

Workforce

CDC staff numbered approximately 15,000 personnel (including 6,000 contractors and 840 United States Public Health Service Commissioned Corps officers) in 170 occupations. Eighty percent held bachelor's degrees or higher; almost half had advanced degrees (a master's degree or a doctorate such as a PhD, D.O., or M.D.). Common CDC job titles include engineer, entomologist, epidemiologist, biologist, physician, veterinarian, behavioral scientist, nurse, medical technologist, economist, public health advisor, health communicator, toxicologist, chemist, computer scientist, and statistician. The CDC also operates a number of notable training and fellowship programs, including those indicated below.

Epidemic Intelligence Service (EIS)

The Epidemic Intelligence Service (EIS) is composed of "boots-on-the-ground disease detectives" who investigate public health problems domestically and globally. When called upon by a governmental body, EIS officers may embark on short-term epidemiological assistance assignments, or "Epi-Aids", to provide technical expertise in containing and investigating disease outbreaks. The EIS program is a model for the international Field Epidemiology Training Program.
Public Health Associate Program

The CDC also operates the Public Health Associate Program (PHAP), a two-year paid fellowship for recent college graduates to work in public health agencies all over the United States. PHAP was founded in 2007 and currently has 159 associates in 34 states.

Leadership

The Director of CDC is a Senior Executive Service position that may be filled either by a career employee or as a political appointment that does not require Senate confirmation, with the latter method typically being used. The director serves at the pleasure of the President and may be fired at any time. The CDC director concurrently serves as the Administrator of the Agency for Toxic Substances and Disease Registry. Twenty directors have served the CDC or its predecessor agencies, including three who served during the Trump administration (including Anne Schuchat, who twice served as acting director) and three who served during the Carter administration (including one acting director not shown here). Two served under Bill Clinton, but only one across the Nixon and Ford terms.

Louis L. Williams Jr., MD (1942–1943)
Mark D. Hollis, ScD (1944–1946)
Raymond A. Vonderlehr, MD (1947–1951)
Justin M. Andrews, ScD (1952–1953)
Theodore J. Bauer, MD (1953–1956)
Robert J. Anderson, MD, MPH (1956–1960)
Clarence A. Smith, MD, MPH (1960–1962)
James L. Goddard, MD, MPH (1962–1966)
David J. Sencer, MD, MPH (1966–1977)
William H. Foege, MD, MPH (1977–1983)
James O. Mason, MD, MPH, PhD (1983–1989)
William L. Roper, MD, MPH (1990–1993)
David Satcher, MD, PhD (1993–1998)
Jeffrey P. Koplan, MD, MPH (1998–2002)
Julie Gerberding, MD, MPH (2002–2008)
Thomas R. Frieden, MD, MPH (2009 – Jan 2017)
Anne Schuchat, MD, RADM USPHS (acting, Jan–July 2017)
Brenda Fitzgerald, MD (July 2017 – Jan 2018)
Anne Schuchat, MD (acting, Jan–Mar 2018)
Robert R. Redfield, MD (March 2018 – Jan 2021)
Rochelle Walensky, MD, MPH (Jan 2021 – present)

Datasets and survey systems

CDC Scientific Data, Surveillance, Health Statistics, and Laboratory Information
Behavioral Risk Factor Surveillance System (BRFSS), the world's largest ongoing telephone health-survey system
Mortality Medical Data System
Abortion statistics in the United States
CDC WONDER (Wide-ranging ONline Data for Epidemiologic Research)
Data systems of the National Center for Health Statistics

Areas of focus

Communicable diseases

The CDC's programs address more than 400 diseases, health threats, and conditions that are major causes of death, disease, and disability. The CDC's website has information on various infectious (and noninfectious) diseases, including smallpox, measles, and others.

Influenza

The CDC targets the transmission of influenza, including the H1N1 swine flu, and launched websites to educate people about hygiene.

Division of Select Agents and Toxins

Within the division are two programs: the Federal Select Agent Program (FSAP) and the Import Permit Program. The FSAP is run jointly with an office within the U.S. Department of Agriculture, regulating agents that can cause disease in humans, animals, and plants. The Import Permit Program regulates the importation of "infectious biological materials." The CDC runs a program that protects the public from rare and dangerous substances such as anthrax and the Ebola virus. The program, called the Federal Select Agent Program, calls for inspections of labs in the U.S. that work with dangerous pathogens. During the 2014 Ebola outbreak in West Africa, the CDC helped coordinate the return of two infected American aid workers for treatment at Emory University Hospital, the home of a special unit to handle highly infectious diseases. As a response to the 2014 Ebola outbreak, Congress passed a Continuing Appropriations Resolution allocating $30,000,000 towards CDC's efforts to fight the virus.
Non-communicable diseases

The CDC also works on non-communicable diseases, including chronic diseases caused by obesity, physical inactivity and tobacco use. The work of the Division for Cancer Prevention and Control, led from 2010 by Lisa C. Richardson, is also within this remit.

Antibiotic resistance

The CDC implemented their National Action Plan for Combating Antibiotic Resistant Bacteria as a measure against the spread of antibiotic resistance in the United States. This initiative has a budget of $161 million and includes the development of the Antibiotic Resistance Lab Network.

Global health

Globally, the CDC works with other organizations to address global health challenges and contain disease threats at their source. They work with many international organizations such as the World Health Organization (WHO) as well as ministries of health and other groups on the front lines of outbreaks. The agency maintains staff in more than 60 countries, including some from the U.S. but more from the countries in which they operate. The agency's global divisions include the Division of Global HIV and TB (DGHT), the Division of Parasitic Diseases and Malaria (DPDM), the Division of Global Health Protection (DGHP), and the Global Immunization Division (GID). The CDC is integral in working with the WHO to implement the International Health Regulations (IHR), an agreement between 196 countries to prevent, control, and report on the international spread of disease, through initiatives including the Global Disease Detection Program (GDD). The CDC is also a lead implementer of key U.S. global health initiatives such as the President's Emergency Plan for AIDS Relief (PEPFAR) and the President's Malaria Initiative.

Travelers' health

The CDC collects and publishes health information for travelers in a comprehensive book, CDC Health Information for International Travel, which is commonly known as the "yellow book."
The book is available online and in print as a new edition every other year and includes current travel health guidelines, vaccine recommendations, and information on specific travel destinations. The CDC also issues travel health notices on its website, consisting of three levels:
"Watch": Level 1 (practice usual precautions)
"Alert": Level 2 (practice enhanced precautions)
"Warning": Level 3 (avoid nonessential travel)

Vaccine safety

The CDC monitors the safety of vaccines in the U.S. via the Vaccine Adverse Event Reporting System (VAERS), a national vaccine safety surveillance program run by the CDC and the FDA. "VAERS detects possible safety issues with U.S. vaccines by collecting information about adverse events (possible side effects or health problems) after vaccination." The CDC's Safety Information by Vaccine page provides a list of the latest safety information, side effects, and answers to common questions about CDC-recommended vaccines.

Foundation

The CDC Foundation operates independently from the CDC as a private, nonprofit 501(c)(3) organization incorporated in the State of Georgia. The creation of the Foundation was authorized by section 399F of the Public Health Service Act to support the mission of the CDC in partnership with the private sector, including organizations, foundations, businesses, educational groups, and individuals.

Controversies

Tuskegee study of untreated syphilis in Black men

For 15 years, the CDC had direct oversight over the Tuskegee syphilis experiment. In the study, which lasted from 1932 to 1972, a group of Black men (nearly 400 of whom had syphilis) were studied to learn more about the disease. The disease was left untreated in the men, who had not given their informed consent to serve as research subjects. The Tuskegee Study was initiated in 1932 by the Public Health Service, with the CDC taking over the Tuskegee Health Benefit Program in 1995.
Gun violence

An area of partisan dispute related to CDC funding is the study of gun violence. Although the CDC was one of the first agencies to study gun violence as a public health issue, the Dickey Amendment, passed in 1996 with the support of the National Rifle Association, states that "none of the funds available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate or promote gun control". Advocates for gun control oppose the amendment and have tried to overturn it. As to the history of the amendment's passage: in 1992, Mark L. Rosenberg and five CDC colleagues founded the CDC's National Center for Injury Prevention and Control, with an annual budget of approximately $260,000. They focused on "identifying causes of firearm deaths, and methods to prevent them". Their first report, published in the New England Journal of Medicine in 1993 and entitled "Guns are a Risk Factor for Homicide in the Home", reported that the mere presence of a gun in a home increased the risk of a firearm-related death 2.7-fold, and of suicide fivefold, a "huge" increase. In response, the NRA launched a "campaign to shut down the Injury Center." Two conservative pro-gun groups, Doctors for Responsible Gun Ownership and Doctors for Integrity and Policy Research, joined the pro-gun effort, and, by 1995, politicians also supported the pro-gun initiative. In 1996, Jay Dickey (R-Arkansas) introduced the Dickey Amendment statement, "none of the funds available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate or promote gun control", as a rider in the 1996 appropriations bill. In 1997, "Congress re-directed all of the money for gun research to the study of traumatic brain injury." David Satcher, CDC head from 1993 to 1998, advocated for firearms research.
In 2016, over a dozen "public health insiders, including current and former CDC senior leaders" told The Trace interviewers that CDC senior leaders took a cautious stance in their interpretation of the Dickey Amendment and that they could do more but were afraid of political and personal retribution. Rosenberg told The Trace, "Right now, there is nothing stopping them from addressing this life-and-death national problem!" In 2013, the American Medical Association, the American Psychological Association, and the American Academy of Pediatrics sent a letter to the leaders of the Senate Appropriations Committee asking them "to support at least $10 million within the Centers for Disease Control and Prevention (CDC) in FY 2014 along with sufficient new taxes at the National Institutes of Health to support research into the causes and prevention of violence. Furthermore, we urge Members to oppose any efforts to reduce, eliminate, or condition CDC funding related to violence prevention research." Congress maintained the ban in subsequent budgets.

COVID-19

The first confirmed case of COVID-19 was discovered in the U.S. on January 20, 2020. But widespread COVID-19 testing in the United States was effectively stalled until February 28, when federal officials revised a faulty CDC test, and days afterward, when the Food and Drug Administration began loosening rules that had restricted other labs from developing tests. In February 2020, as the CDC's early coronavirus test malfunctioned nationwide, CDC Director Robert R. Redfield reassured fellow officials on the White House Coronavirus Task Force that the problem would be quickly solved, according to White House officials. It took about three weeks to sort out the failed test kits, which may have been contaminated during their processing in a CDC lab. Later investigations by the FDA and the Department of Health and Human Services found that the CDC had violated its own protocols in developing its tests.
In November 2020, NPR reported that an internal review document they obtained revealed that the CDC was aware that the first batch of tests, issued in early January, had a chance of being wrong 33 percent of the time, but released them anyway. In May 2020, The Atlantic reported that the CDC was conflating the results of two different types of coronavirus tests: tests that diagnose current coronavirus infections, and tests that measure whether someone has ever had the virus. The magazine said this distorted several important metrics, provided the country with an inaccurate picture of the state of the pandemic, and overstated the country's testing ability. In July 2020, the Trump administration ordered hospitals to bypass the CDC and instead send all COVID-19 patient information to a database at the Department of Health and Human Services. Some health experts opposed the order and warned that the data might become politicized or withheld from the public. On July 15, the CDC alarmed health
The National Institute for Occupational Safety and Health's primary locations are Cincinnati, Ohio; Morgantown, West Virginia; Pittsburgh, Pennsylvania; Spokane, Washington; and Washington, D.C., with branches in Denver, Anchorage, and Atlanta. The CDC Washington Office is based in Washington, D.C. Building 18, which opened in 2005 at the CDC's main Roybal campus (named in honor of the late Representative Edward R. Roybal), contains the premier BSL4 laboratory in the United States. In addition, CDC operates quarantine facilities in 20 cities in the U.S. Budget CDC's budget for fiscal year 2018 is $11.9billion. The CDC offers grants that help many organizations each year advance health, safety and awareness at the community level throughout the United States. The CDC awards over 85 percent of its annual budget through these grants. Workforce CDC staff numbered approximately 15,000 personnel (including 6,000 contractors and 840 United States Public Health Service Commissioned Corps officers) in 170 occupations. Eighty percent held bachelor's degrees or higher; almost half had advanced degrees (a master's degree or a doctorate such as a PhD, D.O., or M.D.). Common CDC job titles include engineer, entomologist, epidemiologist, biologist, physician, veterinarian, behavioral scientist, nurse, medical technologist, economist, public health advisor, health communicator, toxicologist, chemist, computer scientist, and statistician.The CDC also operates a number of notable training and fellowship programs, including those indicated below. Epidemic Intelligence Service (EIS) The Epidemic Intelligence Service (EIS) is composed of "boots-on-the-ground disease detectives" who investigate public health problems domestically and globally. When called upon by a governmental body, EIS officers may embark on short-term epidemiological assistance assignments, or "Epi-Aids", to provide technical expertise in containing and investigating disease outbreaks. 
The EIS program is a model for the international Field Epidemiology Training Program. Public Health Associates Program The CDC also operates the Public Health Associate Program (PHAP), a two-year paid fellowship for recent college graduates to work in public health agencies all over the United States. PHAP was founded in 2007 and currently has 159 associates in 34 states. Leadership The Director of CDC is a Senior Executive Service position that may be filled either by a career employee, or as a political appointment that does not require Senate confirmation, with the latter method typically being used. The director serves at the pleasure of the President and may be fired at any time. The CDC director concurrently serves as the Administrator of the Agency for Toxic Substances and Disease Registry. Twenty directors have served the CDC or its predecessor agencies, including three who have served during the Trump administration (including Anne Schuchat who twice served as acting director) and three who have served during the Carter administration (including one acting director not shown here). Two served under Bill Clinton, but only one under the Nixon to Ford terms. Louis L. Williams Jr., MD (1942–1943) Mark D. Hollis, ScD (1944–1946) Raymond A. Vonderlehr, MD (1947–1951) Justin M. Andrews, ScD (1952–1953) Theodore J. Bauer, MD (1953–1956) Robert J. Anderson, MD, MPH (1956–1960) Clarence A. Smith, MD, MPH (1960–1962) James L. Goddard, MD, MPH (1962–1966) David J. Sencer, MD, MPH (1966–1977) William H. Foege, MD, MPH (1977–1983) James O. Mason, MD, MPH, Ph.D (1983–1989) William L. Roper, MD, MPH (1990–1993) David Satcher, MD, PhD (1993–1998) Jeffrey P. Koplan, MD, MPH (1998–2002) Julie Gerberding, MD, MPH (2002–2008) Thomas R. Frieden, MD, MPH (2009 – Jan 2017) Anne Schuchat, MD, RADM USPHS (acting, Jan–July 2017) Brenda Fitzgerald, MD (July 2017 – Jan 2018) Anne Schuchat, MD (acting, Jan–Mar 2018) Robert R. 
Redfield, MD (March 2018–Jan 2021) Rochelle Walensky, MD, MPH (Jan 2021–present) Datasets and survey systems CDC Scientific Data, Surveillance, Health Statistics, and Laboratory Information. Behavioral Risk Factor Surveillance System (BRFSS), the world's largest, ongoing telephone health-survey system. Mortality Medical Data System. Abortion statistics in the United States CDC WONDER (Wide-ranging ONline Data for Epidemiologic Research) Data systems of the National Center for Health Statistics Areas of focus Communicable diseases The CDC's programs address more than 400 diseases, health threats, and conditions that are major causes of death, disease, and disability. The CDC's website has information on various infectious (and noninfectious) diseases, including smallpox, measles, and others. Influenza The CDC targets the transmission of influenza, including the H1N1 swine flu, and launched websites to educate people about hygiene. Division of Select Agents and Toxins Within the division are two programs: the Federal Select Agent Program (FSAP) and the Import Permit Program. The FSAP is run jointly with an office within the U.S. Department of Agriculture, regulating agents that can cause disease in humans, animals, and plants. The Import Permit Program regulates the importation of "infectious biological materials." The CDC runs a program that protects the public from rare and dangerous substances such as anthrax and the Ebola virus. The program, called the Federal Select Agent Program, calls for inspections of labs in the U.S. that work with dangerous pathogens. During the 2014 Ebola outbreak in West Africa, the CDC helped coordinate the return of two infected American aid workers for treatment at Emory University Hospital, the home of a special unit to handle highly infectious diseases. As a response to the 2014 Ebola outbreak, Congress passed a Continuing Appropriations Resolution allocating $30,000,000 towards CDC's efforts to fight the virus. 
Non-communicable diseases The CDC also works on non-communicable diseases, including chronic diseases caused by obesity, physical inactivity and tobacco use. The work of the Division for Cancer Prevention and Control, led from 2010 by Lisa C. Richardson, is also within this remit. Antibiotic resistance The CDC implemented its National Action Plan for Combating Antibiotic-Resistant Bacteria as a measure against the spread of antibiotic resistance in the United States. This initiative has a budget of $161 million and includes the development of the Antibiotic Resistance Lab Network. Global health Globally, the CDC works with other organizations to address global health challenges and contain disease threats at their source. They work with many international organizations such as the World Health Organization (WHO) as well as ministries of health and other groups on the front lines of outbreaks. The agency maintains staff in more than 60 countries, including some from the U.S. but more from the countries in which they operate. The agency's global divisions include the Division of Global HIV and TB (DGHT), the Division of Parasitic Diseases and Malaria (DPDM), the Division of Global Health Protection (DGHP), and the Global Immunization Division (GID). The CDC is integral in working with the WHO to implement the International Health Regulations (IHR), an agreement between 196 countries to prevent, control, and report on the international spread of disease, through initiatives including the Global Disease Detection Program (GDD). The CDC is also a lead implementer of key U.S. global health initiatives such as the President's Emergency Plan for AIDS Relief (PEPFAR) and the President's Malaria Initiative. Travelers' health The CDC collects and publishes health information for travelers in a comprehensive book, CDC Health Information for International Travel, which is commonly known as the "yellow book."
The book is available online and in print as a new edition every other year and includes current travel health guidelines, vaccine recommendations, and information on specific travel destinations. The CDC also issues travel health notices on its website, consisting of three levels: "Watch": Level 1 (practice usual precautions) "Alert": Level 2 (practice enhanced precautions) "Warning": Level 3 (avoid nonessential travel) Vaccine safety The CDC monitors the safety of vaccines in the U.S. via the Vaccine Adverse Event Reporting System (VAERS), a national vaccine safety surveillance program run by CDC and the FDA. "VAERS detects possible safety issues with U.S. vaccines by collecting information about adverse events (possible side effects or health problems) after vaccination." The CDC's Safety Information by Vaccine page provides a list of the latest safety information, side effects, and answers to common questions about CDC recommended vaccines. Foundation The CDC Foundation operates independently from CDC as a private, nonprofit 501(c)(3) organization incorporated in the State of Georgia. The creation of the Foundation was authorized by section 399F of the Public Health Service Act to support the mission of CDC in partnership with the private sector, including organizations, foundations, businesses, educational groups, and individuals. Controversies Tuskegee study of untreated syphilis in Black men For 15 years, the CDC had direct oversight over the Tuskegee syphilis experiment. In the study, which lasted from 1932 to 1972, a group of Black men (nearly 400 of whom had syphilis) were studied to learn more about the disease. The disease was left untreated in the men, who had not given their informed consent to serve as research subjects. The Tuskegee Study was initiated in 1932 by the Public Health Service, with the CDC taking over the Tuskegee Health Benefit Program in 1995. 
Gun violence An area of partisan dispute related to CDC funding is the study of gun violence. Although the CDC was one of the first agencies to study gun violence as a public health issue, the Dickey Amendment, passed in 1996 with the support of the National Rifle Association, states that "none of the funds available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate or promote gun control". Advocates for gun control oppose the amendment and have tried to overturn it. As to the history behind the passage of the Dickey Amendment: in 1992, Mark L. Rosenberg and five CDC colleagues founded the CDC's National Center for Injury Prevention and Control, with an annual budget of approximately $260,000. They focused on "identifying causes of firearm deaths, and methods to prevent them". Their first report, published in the New England Journal of Medicine in 1993 and titled "Gun Ownership as a Risk Factor for Homicide in the Home", reported that the mere presence of a gun in a home increased the risk of a firearm-related homicide 2.7-fold, and of suicide fivefold—a "huge"
In 1926, the British physicist Ralph H. Fowler observed that the relationship between the density, energy, and temperature of white dwarfs could be explained by viewing them as a gas of nonrelativistic, non-interacting electrons and nuclei that obey Fermi–Dirac statistics. This Fermi gas model was then used by the British physicist Edmund Clifton Stoner in 1929 to calculate the relationship among the mass, radius, and density of white dwarfs, assuming they were homogeneous spheres. Wilhelm Anderson applied a relativistic correction to this model, giving rise to a maximum possible mass of approximately . In 1930, Stoner derived the internal energy–density equation of state for a Fermi gas, and was then able to treat the mass–radius relationship in a fully relativistic manner, giving a limiting mass of approximately (for ). Stoner went on to derive the pressure–density equation of state, which he published in 1932. These equations of state were also previously published by the Soviet physicist Yakov Frenkel in 1928, together with some other remarks on the physics of degenerate matter. Frenkel's work, however, was ignored by the astronomical and astrophysical community. A series of papers published between 1931 and 1935 had its beginning on a trip from India to England in 1930, where the Indian physicist Subrahmanyan Chandrasekhar worked on the calculation of the statistics of a degenerate Fermi gas. In these papers, Chandrasekhar solved the hydrostatic equation together with the nonrelativistic Fermi gas equation of state, and also treated the case of a relativistic Fermi gas, giving rise to the value of the limit shown above. Chandrasekhar reviews this work in his Nobel Prize lecture. This value was also computed in 1932 by the Soviet physicist Lev Landau, who, however, did not apply it to white dwarfs and concluded that quantum laws might be invalid for stars heavier than 1.5 solar masses. Chandrasekhar's work on the limit aroused controversy, owing to the opposition of the British astrophysicist Arthur Eddington.
Eddington was aware that the existence of black holes was theoretically possible, and also realized that the existence of the limit made their formation possible. However, he was unwilling to accept that this could happen. After a talk by Chandrasekhar on the limit in 1935, he replied: Eddington's proposed solution to the perceived problem was to modify relativistic mechanics so as to make the law universally applicable, even for large . Although Niels Bohr, Fowler, Wolfgang Pauli, and other physicists agreed with Chandrasekhar's analysis, at the time, owing to Eddington's status, they were unwilling to publicly support Chandrasekhar., pp. 110–111 Through the rest of his life, Eddington held to his position in his writings, including his work on his fundamental theory. The drama associated with this disagreement is one of the main themes of Empire of the Stars, Arthur I. Miller's biography of Chandrasekhar. In Miller's view: Applications The core of a star is kept from collapsing by the heat generated by the fusion of nuclei of lighter elements into heavier ones. At various stages of stellar evolution, the nuclei required for this process are exhausted, and the core collapses, causing it to become denser and hotter. A critical situation arises when iron accumulates in the core, since iron nuclei are incapable of generating further energy through fusion. If the core becomes sufficiently dense, electron degeneracy pressure will play a significant part in stabilizing it against gravitational collapse. If a main-sequence star is not too massive (less than approximately 8 solar masses), it eventually sheds enough mass to form a white dwarf having mass below the Chandrasekhar limit, which consists of the former core of the star. For more-massive stars, electron degeneracy pressure does not keep the iron core from collapsing to very great density, leading to formation of a neutron star, black hole, or, speculatively, a quark star. 
(For very massive, low-metallicity stars, it is also possible that instabilities destroy the star completely.) During the collapse, neutrons are formed by the capture of electrons by protons in the process of electron capture, leading to the emission of neutrinos., pp. 1046–1047. The decrease in gravitational potential energy of the collapsing core releases a large amount of energy on the order of 10^46 joules (100 foes). Most of this energy is carried away by the emitted neutrinos and the kinetic energy of the expanding shell of gas; only about 1% is emitted as optical light. This process is believed to be responsible for supernovae of types Ib, Ic, and II. Type Ia supernovae derive their energy from runaway fusion of the nuclei in the interior of a white dwarf. This fate may befall carbon–oxygen white dwarfs that accrete matter from a companion giant star, leading to a steadily increasing mass. As the white dwarf's mass approaches the Chandrasekhar limit, its central density increases, and, as a result of compressional heating, its temperature also increases. This eventually ignites nuclear fusion reactions, leading to an immediate carbon detonation, which disrupts the star and causes the supernova., §5.1.2 A strong indication of the reliability of Chandrasekhar's formula is that the absolute magnitudes of supernovae of type Ia are all approximately the same. The fact that the roles of Stoner and Anderson are often overlooked in the astronomy community has been noted. Physics Electron degeneracy pressure is a quantum-mechanical effect arising from the Pauli exclusion principle. Since electrons are fermions, no two electrons can be in the same state, so not all electrons can be in the minimum-energy level. Rather, electrons must occupy a band of energy levels. Compression of the electron gas increases the number of electrons in a given volume and raises the maximum energy level in the occupied band.
Therefore, the energy of the electrons increases on compression, so pressure must be exerted on the electron gas to compress it, producing electron degeneracy pressure. With sufficient compression, electrons are forced into nuclei in the process of electron capture, relieving the pressure. In the nonrelativistic case, electron degeneracy pressure gives rise to an equation of state of the form P = K₁ρ^(5/3), where P is the pressure, ρ is the mass density, and K₁ is a constant. Solving the hydrostatic equation leads to a model white dwarf that is a polytrope of index 3/2, and therefore has radius inversely proportional to the cube root of its mass, and volume inversely proportional to its mass. As the mass of a model white dwarf increases, the typical energies to which degeneracy pressure forces the electrons are no longer negligible relative to their rest masses. The velocities of the electrons approach the speed of light, and special relativity must be taken into account. In the strongly relativistic limit, the equation of state takes the form P = K₂ρ^(4/3). This yields a polytrope of index 3, which has a total mass, M_limit, depending only on K₂. For a fully relativistic treatment, the equation of state used interpolates between the equation P = K₁ρ^(5/3) for small ρ and P = K₂ρ^(4/3) for large ρ. When this is done, the model radius still decreases with mass, but becomes zero at M_limit. This is the Chandrasekhar limit. The curves of radius against mass for the non-relativistic and relativistic models are shown in the graph. They are colored blue and green, respectively. μₑ has been set equal to 2. Radius is measured in standard solar radii or kilometers, and mass in standard solar masses. Calculated values for the limit vary depending on the nuclear composition of the mass. Chandrasekhar, eq. (36),, eq. (58),, eq.
(43) gives the following expression, based on the equation of state for an ideal Fermi gas: M_limit = (ω₃⁰ √(3π) / 2) (ħc/G)^(3/2) (1/(μₑ m_H))², where: ħ is the reduced Planck constant, c is the speed of light, G is the gravitational constant, μₑ is the average molecular weight per electron, which depends upon the chemical composition of the star, m_H is the mass of the hydrogen atom, and ω₃⁰ ≈ 2.018 is a constant connected with the solution to the Lane–Emden equation. As √(ħc/G) is the Planck mass, the limit is of the order of M_Pl³/m_H². The limiting mass can be obtained formally from Chandrasekhar's white dwarf equation by taking the limit of large central density. A more accurate value of the limit than that given by this simple model requires adjusting for various factors, including electrostatic interactions between the electrons and nuclei and effects caused by nonzero temperature. Lieb and Yau have given a rigorous derivation of the limit from a relativistic many-particle Schrödinger equation.
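The order of magnitude of this limit is easy to check numerically. The following sketch (an illustration added here, not part of the original article; the variable names are my own and the constants are rounded CODATA values) evaluates the expression M_limit = (ω₃⁰ √(3π)/2) (ħc/G)^(3/2) (μₑ m_H)⁻² for μₑ = 2, appropriate to a carbon–oxygen white dwarf:

```python
import math

# Physical constants (SI units, rounded CODATA values)
hbar = 1.0545718e-34   # reduced Planck constant, J*s
c = 2.99792458e8       # speed of light, m/s
G = 6.6743e-11         # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.67353e-27      # mass of the hydrogen atom, kg
M_sun = 1.98847e30     # solar mass, kg

omega3_0 = 2.018236    # Lane-Emden constant for the n = 3 polytrope
mu_e = 2.0             # average molecular weight per electron (C/O composition)

# M_limit = (omega3_0 * sqrt(3*pi)/2) * (hbar*c/G)^(3/2) / (mu_e*m_H)^2
m_limit = (omega3_0 * math.sqrt(3 * math.pi) / 2) \
    * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2

print(f"Chandrasekhar limit ≈ {m_limit / M_sun:.2f} solar masses")  # ≈ 1.43
```

Note how the combination (ħc/G)^(3/2) is the cube of the Planck mass, so the limit is indeed of order M_Pl³/m_H², as stated above.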
(1) every local church is a full realization in miniature of the entire Church of Jesus Christ; and (2) the Church, while on earth, besides the local church, can only be invisible and ideal. While other theories may insist on the truth of the former, the latter precept of congregationalism gives the entire theory a unique character among plans of church government. There is no other reference than the local congregation for the "visible church" in Congregationalism. And yet, the connection of all Christians is also asserted, albeit in a way that defenders of this view usually decline, often intentionally, to elaborate more clearly or consistently. This first, foundational principle by which congregationalism is guided results in confining it to operate with the consent of each gathering of believers. Although "congregational rule" may seem to suggest that pure democracy reigns in congregational churches, this is seldom the case. It is granted, with few exceptions (namely in some Anabaptist churches), that God has given the government of the Church into the hands of an ordained ministry. What makes congregationalism unique is its system of checks and balances, which constrains the authority of the clergy, the lay officers, and the members. Most importantly, the boundaries of the powers of the ministers and church officers are set by clear and constant reminders of the freedoms guaranteed by the Gospel to the laity, collectively and individually. With that freedom comes the responsibility upon each member to govern himself or herself under Christ. This requires lay people to exercise great charity and patience in debating issues with one another and to seek the glory and service of God as the foremost consideration in all of their decisions. The authority of all of the people, including the officers, is limited in the local congregation by a definition of union, or a covenant, by which the terms of their cooperation together are spelled out and agreed to. 
This might be something as minimal as a charter specifying a handful of doctrines and behavioral expectations, or even a statement only guaranteeing specific freedoms. Or, it may be a constitution describing a comprehensive doctrinal system and specifying terms under which the local church is connected to other local churches, to which participating congregations give their assent. In congregationalism, rather uniquely, the church is understood to be a truly voluntary association. Finally, the congregational theory strictly forbids ministers from ruling their local churches by themselves. Not only does the minister serve by the approval of the congregation, but committees further constrain the pastor from exercising power without the consent of either the particular committee or the entire congregation. It is a contradiction of the congregational principle if a minister makes decisions concerning the congregation without the vote of these other officers. The other officers may be called deacons, elders, or a session (borrowing Presbyterian terminology), or even a vestry (borrowing the Anglican term) – it is not their label that is important to the theory, but rather their lay status and their equal vote, together with the pastor, in deciding the issues of the church. While other forms of church government are more likely to define tyranny as "the imposition of unjust rule", a congregationally governed church would more likely define tyranny as "transgression of liberty" or equivalently, "rule by one man". To a congregationalist, no abuse of authority is worse than the concentration of all decisive power in the hands of one ruling body, or one person. Following this sentiment, congregationalism has evolved over time to include even more participation of the congregation, more kinds of lay committees to whom various tasks are apportioned, and more decisions subject to the vote of the entire membership.
One of the most notable characteristics of New England (or British)-heritage Congregationalism has been its consistent leadership role in the formation of "unions" with other churches. Such sentiments especially grew strong in the late 19th and early 20th centuries, when ecumenism evolved out of a liberal, non-sectarian perspective on relations to other Christian groups that accompanied the relaxation of Calvinist stringencies held by earlier generations. The congregationalist theory of independence within a union has been a cornerstone of most ecumenical movements since the 18th century. Baptist churches Most Baptists hold that no denominational or ecclesiastical organization has inherent authority over an individual Baptist church. Churches can properly relate to each other under this polity only through voluntary cooperation, never by any sort of coercion. Furthermore, this Baptist polity calls for freedom from governmental control. Exceptions to this local form of governance include the Episcopal Baptists that have an episcopal system. Independent Baptist churches have no formal organizational structure above the level of the local congregation. More generally among Baptists, a variety of parachurch agencies and evangelical educational institutions may be supported generously or not at all, depending entirely upon the local congregation's customs and predilections. Usually doctrinal conformity is held as a first consideration when a church makes a decision to grant or decline financial contributions to such agencies, which are legally external and separate from the congregations they serve. These practices also find currency among non-denominational fundamentalist or charismatic fellowships, many of which derive from Baptist origins, culturally if not theologically. Most Southern Baptist and National Baptist congregations, by contrast, generally relate more closely to external groups such as mission agencies and educational institutions than do those of independent persuasion.
However, they adhere to a very similar ecclesiology, refusing to permit outside control or oversight of the affairs of the local church. Churches of Christ Ecclesiastical government is congregational rather than denominational. Churches of Christ purposefully have no central headquarters, councils, or other organizational structure above the local church level.
Rather, the independent congregations are a network with each congregation participating at its own discretion in various means of service and fellowship with other congregations. Churches of Christ are linked by their shared commitment to restoration principles. Congregations are generally overseen by a plurality of elders (also known in some congregations as shepherds, bishops, or pastors) who are sometimes assisted in the administration of various works by deacons. Elders are generally seen as responsible for the spiritual welfare of the congregation, while deacons are seen as responsible for the non-spiritual needs of the church. Deacons serve under the supervision of the elders, and are often assigned to direct specific ministries. Successful service as a deacon is often seen as preparation for the eldership. Elders and deacons are chosen by the congregation based on the qualifications found in 1 Timothy 3 and Titus 1. Congregations look for elders who have a mature enough understanding of scripture to enable them to supervise the minister and to teach, as well as to perform governance functions. In the absence of willing men who meet these qualifications, congregations are sometimes overseen by an unelected committee of the congregation's men. While the early Restoration Movement had a tradition of itinerant preachers rather than "located Preachers", during the 20th century a long-term, formally trained congregational minister became the norm among Churches of Christ. Ministers are understood to serve under the oversight of the elders. While the presence of a long-term professional minister has sometimes created "significant de facto ministerial authority" and led to conflict between the minister and the elders, the eldership has remained the "ultimate locus of authority in the congregation".
There is a small group within the Churches of Christ which opposes a single preacher and, instead, rotates preaching duties among qualified elders (this group tends to overlap with groups which oppose Sunday School and also have only one cup
the 1890s. Volunteer cavalry regiments like the Rough Riders consisted of horsemen such as cowboys, ranchers, and other outdoorsmen who served as cavalry in the United States military. First World War Pre-war developments At the beginning of the 20th century all armies still maintained substantial cavalry forces, although there was contention over whether their role should revert to that of mounted infantry (the historic dragoon function). Following the experience of the South African War of 1899–1902 (where mounted Boer citizen commandos fighting on foot from cover proved more effective than regular cavalry), the British Army withdrew lances for all but ceremonial purposes and placed a new emphasis on training for dismounted action in 1903. An Army Order dated 1909, however, instructed that the six British lancer regiments then in existence resume use of this impressive but obsolete weapon for active service. In 1882 the Imperial Russian Army converted all its line hussar and lancer regiments to dragoons, with an emphasis on mounted infantry training. In 1910 these regiments reverted to their historic roles, designations and uniforms. By 1909 official regulations dictating the role of the Imperial German cavalry had been revised to indicate an increasing realization of the realities of modern warfare. The massive cavalry charge in three waves which had previously marked the end of annual maneuvers was discontinued, and a new emphasis was placed in training on scouting, raiding, and pursuit rather than on involvement in the main battle. The perceived importance of cavalry was, however, still evident, with thirteen new regiments of mounted rifles (Jäger zu Pferde) being raised shortly before the outbreak of war in 1914. In spite of significant experience in mounted warfare in Morocco during 1908–14, the French cavalry remained a highly conservative institution. The traditional tactical distinctions between heavy, medium, and light cavalry branches were retained.
French cuirassiers wore breastplates and plumed helmets unchanged from the Napoleonic period during the early months of World War I. Dragoons were similarly equipped, though they wore no cuirasses and carried lances. Light cavalry were described as being "a blaze of colour". French cavalry of all branches were well mounted and were trained to change position and charge at full gallop. One weakness in training was that French cavalrymen seldom dismounted on the march, and their horses suffered heavily from raw backs in August 1914. Opening stages Europe 1914 In August 1914 all combatant armies still retained substantial numbers of cavalry, and the mobile nature of the opening battles on both the Eastern and Western Fronts provided a number of instances of traditional cavalry actions, though on a smaller and more scattered scale than those of previous wars. The 110 regiments of Imperial German cavalry, while as colourful and traditional as any in peacetime appearance, had adopted a practice of falling back on infantry support when any substantial opposition was encountered. These cautious tactics aroused derision amongst their more conservative French and Russian opponents but proved appropriate to the new nature of warfare. A single attempt by the German army, on 12 August 1914, to use six regiments of massed cavalry to cut off the Belgian field army from Antwerp foundered when they were driven back in disorder by rifle fire. The two German cavalry brigades involved lost 492 men and 843 horses in repeated charges against dismounted Belgian lancers and infantry. One of the last recorded charges by French cavalry took place on the night of 9/10 September 1914, when a squadron of the 16th Dragoons overran a German airfield at Soissons, while suffering heavy losses.
Once the front lines stabilised on the Western Front with the start of trench warfare, a combination of barbed wire, uneven muddy terrain, machine guns and rapid-fire rifles proved deadly to horse-mounted troops, and by early 1915 most cavalry units were no longer seeing front-line action. On the Eastern Front a more fluid form of warfare arose from flat open terrain favorable to mounted warfare. On the outbreak of war in 1914 the bulk of the Russian cavalry was deployed at full strength in frontier garrisons, and during the period when the main armies were mobilizing, scouting and raiding into East Prussia and Austrian Galicia were undertaken by mounted troops trained to fight with sabre and lance in the traditional style. On 21 August 1914 the 4th Austro-Hungarian Kavalleriedivision fought a major mounted engagement at Jaroslavic with the Russian 10th Cavalry Division, in what was arguably the final historic battle to involve thousands of horsemen on both sides. While this was the last massed cavalry encounter on the Eastern Front, the absence of good roads limited the use of mechanized transport, and even the technologically advanced Imperial German Army continued to deploy up to twenty-four horse-mounted divisions in the East as late as 1917.

Europe 1915–18

For the remainder of the War on the Western Front cavalry had virtually no role to play. The British and French armies dismounted many of their cavalry regiments and used them in infantry and other roles: the Life Guards, for example, spent the last months of the War as a machine gun corps, and the Australian Light Horse served as light infantry during the Gallipoli campaign. In September 1914 cavalry comprised 9.28% of the total manpower of the British Expeditionary Force in France; by July 1918 this proportion had fallen to 1.65%. As early as the first winter of the war most French cavalry regiments had dismounted a squadron each for service in the trenches.
The French cavalry numbered 102,000 in May 1915 but had been reduced to 63,000 by October 1918. The German Army dismounted nearly all its cavalry in the West, maintaining only one mounted division on that front by January 1917. Italy entered the war in 1915 with thirty regiments of line cavalry, lancers and light horse. While employed effectively against their Austro-Hungarian counterparts during the initial offensives across the Isonzo River, the Italian mounted forces ceased to have a significant role as the front shifted into mountainous terrain. By 1916 most cavalry machine-gun sections and two complete cavalry divisions had been dismounted and seconded to the infantry. Some cavalry were retained as mounted troops behind the lines in anticipation of a penetration of the opposing trenches that it seemed would never come. Tanks, introduced on the Western Front by the British in September 1916 during the Battle of the Somme, had the capacity to achieve such breakthroughs but did not have the reliable range to exploit them. In their first major use at the Battle of Cambrai (1917), the plan was for a cavalry division to follow behind the tanks; however, it was unable to cross a canal because a tank had broken the only bridge. While no longer the main frontline arm, cavalry was still used in large numbers on rare occasions for offensives, such as the Battle of Caporetto and the Battle of Moreuil Wood. It was not until the German Army had been forced to retreat in the Hundred Days Offensive of 1918 that cavalry were again able to operate in their intended role. There was a successful charge by the British 7th Dragoon Guards on the last day of the war. In the wider spaces of the Eastern Front a more fluid form of warfare continued and there was still a use for mounted troops. Some wide-ranging actions were fought, again mostly in the early months of the war.
Even here, however, the value of cavalry was overrated, and the maintenance of large mounted formations at the front by the Russian Army put a major strain on the railway system, to little strategic advantage. In February 1917 the Russian regular cavalry (exclusive of Cossacks) was reduced by nearly a third from its peak number of 200,000, as two squadrons of each regiment were dismounted and incorporated into additional infantry battalions. Their Austro-Hungarian opponents, plagued by a shortage of trained infantry, had been obliged to progressively convert most horse cavalry regiments to dismounted rifle units starting in late 1914.

Middle East

In the Middle East, during the Sinai and Palestine Campaign, mounted forces (British, Indian, Ottoman, Australian, Arab and New Zealand) retained an important strategic role both as mounted infantry and cavalry. In Egypt mounted infantry formations like the New Zealand Mounted Rifles Brigade and the Australian Light Horse of the ANZAC Mounted Division, operating as mounted infantry, drove German and Ottoman forces back from Romani to Magdhaba and Rafa and out of the Egyptian Sinai Peninsula in 1916. After a stalemate on the Gaza–Beersheba line between March and October 1917, Beersheba was captured by the Australian Mounted Division's 4th Light Horse Brigade. Their mounted charge succeeded after a coordinated attack by the British infantry and Yeomanry cavalry and the Australian and New Zealand Light Horse and Mounted Rifles brigades. A series of coordinated attacks by these Egyptian Expeditionary Force infantry and mounted troops were also successful at the Battle of Mughar Ridge, during which the British infantry divisions and the Desert Mounted Corps drove two Ottoman armies back to the Jaffa–Jerusalem line. The infantry, with mainly dismounted cavalry and mounted infantry, fought in the Judean Hills to eventually almost encircle Jerusalem, which was occupied shortly afterwards.
During a pause in operations necessitated by the German spring offensive of 1918 on the Western Front, joint infantry and mounted infantry attacks towards Amman and Es Salt resulted in retreats back to the Jordan Valley, which continued to be occupied by mounted divisions during the summer of 1918. The Australian Mounted Division had by now been armed with swords. In September, the successful breaching of the Ottoman line on the Mediterranean coast by the British Empire infantry of XXI Corps was followed by cavalry attacks by the 4th Cavalry Division, the 5th Cavalry Division and the Australian Mounted Division, which almost encircled two Ottoman armies in the Judean Hills, forcing their retreat. Meanwhile, Chaytor's Force of infantry and mounted infantry in the ANZAC Mounted Division held the Jordan Valley, covering the right flank, and later advanced eastwards to capture Es Salt and Amman along with half of a third Ottoman army. The 4th Cavalry Division and the Australian Mounted Division, followed by the 5th Cavalry Division, then pursued the retreating Ottoman forces to Damascus. Armoured cars and 5th Cavalry Division lancers were continuing the pursuit of Ottoman units north of Aleppo when the Armistice of Mudros was signed by the Ottoman Empire.

Post–World War I

A combination of military conservatism in almost all armies and post-war financial constraints prevented the lessons of 1914–1918 from being acted on immediately. There was a general reduction in the number of cavalry regiments in the British, French, Italian and other Western armies, but it was still argued with conviction (for example in the 1922 edition of the Encyclopædia Britannica) that mounted troops had a major role to play in future warfare. The 1920s saw an interim period during which cavalry remained a proud and conspicuous element of all major armies, though much less so than prior to 1914. Cavalry was extensively used in the Russian Civil War and the Soviet–Polish War.
The last major cavalry battle was the Battle of Komarów in 1920, between Poland and the Russian Bolsheviks. Colonial warfare in Morocco, Syria, the Middle East and the North West Frontier of India provided some opportunities for mounted action against enemies lacking advanced weaponry. The post-war German Army (Reichsheer) was permitted a large proportion of cavalry (18 regiments, or 16.4% of total manpower) under the conditions of the Treaty of Versailles. The British Army mechanised all cavalry regiments between 1929 and 1941, converting them from horses to armoured vehicles; together with the Royal Tank Regiment they formed the Royal Armoured Corps. The U.S. Cavalry abandoned its sabres in 1934 and commenced the conversion of its horsed regiments to mechanized cavalry, starting with the First Regiment of Cavalry in January 1933. During the Turkish War of Independence, Turkish cavalry under General Fahrettin Altay was instrumental in the Kemalist victory over the invading Greek Army in 1922 at the Battle of Dumlupınar. The V. Cavalry Division was able to slip behind the Greek army, cutting off all communication and supply lines as well as all avenues of retreat, forcing the surrender of the remaining Greek forces; this may have been the last time in history that cavalry played a decisive role in the outcome of a battle. During the 1930s the French Army experimented with integrating mounted and mechanised cavalry units into larger formations. Dragoon regiments were converted to motorised infantry (trucks and motorcycles), and cuirassiers to armoured units, while light cavalry (Chasseurs à Cheval, Hussars and Spahis) remained as mounted sabre squadrons. The theory was that mixed forces comprising these diverse units could utilise the strengths of each according to circumstances. In practice mounted troops proved unable to keep up with fast-moving mechanised units over any distance.
The thirty-nine cavalry regiments of the British Indian Army were reduced to twenty-one as the result of a series of amalgamations immediately following World War I. The new establishment remained unchanged until 1936, when three regiments were redesignated as permanent training units, each with six still-mounted regiments linked to it. In 1938 the process of mechanization began with the conversion of a full cavalry brigade (two Indian regiments and one British) to armoured car and tank units. By the end of 1940 all of the Indian cavalry had been mechanized, initially, in the majority of cases, as motorized infantry transported in 15-cwt trucks. The last horsed regiment of the British Indian Army (other than the Viceregal Bodyguard and some Indian States Forces regiments) was the 19th King George's Own Lancers, which had its final mounted parade at Rawalpindi on 28 October 1939. This unit still exists in the Pakistan Army as an armored regiment.

World War II

While most armies still maintained cavalry units at the outbreak of World War II in 1939, significant mounted action was largely restricted to the Polish, Balkan, and Soviet campaigns. Rather than charge their mounts into battle, cavalry units were either used as mounted infantry (using horses to move into position and then dismounting for combat) or as reconnaissance units (especially in areas not suited to tracked or wheeled vehicles).

Polish

A popular myth is that Polish cavalry armed with lances charged German tanks during the September 1939 campaign. This arose from misreporting of a single clash on 1 September near Krojanty, when two squadrons of the Polish 18th Lancers armed with sabres scattered German infantry before being caught in the open by German armoured cars. Two examples illustrate how the myth developed. First, because motorised vehicles were in short supply, the Poles used horses to pull anti-tank weapons into position.
Second, there were a few incidents when Polish cavalry was trapped by German tanks and attempted to fight free. However, this did not mean that the Polish army chose to attack tanks with horse cavalry. Later, on the Eastern Front, the Red Army did deploy cavalry units effectively against the Germans. A more correct term would be "mounted infantry" instead of "cavalry", as horses were primarily used as a means of transportation, for which they were very suitable in view of the very poor road conditions in pre-war Poland. Another myth describes Polish cavalry as being armed with both sabres and lances; in fact lances were used for peacetime ceremonial purposes only, and the primary weapon of the Polish cavalryman in 1939 was a rifle. Individual equipment did include a sabre, probably because of well-established tradition, and in melee combat this secondary weapon would probably have been more effective than a rifle and bayonet. Moreover, the Polish cavalry brigade order of battle in 1939 included, apart from the mounted soldiers themselves, light and heavy machine guns (wheeled), the model 35 anti-tank rifle, anti-aircraft weapons, anti-tank artillery such as the Bofors 37 mm gun, and light and scout tanks. The last mutual charge of cavalry against cavalry in Europe took place in Poland during the Battle of Krasnobród, when Polish and German cavalry units clashed with each other. The last classical cavalry charge of the war took place on March 1, 1945 during the Battle of Schoenfeld, by the 1st "Warsaw" Independent Cavalry Brigade. Infantry and tanks had been employed to little effect against the German position; both floundered in the open wetlands and were dominated by infantry and anti-tank fire from the German fortifications on the forward slope of Hill 157, overlooking the wetlands.
The Germans had not taken cavalry into consideration when fortifying their position, and this, combined with the swift assault of the "Warsaw" brigade, allowed the cavalry to overrun the German anti-tank guns and consolidate into an attack on the village itself, now supported by infantry and tanks.

Greek

The Italian invasion of Greece in October 1940 saw mounted cavalry used effectively by the Greek defenders along the mountainous frontier with Albania. Three Greek cavalry regiments (two mounted and one partially mechanized) played an important role in the Italian defeat in this difficult terrain.

Soviet

The contribution of Soviet cavalry to the development of modern military operational doctrine and its importance in defeating Nazi Germany has been eclipsed by the higher profile of tanks and airplanes. Despite the view portrayed by German propaganda, Soviet cavalry contributed significantly to the defeat of the Axis armies. Their contributions included being the most mobile troops in the early stages, when trucks and other equipment were low in quality, as well as providing cover for retreating forces. Considering their relatively limited numbers, the Soviet cavalry played a significant role in giving Germany its first real defeats in the early stages of the war. The continuing potential of mounted troops was demonstrated during the Battle of Moscow, against Guderian and the powerful central German 9th Army. Cavalry were amongst the first Soviet units to complete the encirclement in the Battle of Stalingrad, thus sealing the fate of the German 6th Army. Mounted Soviet forces also played a role in the encirclement of Berlin, with some Cossack cavalry units reaching the Reichstag in April 1945. Throughout the war they performed important tasks such as the capture of bridgeheads, considered one of the hardest jobs in battle, often doing so with inferior numbers. For instance the 8th Guards Cavalry Regiment of the 2nd Guards Cavalry Division often fought outnumbered against the best German units.
By the final stages of the war only the Soviet Union was still fielding mounted units in substantial numbers, some in combined mechanized and horse units. The advantage of this approach was that in exploitation mounted infantry could keep pace with advancing tanks. Other factors favoring the retention of mounted forces included the high quality of the Russian Cossacks, who made up about half of all cavalry, and the relative lack of roads suitable for wheeled vehicles in many parts of the Eastern Front. Another consideration was that the logistic capacity required to support very large motorized forces exceeded that necessary for mounted troops. The main use of the Soviet cavalry involved infiltration through front lines with subsequent deep raids, which disorganized German supply lines. Another role was the pursuit of retreating enemy forces during major frontline operations and breakthroughs.

Italian

The last mounted sabre charge by Italian cavalry occurred on August 24, 1942 at Isbuscenski (Russia), when a squadron of the Savoia Cavalry Regiment charged the 812th Siberian Infantry Regiment. The remainder of the regiment, together with the Novara Lancers, made a dismounted attack in an action that ended with the retreat of the Russians after heavy losses on both sides. The final Italian cavalry action occurred on October 17, 1942 in Poloj (now Croatia), by a squadron of the Alexandria Cavalry Regiment against a large group of Yugoslav partisans.

Other Axis

Romanian, Hungarian and Italian cavalry were dispersed or disbanded following the retreat of the Axis forces from Russia. Germany still maintained some mounted (mixed with bicycles) SS and Cossack units until the last days of the War.

Finnish

Finland used mounted troops against Russian forces effectively in forested terrain during the Continuation War. The last Finnish cavalry unit was not disbanded until 1947.

United States

The U.S.
Army's last horse cavalry actions were fought during World War II: a) by the 26th Cavalry Regiment, a small mounted regiment of Philippine Scouts, which fought the Japanese during the retreat down the Bataan peninsula until it was effectively destroyed by January 1942; and b) on captured German horses by the mounted reconnaissance section of the U.S. 10th Mountain Division in a spearhead pursuit of the German Army across the Po Valley in Italy in April 1945. The last horsed U.S. Cavalry formation (the Second Cavalry Division) was dismounted in March 1944.

British Empire

All British Army cavalry regiments had been mechanised since 1 March 1942, when the Queen's Own Yorkshire Dragoons (Yeomanry) was converted to a motorised role following mounted service against the Vichy French in Syria the previous year. The final cavalry charge by British Empire forces occurred on 21 March 1942, when a 60-strong patrol of the Burma Frontier Force encountered Japanese infantry near Toungoo airfield in central Myanmar. The Sikh sowars of the Frontier Force cavalry, led by Captain Arthur Sandeman of The Central India Horse (21st King George V's Own Horse), charged in the old style with sabres and most were killed.

Mongolia

In the early stages of World War II, mounted units of the Mongolian People's Army were involved in the Battle of Khalkhin Gol against invading Japanese forces. Soviet forces under the command of Georgy Zhukov, together with Mongolian forces, defeated the Japanese Sixth Army and effectively ended the Soviet–Japanese Border Wars. After the Soviet–Japanese Neutrality Pact of 1941, Mongolia remained neutral throughout most of the war, but its geographical situation meant that the country served as a buffer between Japanese forces and the Soviet Union. In addition to keeping around 10% of the population under arms, Mongolia provided half a million trained horses for use by the Soviet Army.
In 1945 a partially mounted Soviet–Mongolian Cavalry Mechanized Group played a supporting role on the western flank of the Soviet invasion of Manchuria. The last active service seen by cavalry units of the Mongolian Army occurred in 1946–1948, during border clashes between Mongolia and the Republic of China.

Post–World War II to the present day

While most modern "cavalry" units have some historic connection with formerly mounted troops, this is not always the case. The modern Irish Defence Forces (DF) include a "Cavalry Corps" equipped with armoured cars and Scorpion tracked combat reconnaissance vehicles. The DF has never included horse cavalry since its establishment in 1922 (other than a small mounted escort of Blue Hussars drawn from the Artillery Corps when required for ceremonial occasions); however, the mystique of the cavalry is such that the name has been introduced for what was always a mechanised force. Some engagements in late 20th and early 21st century guerrilla wars involved mounted troops, particularly against partisan or guerrilla fighters in areas with poor transport infrastructure. Such units were not used as cavalry but rather as mounted infantry. Examples occurred in Afghanistan, Portuguese Africa and Rhodesia. The French Army used existing mounted squadrons of Spahis to a limited extent for patrol work during the Algerian War (1954–62). The Swiss Army maintained a mounted dragoon regiment for combat purposes until 1973. The Portuguese Army used horse-mounted cavalry with some success in the wars of independence in Angola and Mozambique in the 1960s and 1970s. During the 1964–79 Rhodesian Bush War the Rhodesian Army created an elite mounted infantry unit called Grey's Scouts to fight unconventional actions against the rebel forces of Robert Mugabe and Joshua Nkomo. The horse-mounted infantry of the Scouts were effective and reportedly feared by their opponents in the rebel African forces.
Since the outbreak of the Afghan Civil War in 1978 there have been several instances of horse-mounted combat. Central and South American armies maintained mounted cavalry for longer than those of Asia, Europe, or North America. The Mexican Army included a number of horse-mounted cavalry regiments as late as the mid-1990s, and the Chilean Army had five such regiments in 1983 as mounted mountain troops. The Soviet Army retained horse cavalry divisions until 1955. At the dissolution of the Soviet Union in 1991, there was still an independent horse-mounted cavalry squadron in Kyrgyzstan.

Operational horse cavalry

Today the Indian Army's 61st Cavalry is reported to be the largest existing horse-mounted cavalry unit still having operational potential. It was raised in 1951 from the amalgamated state cavalry squadrons of Gwalior, Jodhpur, and Mysore. While primarily utilised for ceremonial purposes, the regiment can be deployed for internal security or police roles if required. The 61st Cavalry and the President's Body Guard parade in full dress uniform in New Delhi each year in what is probably the largest assembly of traditional cavalry still to be seen in the world. Both the Indian and the Pakistani armies maintain armoured regiments with the titles of Lancers or Horse, dating back to the 19th century. As of 2007, the Chinese People's Liberation Army employed two battalions of horse-mounted border guards in Xinjiang for border patrol purposes. PLA mounted units last saw action during border clashes with Vietnam in the 1970s and 1980s, after which most cavalry units were disbanded as part of major military downsizing in the 1980s. In the wake of the 2008 Sichuan earthquake, there were calls to rebuild the army horse inventory for disaster relief in difficult terrain. Subsequent Chinese media reports confirm that the PLA maintains operational horse cavalry at squadron strength in Xinjiang and Inner Mongolia for scouting, logistical, and border security purposes.
The Chilean Army still maintains a mixed armoured cavalry regiment, with elements of it acting as mounted mountain exploration troops, based in the city of Angol as part of the III Mountain Division, and another independent exploration cavalry detachment in the town of Chaitén. The rugged mountain terrain calls for horses specially suited to it. The Argentine Army has two mounted cavalry units: the Regiment of Horse Grenadiers, which performs mostly ceremonial duties but is also responsible for the president's security (in this case acting as infantry), and the 4th Mountain Cavalry Regiment (which comprises both horse and light armoured squadrons), stationed in San Martín de los Andes, where it has an exploration role as part of the 6th Mountain Brigade. Most armoured cavalry units of the Army are considered successors to the old cavalry regiments of the Independence Wars, and keep their traditional names (such as Hussars, Cuirassiers and Lancers) and uniforms. Equestrian training remains an important part of their tradition, especially among officers.

Ceremonial horse cavalry and armored cavalry retaining traditional titles

Cavalry or mounted gendarmerie units continue to be maintained for purely or primarily ceremonial purposes by the Algerian, Argentine, Bolivian, Brazilian, British, Bulgarian, Canadian, Chilean, Colombian, Danish, Dutch, Finnish, French, Hungarian, Indian, Italian, Jordanian, Malaysian, Moroccan, Nepalese, Nigerian, North Korean, Omani, Pakistani, Panamanian, Paraguayan, Peruvian, Polish, Portuguese, Russian, Senegalese, Spanish, Swedish, Thai, Tunisian, Turkmen, United States, Uruguayan and Venezuelan armed forces. A number of armoured regiments in the British Army retain the historic designations of Hussars, Dragoons, Light Dragoons, Dragoon Guards, Lancers and Yeomanry.
Only the Household Cavalry (consisting of the Life Guards' mounted squadron, The Blues and Royals' mounted squadron, the State Trumpeters of The Household Cavalry and the Household Cavalry Mounted Band) are maintained for mounted (and dismounted) ceremonial duties in London. The French Army still has regiments with the historic designations of Cuirassiers, Hussars, Chasseurs, Dragoons and Spahis. Only the cavalry of the Republican Guard and a ceremonial fanfare detachment of trumpeters for the cavalry/armoured branch as a whole are now mounted. In the Canadian Army, a number of regular and reserve units have cavalry roots, including The Royal Canadian Hussars (Montreal), the Governor General's Horse Guards, Lord Strathcona's Horse, The British Columbia Dragoons, The Royal Canadian Dragoons, and the South Alberta Light Horse. Of these, only Lord Strathcona's Horse and the Governor General's Horse Guards maintain an official ceremonial horse-mounted cavalry troop or squadron. The modern Pakistan army maintains about 40 armoured regiments with the historic titles of Lancers, Cavalry or Horse. Six of these date back to the 19th century, although only the President's Body Guard remains horse-mounted. In 2002 the Army of the Russian Federation reintroduced a ceremonial mounted squadron wearing historic uniforms. Both the Australian and New Zealand armies follow the British practice of maintaining traditional titles (Light Horse or Mounted Rifles) for modern mechanised units. However, neither country retains a horse-mounted unit. Several armored units of the modern United States Army retain the designation of "armored cavalry". The United States also has "air cavalry" units equipped with helicopters. The Horse Cavalry Detachment of the U.S. Army's 1st Cavalry Division, made up of active duty soldiers, still functions as an active unit, trained to approximate the weapons, tools, equipment and techniques used by the United States Cavalry in the 1880s. 
Non-combat support roles

The First Troop Philadelphia City Cavalry is a volunteer unit within the Pennsylvania Army National Guard which serves as a combat force when in federal service but acts in a mounted disaster relief role when in state service. In addition, the Parsons' Mounted Cavalry is a Reserve Officer Training Corps unit which forms part of the Corps of Cadets at Texas A&M University. Valley Forge Military Academy and College also has a mounted company, known as D-Troop. Some individual U.S. states maintain cavalry units as a part of their respective state defense forces. The Maryland Defense Force includes a cavalry unit, Cavalry Troop A, which serves primarily as a ceremonial unit. The unit's training includes a saber qualification course based upon the 1926 U.S. Army course. Cavalry Troop A also assists other Maryland agencies as a rural search and rescue asset. In Massachusetts, the National Lancers trace their lineage to a volunteer cavalry militia unit established in 1836 and are currently organized as an official part of the Massachusetts Organized Militia. The National Lancers maintain three units, Troops A, B, and C, which serve in a ceremonial role and assist in search and rescue missions. In July 2004, the National Lancers were ordered into active state service to guard Camp Curtis Guild during the 2004 Democratic National Convention. The Governor's Horse Guard of Connecticut maintains two companies which are trained in urban crowd control. In 2020, the California State Guard stood up the 26th Mounted Operations Detachment, a search-and-rescue cavalry unit.

Social status

From the beginning of civilization to the 20th century, ownership of heavy cavalry horses has been a mark of wealth amongst settled peoples. A cavalry horse involves considerable expense in breeding, training, feeding, and equipment, and has very little productive use except as a mode of transport.
For this reason, and because of their often decisive military role, the cavalry has typically been associated with high social status. This was most clearly seen in the feudal system, where a lord was expected to enter combat armored and on horseback and bring with him an entourage of lightly armed peasants on foot. If landlords and peasant levies came into conflict, the poorly trained footmen would be ill-equipped to defeat armored knights. In later national armies, service as an officer in the cavalry was generally a badge of high social status. For instance, prior to 1914 most officers of British cavalry regiments came from a socially privileged background, and the considerable expenses associated with their role generally required private means, even after it became possible for officers of the line infantry regiments to live on their pay. Options open to poorer cavalry officers in the various European armies included service with less fashionable (though often highly professional) frontier or colonial units. These included the British Indian cavalry, the Russian Cossacks and the French Chasseurs d'Afrique. During the 19th and early 20th centuries most monarchies maintained a mounted cavalry element in their royal or imperial guards. These ranged from small units providing ceremonial escorts and palace guards through to large formations intended for active service. The mounted escort of the Spanish Royal Household provided an example of the former, and the twelve cavalry regiments of the Prussian Imperial Guard an example of the latter. In either case the officers of such units were likely to be drawn from the aristocracies of their respective societies.

On film

Some sense of the noise and power of a cavalry charge can be gained from the 1970 film Waterloo, which featured some 2,000 cavalrymen, some of them Cossacks.
It included detailed displays of the horsemanship required to manage animals and weapons in large numbers at the gallop (unlike the real battle of Waterloo, where deep mud significantly slowed the horses). The Gary Cooper movie They Came to Cordura contains a scene of a cavalry regiment deploying from march to battle line formation. A smaller-scale cavalry charge can be seen in The Lord of the Rings: The Return of the King (2003); although the finished scene has substantial computer-generated imagery, raw footage and reactions of the riders are shown in the Extended Version DVD Appendices. Other films that show cavalry actions include:

The Charge of the Light Brigade, about the Battle of Balaclava in the Crimean War
40,000 Horsemen, about the Australian Light Horse during the Sinai and Palestine campaign of World War I
The Lighthorsemen, about the Battle of Beersheba, 1917
War Horse, about the British cavalry in Europe during World War I
Hubal, about the last months (September 1939 – April 1940) of Poland's first World War II guerrilla, Major Henryk Dobrzański, "Hubal"
The Patriot, which includes light cavalry usage
And Quiet Flows the Don, which depicts Don Cossacks during World War I
Kingdom of Heaven, which includes a cavalry charge during the Siege of Kerak

Examples

Types

Heavy cavalry: Cataphracts, Cuirassiers, Polish winged hussars
Light cavalry: Hobelars (medieval light horse), Hussars, Numidian cavalry, Soldados de cuera, Uhlans, Horse archers
Shock troops: Companion cavalry, Lancers
Mounted infantry: Dragoons
Military communities: Cossacks, Equites / Roman cavalry, Kalmyks, Mamluks, Polish cavalry
Chariots: Scythed chariots
Elephantry, a cavalry unit containing elephant-mounted troops
Camel cavalry
Mounted police: Royal Canadian Mounted Police
Dubious: Moose cavalry, cavalry mounted on moose (European elk)

Units

2nd Armored Cavalry Regiment (United States)
278th Armored Cavalry Regiment (United States)
Australian Light Horse

…are the Assakenoi and Aspasioi of the Classical writings, and the Ashvakayanas and Ashvayanas in Pāṇini's Ashtadhyayi. The Assakenoi had faced Alexander with 30,000 infantry, 20,000 cavalry and 30 war elephants. Scholars have identified the Assakenoi and Aspasioi clans of the Kunar and Swat valleys as a section of the Kambojas. These hardy tribes had offered stubborn resistance to Alexander (c. 326 BC) during the latter's campaign of the Kabul, Kunar and Swat valleys, and had even extracted the praise of Alexander's historians. These highlanders, designated as "parvatiya Ayudhajivinah" in Pāṇini's Astadhyayi, were rebellious, fiercely independent and freedom-loving cavalrymen who never easily yielded to any overlord. The Sanskrit drama Mudra-rakashas by Visakha Dutta and the Jaina work Parishishtaparvan refer to Chandragupta's (c. 320 BC – c. 298 BC) alliance with the Himalayan king Parvataka. The Himalayan alliance gave Chandragupta a formidable composite army made up of the cavalry forces of the Shakas, Yavanas, Kambojas, Kiratas, Parasikas and Bahlikas, as attested by the Mudra-Rakshasa (2).
These hordes had helped Chandragupta Maurya defeat the ruler of Magadha and placed Chandragupta on the throne, thus laying the foundations of the Mauryan Dynasty in Northern India. The cavalry of the Hunas and the Kambojas is also attested in the Raghu Vamsa, an epic poem of the Sanskrit poet Kalidasa. The Raghu of Kalidasa is believed to be Chandragupta II (Vikramaditya) (375–413/15 AD) of the well-known Gupta Dynasty. As late as the mediaeval era, Kamboja cavalry also formed part of the Gurjara-Pratihara armed forces from the eighth to the 10th centuries AD. They had come to Bengal with the Pratiharas when the latter conquered part of the province. The ancient Kambojas organised military sanghas and shrenis (corporations) to manage their political and military affairs, as the Arthashastra of Kautilya as well as the Mahabharata record. They are described as Ayuddha-jivi or Shastr-opajivis (nations-in-arms), which also means that the Kamboja cavalry offered its military services to other nations as well. There are numerous references to Kambojas having been requisitioned as cavalry troopers in ancient wars by outside nations.

Mughal Empire

The Mughal armies (lashkar) were primarily a cavalry force. The elite corps were the ahadi, who provided direct service to the Emperor and acted as guard cavalry. Supplementary cavalry or dakhilis were recruited, equipped and paid by the central state. This was in contrast to the tabinan horsemen, who were the followers of individual noblemen. Their training and equipment varied widely, but they made up the backbone of the Mughal cavalry. Finally, there were tribal irregulars led by and loyal to tributary chiefs. These included Hindus, Afghans and Turks summoned for military service when their autonomous leaders were called on by the Imperial government.

European Middle Ages

As the quality and availability of heavy infantry declined in Europe with the fall of the Roman Empire, heavy cavalry became more effective.
Infantry that lack the cohesion and discipline of tight formations are more susceptible to being broken and scattered by shock combat, the main role of heavy cavalry, which rose to become the dominant force on the European battlefield. As heavy cavalry increased in importance, it became the main focus of military development. The arms and armour for heavy cavalry increased, the high-backed saddle developed, and stirrups and spurs were added, increasing the advantage of heavy cavalry still further. This shift in military importance was reflected in society as well; knights took centre stage both on and off the battlefield. These are considered the "ultimate" in heavy cavalry: well-equipped with the best weapons, state-of-the-art armour from head to foot, leading with the lance in battle in a full-gallop, close-formation "knightly charge" that might prove irresistible, winning the battle almost as soon as it began. But knights remained a minority of total available combat forces; the expense of arms, armour, and horses was affordable only to a select few. While mounted men-at-arms focused on the narrow combat role of shock combat, medieval armies relied on a large variety of foot troops to fulfill all the rest (skirmishing, flank guards, scouting, holding ground, etc.). Medieval chroniclers tended to pay undue attention to the knights at the expense of the common soldiers, which led early students of military history to suppose that heavy cavalry was the only force that mattered on medieval European battlefields. But well-trained and disciplined infantry could defeat knights. Massed English longbowmen triumphed over French cavalry at Crécy, Poitiers and Agincourt, while at Gisors (1188), Bannockburn (1314), and Laupen (1339), foot-soldiers proved they could resist cavalry charges as long as they held their formation. Once the Swiss developed their pike squares for offensive as well as defensive use, infantry started to become the principal arm.
This aggressive new doctrine gave the Swiss victory over a range of adversaries, and their enemies found that the only reliable way to defeat them was by the use of an even more comprehensive combined arms doctrine, as evidenced in the Battle of Marignano. The introduction of missile weapons that required less skill than the longbow, such as the crossbow and hand cannon, also helped shift the focus somewhat from cavalry elites to masses of cheap infantry equipped with easy-to-learn weapons. These missile weapons were used very successfully in the Hussite Wars, in combination with Wagenburg tactics. This gradual rise in the dominance of infantry led to the adoption of dismounted tactics. From the earliest times knights and mounted men-at-arms had frequently dismounted to handle enemies they could not overcome on horseback, such as in the Battle of the Dyle (891) and the Battle of Bremule (1119), but after the 1350s this trend became more marked, with the dismounted men-at-arms fighting as super-heavy infantry with two-handed swords and poleaxes. In any case, warfare in the Middle Ages tended to be dominated by raids and sieges rather than pitched battles, and mounted men-at-arms rarely had any choice other than dismounting when faced with the prospect of assaulting a fortified position.

Greater Middle East

Arabs

The Islamic Prophet Muhammad made use of cavalry in many of his military campaigns, including the Expedition of Dhu Qarad and the expedition of Zaid ibn Haritha in al-Is, which took place in September 627 AD, the fifth month of 6 AH of the Islamic calendar. Early organized Arab mounted forces under the Rashidun caliphate comprised a light cavalry armed with lance and sword. Its main role was to attack the enemy flanks and rear. These relatively lightly armored horsemen formed the most effective element of the Muslim armies during the later stages of the Islamic conquest of the Levant.
The best use of this lightly armed, fast-moving cavalry was revealed at the Battle of Yarmouk (636 AD), in which Khalid ibn Walid, knowing the skills of his horsemen, used them to turn the tables at every critical instance of the battle with their ability to engage, disengage, then turn back and attack again from the flank or rear. A strong cavalry regiment was formed by Khalid ibn Walid which included the veterans of the campaigns in Iraq and Syria. Early Muslim historians have given it the name Mutaharrik tulai'a (متحرك طليعة), or the Mobile Guard. This was used as an advance guard and a strong striking force to rout the opposing armies, with a greater mobility that gave it the upper hand when maneuvering against any Byzantine army. With this mobile striking force, the conquest of Syria was made easy. The Battle of Talas in 751 AD was a conflict between the Arab Abbasid Caliphate and the Chinese Tang dynasty over the control of Central Asia. Chinese infantry were routed by Arab cavalry near the bank of the River Talas. Later Mamluks were trained as cavalry soldiers. Mamluks were to follow the dictates of al-furusiyya, a code of conduct that included values like courage and generosity but also the doctrines of cavalry tactics, horsemanship, archery and treatment of wounds.

Maghreb

The Islamic Berber states of North Africa employed elite horse-mounted cavalry armed with spears, following the model of the original Arab occupiers of the region. Horse harness and weapons were manufactured locally, and the six-monthly stipends for horsemen were double those of their infantry counterparts. During the 8th-century Islamic conquest of Iberia large numbers of horses and riders were shipped from North Africa to specialise in raiding and the provision of support for the massed Berber footmen of the main armies. Maghrebi traditions of mounted warfare eventually influenced a number of sub-Saharan African polities in the medieval era.
The Esos of Ikoyi, military aristocrats of the Yoruba peoples, were a notable manifestation of this phenomenon.

Al-Andalus

Iran

Qizilbash were a class of Safavid militant warriors in Iran during the 15th to 18th centuries, who often fought as elite cavalry.

Ottoman Empire

During its period of greatest expansion, from the 14th to 17th centuries, cavalry formed the powerful core of the Ottoman armies. Registers dated 1475 record 22,000 Sipahi feudal cavalry levied in Europe, 17,000 Sipahis recruited from Anatolia, and 3,000 Kapikulu (regular bodyguard cavalry). During the 18th century, however, the Ottoman mounted troops evolved into light cavalry serving in the thinly populated regions of the Middle East and North Africa. Such frontier horsemen were largely raised by local governors and were separate from the main field armies of the Ottoman Empire. At the beginning of the 19th century modernised Nizam-ı Cedid ("New Army") regiments appeared, including full-time cavalry units officered from the horse guards of the Sultan.

Renaissance Europe

Ironically, the rise of infantry in the early 16th century coincided with the "golden age" of heavy cavalry; a French or Spanish army at the beginning of the century could have up to half its numbers made up of various kinds of light and heavy cavalry, whereas in earlier medieval and later 17th-century armies the proportion of cavalry was seldom more than a quarter. Knighthood largely lost its military functions and became more closely tied to social and economic prestige in an increasingly capitalistic Western society. With the rise of drilled and trained infantry, the mounted men-at-arms, now sometimes called gendarmes and often part of the standing army themselves, adopted the same role as in the Hellenistic age: that of delivering a decisive blow once the battle was already engaged, either by charging the enemy in the flank or attacking their commander-in-chief.
From the 1550s onwards, the use of gunpowder weapons solidified infantry's dominance of the battlefield and began to allow true mass armies to develop. This is closely related to the increase in the size of armies throughout the early modern period; heavily armored cavalrymen were expensive to raise and maintain, and it took years to train a skilled horseman or a horse, while arquebusiers and later musketeers could be trained and kept in the field at much lower cost, and were much easier to recruit. The Spanish tercio and later formations relegated cavalry to a supporting role. The pistol was specifically developed to try to bring cavalry back into the conflict, together with manoeuvres such as the caracole. The caracole was not particularly successful, however, and the charge (whether with lance, sword, or pistol) remained the primary mode of employment for many types of European cavalry, although by this time it was delivered in much deeper formations and with greater discipline than before. The demi-lancers and the heavily armored sword-and-pistol reiters were among the types of cavalry whose heyday was in the 16th and 17th centuries, as it was for the Polish winged hussars, a heavy cavalry force that achieved great success against Swedes, Russians, and Turks.

18th-century Europe and the Napoleonic Wars

Cavalry retained an important role in this age of regularization and standardization across European armies. They remained the primary choice for confronting enemy cavalry. Attacking an unbroken infantry force head-on usually resulted in failure, but extended linear infantry formations were vulnerable to flank or rear attacks. Cavalry was important at Blenheim (1704), Rossbach (1757), Marengo (1800), and Eylau and Friedland (1807), and remained significant throughout the Napoleonic Wars. Even with the increasing prominence of infantry, cavalry still had an irreplaceable role in armies, due to their greater mobility.
Their non-battle duties often included patrolling the fringes of army encampments, with standing orders to intercept suspected shirkers and deserters, as well as serving as outpost pickets in advance of the main body. During battle, lighter cavalry such as hussars and uhlans might skirmish with other cavalry, attack light infantry, or charge and either capture enemy artillery or render it useless by plugging the touchholes with iron spikes. Heavier cavalry such as cuirassiers, dragoons, and carabiniers usually charged infantry formations or opposing cavalry in order to rout them. Both light and heavy cavalry pursued retreating enemies, the phase in which most battle casualties occurred. The greatest cavalry charge of modern history was at the 1807 Battle of Eylau, when the entire 11,000-strong French cavalry reserve, led by Joachim Murat, launched a huge charge on and through the Russian infantry lines. Cavalry's dominating and menacing presence on the battlefield was countered by the use of infantry squares. The most notable examples are at the Battle of Quatre Bras and later at the Battle of Waterloo, in the latter of which repeated charges by up to 9,000 French cavalrymen, ordered by Michel Ney, failed to break the British-Allied army, which had formed into squares. Massed infantry, especially those formed in squares, were deadly to cavalry, but offered an excellent target for artillery. Once a bombardment had disordered the infantry formation, cavalry were able to rout and pursue the scattered foot soldiers. It was not until individual firearms gained accuracy and improved rates of fire that cavalry was diminished in this role as well. Even then light cavalry remained an indispensable tool for scouting, screening the army's movements, and harassing the enemy's supply lines until military aircraft supplanted them in this role in the early stages of World War I.
19th century Europe

By the beginning of the 19th century, European cavalry fell into four main categories:
- Cuirassiers, heavy cavalry
- Dragoons, originally mounted infantry, but later regarded as medium cavalry
- Hussars, light cavalry
- Lancers or Uhlans, light cavalry, primarily armed with lances

There were cavalry variations for individual nations as well: France had the chasseurs à cheval; Prussia had the Jäger zu Pferde; Bavaria, Saxony and Austria had the Chevaulegers; and Russia had Cossacks. Britain, from the mid-18th century, had Light Dragoons as light cavalry and Dragoons, Dragoon Guards and Household Cavalry as heavy cavalry. Only after the end of the Napoleonic Wars were the Household Cavalry equipped with cuirasses, and some other regiments were converted to lancers. In the United States Army prior to 1862 the cavalry were almost always dragoons. The Imperial Japanese Army had its cavalry uniformed as hussars, but they fought as dragoons. In the Crimean War, the Charge of the Light Brigade and the Thin Red Line at the Battle of Balaclava showed the vulnerability of cavalry when deployed without effective support.

Franco-Prussian War

During the Franco-Prussian War, at the Battle of Mars-la-Tour in 1870, a Prussian cavalry brigade decisively smashed the centre of the French battle line after skilfully concealing its approach. This event became known as Von Bredow's Death Ride after the brigade commander, Adalbert von Bredow; it would be used in the following decades to argue that massed cavalry charges still had a place on the modern battlefield.

Imperial expansion

Cavalry found a new role in colonial campaigns (irregular warfare), where modern weapons were lacking and the slow-moving infantry-artillery train or fixed fortifications were often ineffective against indigenous insurgents (unless the latter offered a fight on an equal footing, as at Tel-el-Kebir, Omdurman, etc.).
Cavalry "flying columns" proved effective, or at least cost-effective, in many campaigns, although an astute native commander (like Samori in western Africa, Shamil in the Caucasus, or any of the better Boer commanders) could turn the tables and use the greater mobility of their cavalry to offset their relative lack of firepower compared with European forces. In 1903 the British Indian Army maintained forty regiments of cavalry, numbering about 25,000 Indian sowars (cavalrymen), with British and Indian officers. Among the more famous regiments in the lineages of the modern Indian and Pakistani armies are:
- Governor General's Bodyguard (now President's Bodyguard)
- Skinner's Horse (now India's 1st Horse (Skinner's Horse))
- Gardner's Lancers (now India's 2nd Lancers (Gardner's Horse))
- Hodson's Horse (now India's 3rd Horse (Hodson's)) of Bengal Lancers fame
- 6th Bengal Cavalry (later amalgamated with 7th Hariana Lancers to form 18th King Edward's Own Cavalry), now 18th Cavalry of the Indian Army
- Probyn's Horse (now 5th Horse, Pakistan)
- Royal Deccan Horse (now India's The Deccan Horse)
- Poona Horse (now India's The Poona Horse)
- Scinde Horse (now India's The Scinde Horse)
- Queen's Own Guides Cavalry (now Pakistan)
- 11th Prince Albert Victor's Own Cavalry (Frontier Force) (now 11th Cavalry (Frontier Force), Pakistan)

Several of these formations are still active, though they are now armoured formations, for example the Guides Cavalry of Pakistan. The French Army maintained substantial cavalry forces in Algeria and Morocco from 1830 until the end of the Second World War. Much of the Mediterranean coastal terrain was suitable for mounted action, and there was a long-established culture of horsemanship amongst the Arab and Berber inhabitants. The French forces included Spahis, Chasseurs d'Afrique, Foreign Legion cavalry and mounted Goumiers.
Both Spain and Italy raised cavalry regiments from amongst the indigenous horsemen of their North African territories (see regulares, Italian Spahis and savari respectively). Imperial Germany employed mounted formations in South West Africa as part of the Schutztruppen (colonial army) garrisoning the territory.

United States

In the early American Civil War the regular United States Army's mounted rifle, dragoon, and two existing cavalry regiments were reorganized and renamed as cavalry regiments, of which there were six. Over a hundred other federal and state cavalry regiments were organized, but the infantry played a much larger role in many battles due to its larger numbers, lower cost per rifle fielded, and much easier recruitment. However, cavalry saw a role as part of screening forces and in foraging and scouting. The later phases of the war saw the Federal army developing a truly effective cavalry force fighting as scouts, raiders, and, with repeating rifles, as mounted infantry. The distinguished 1st Virginia Cavalry ranks as one of the most effectual and successful cavalry units on the Confederate side. Noted cavalry commanders included Confederate general J.E.B. Stuart, Nathan Bedford Forrest, and John Singleton Mosby (a.k.a. "The Grey Ghost") and, on the Union side, Philip Sheridan and George Armstrong Custer. After the Civil War, as the volunteer armies disbanded, the regular army cavalry regiments increased in number from six to ten, among them Custer's U.S. 7th Cavalry Regiment of Little Bighorn fame, and the African-American U.S. 9th Cavalry Regiment and U.S. 10th Cavalry Regiment. The black units, along with others (both cavalry and infantry), collectively became known as the Buffalo Soldiers. According to Robert M. Utley: the frontier army was a conventional military force trying to control, by conventional military methods, a people that did not behave like conventional enemies and, indeed, quite often were not enemies at all.
This is the most difficult of all military assignments, whether in Africa, Asia, or the American West. These regiments, which rarely took the field as complete organizations, served throughout the American Indian Wars through the close of the frontier in the 1890s. Volunteer cavalry regiments like the Rough Riders consisted of horsemen such as cowboys, ranchers and other outdoorsmen who served as cavalry in the United States military.

First World War

Pre-war developments

At the beginning of the 20th century all armies still maintained substantial cavalry forces, although there was contention over whether their role should revert to that of mounted infantry (the historic dragoon function). Following the experience of the South African War of 1899–1902 (where mounted Boer citizen commandos fighting on foot from cover proved more effective than regular cavalry), the British Army withdrew lances for all but ceremonial purposes and, in 1903, placed a new emphasis on training for dismounted action. An Army Order dated 1909, however, instructed that the six British lancer regiments then in existence resume use of this impressive but obsolete weapon for active service. In 1882 the Imperial Russian Army converted all its line hussar and lancer regiments to dragoons, with an emphasis on mounted infantry training. In 1910 these regiments reverted to their historic roles, designations and uniforms. By 1909 official regulations dictating the role of the Imperial German cavalry had been revised to indicate an increasing realization of the realities of modern warfare. The massive cavalry charge in three waves which had previously marked the end of annual maneuvers was discontinued, and a new emphasis was placed in training on scouting, raiding and pursuit rather than main battle involvement. The perceived importance of cavalry was however still evident, with thirteen new regiments of mounted rifles (Jäger zu Pferde) being raised shortly before the outbreak of war in 1914.
In spite of significant experience in mounted warfare in Morocco during 1908–14, the French cavalry remained a highly conservative institution. The traditional tactical distinctions between heavy, medium, and light cavalry branches were retained. French cuirassiers wore breastplates and plumed helmets unchanged from the Napoleonic period during the early months of World War I. Dragoons were similarly equipped, though they did not wear cuirasses and did carry lances. Light cavalry were described as being "a blaze of colour". French cavalry of all branches were well mounted and were trained to change position and charge at full gallop. One weakness in training was that French cavalrymen seldom dismounted on the march, and their horses suffered heavily from raw backs in August 1914.

Opening stages

Europe 1914

In August 1914 all combatant armies still retained substantial numbers of cavalry, and the mobile nature of the opening battles on both the Eastern and Western Fronts provided a number of instances of traditional cavalry actions, though on a smaller and more scattered scale than those of previous wars. The 110 regiments of Imperial German cavalry, while as colourful and traditional as any in peacetime appearance, had adopted a practice of falling back on infantry support when any substantial opposition was encountered. These cautious tactics aroused derision amongst their more conservative French and Russian opponents but proved appropriate to the new nature of warfare. A single attempt by the German army, on 12 August 1914, to use six regiments of massed cavalry to cut off the Belgian field army from Antwerp foundered when they were driven back in disorder by rifle fire. The two German cavalry brigades involved lost 492 men and 843 horses in repeated charges against dismounted Belgian lancers and infantry.
One of the last recorded charges by French cavalry took place on the night of 9/10 September 1914, when a squadron of the 16th Dragoons overran a German airfield at Soissons while suffering heavy losses. Once the front lines stabilised on the Western Front with the start of trench warfare, a combination of barbed wire, uneven muddy terrain, machine guns and rapid-fire rifles proved deadly to horse-mounted troops, and by early 1915 most cavalry units were no longer seeing front-line action. On the Eastern Front a more fluid form of warfare arose from flat open terrain favorable to mounted warfare. On the outbreak of war in 1914 the bulk of the Russian cavalry was deployed at full strength in frontier garrisons, and during the period that the main armies were mobilizing, scouting and raiding into East Prussia and Austrian Galicia was undertaken by mounted troops trained to fight with sabre and lance in the traditional style. On 21 August 1914 the 4th Austro-Hungarian Kavalleriedivision fought a major mounted engagement at Jaroslavic with the Russian 10th Cavalry Division, in what was arguably the final historic battle to involve thousands of horsemen on both sides. While this was the last massed cavalry encounter on the Eastern Front, the absence of good roads limited the use of mechanized transport, and even the technologically advanced Imperial German Army continued to deploy up to twenty-four horse-mounted divisions in the East as late as 1917.

Europe 1915–18

For the remainder of the war on the Western Front, cavalry had virtually no role to play. The British and French armies dismounted many of their cavalry regiments and used them in infantry and other roles: the Life Guards, for example, spent the last months of the war as a machine gun corps, and the Australian Light Horse served as light infantry during the Gallipoli campaign.
In September 1914 cavalry comprised 9.28% of the total manpower of the British Expeditionary Force in France; by July 1918 this proportion had fallen to 1.65%. As early as the first winter of the war most French cavalry regiments had dismounted a squadron each for service in the trenches. The French cavalry numbered 102,000 in May 1915 but had been reduced to 63,000 by October 1918. The German Army dismounted nearly all its cavalry in the West, maintaining only one mounted division on that front by January 1917. Italy entered the war in 1915 with thirty regiments of line cavalry, lancers and light horse. While employed effectively against their Austro-Hungarian counterparts during the initial offensives across the Isonzo River, the Italian mounted forces ceased to have a significant role as the front shifted into mountainous terrain. By 1916 most cavalry machine-gun sections and two complete cavalry divisions had been dismounted and seconded to the infantry. Some cavalry were retained as mounted troops behind the lines in anticipation of a penetration of the opposing trenches that it seemed would never come. Tanks, introduced on the Western Front by the British in September 1916 during the Battle of the Somme, had the capacity to achieve such breakthroughs but did not have the reliable range to exploit them. In their first major use at the Battle of Cambrai (1917), the plan was for a cavalry division to follow behind the tanks; however, it was not able to cross a canal because a tank had broken the only bridge. While no longer the main frontline troops, cavalry were still occasionally used in large numbers for offensives, such as in the Battle of Caporetto and the Battle of Moreuil Wood. It was not until the German Army had been forced to retreat in the Hundred Days Offensive of 1918 that cavalry were again able to operate in their intended role.
There was a successful charge by the British 7th Dragoon Guards on the last day of the war. In the wider spaces of the Eastern Front a more fluid form of warfare continued, and there was still a use for mounted troops. Some wide-ranging actions were fought, again mostly in the early months of the war. However, even here the value of cavalry was overrated, and the maintenance of large mounted formations at the front by the Russian Army put a major strain on the railway system, to little strategic advantage. In February 1917 the Russian regular cavalry (exclusive of Cossacks) was reduced by nearly a third from its peak number of 200,000, as two squadrons of each regiment were dismounted and incorporated into additional infantry battalions. Their Austro-Hungarian opponents, plagued by a shortage of trained infantry, had been obliged to progressively convert most horse cavalry regiments to dismounted rifle units starting in late 1914.

Middle East

In the Middle East, during the Sinai and Palestine Campaign, mounted forces (British, Indian, Ottoman, Australian, Arab and New Zealand) retained an important strategic role both as mounted infantry and cavalry. In Egypt formations like the New Zealand Mounted Rifles Brigade and the Australian Light Horse of the ANZAC Mounted Division, operating as mounted infantry, drove German and Ottoman forces back from Romani to Magdhaba and Rafa and out of the Egyptian Sinai Peninsula in 1916. After a stalemate on the Gaza–Beersheba line between March and October 1917, Beersheba was captured by the Australian Mounted Division's 4th Light Horse Brigade. Their mounted charge succeeded after a coordinated attack by the British infantry and Yeomanry cavalry and the Australian and New Zealand Light Horse and Mounted Rifles brigades.
A series of coordinated attacks by these Egyptian Expeditionary Force infantry and mounted troops were also successful at the Battle of Mughar Ridge, during which the British infantry divisions and the Desert Mounted Corps drove two Ottoman armies back to the Jaffa–Jerusalem line. The infantry, with mainly dismounted cavalry and mounted infantry, fought in the Judean Hills to eventually almost encircle Jerusalem, which was occupied shortly after. During a pause in operations necessitated by the German spring offensive in 1918 on the Western Front, joint infantry and mounted infantry attacks towards Amman and Es Salt resulted in retreats back to the Jordan Valley, which continued to be occupied by mounted divisions during the summer of 1918. The Australian Mounted Division was armed with swords, and in September, after the British Empire infantry of XXI Corps successfully breached the Ottoman line on the Mediterranean coast, cavalry attacks by the 4th Cavalry Division, 5th Cavalry Division and Australian Mounted Division almost encircled two Ottoman armies in the Judean Hills, forcing their retreat. Meanwhile, Chaytor's Force of infantry and mounted infantry in the ANZAC Mounted Division held the Jordan Valley, covering the right flank, before later advancing eastwards to capture Es Salt and Amman and half of a third Ottoman army. The 4th Cavalry Division and the Australian Mounted Division, followed by the 5th Cavalry Division, then pursued the retreating forces to Damascus. Armoured cars and 5th Cavalry Division lancers were continuing the pursuit of Ottoman units north of Aleppo when the Armistice of Mudros was signed by the Ottoman Empire.

Post–World War I

A combination of military conservatism in almost all armies and post-war financial constraints prevented the lessons of 1914–1918 from being acted on immediately.
There was a general reduction in the number of cavalry regiments in the British, French, Italian and other Western armies, but it was still argued with conviction (for example in the 1922 edition of the Encyclopædia Britannica) that mounted troops had a major role to play in future warfare. The 1920s saw an interim period during which cavalry remained a proud and conspicuous element of all major armies, though much less so than prior to 1914. Cavalry was extensively used in the Russian Civil War and the Soviet–Polish War. The last major cavalry battle was the Battle of Komarów in 1920, between Poland and the Russian Bolsheviks. Colonial warfare in Morocco, Syria, the Middle East and the North West Frontier of India provided some opportunities for mounted action against enemies lacking advanced weaponry. The post-war German Army (Reichsheer) was permitted a large proportion of cavalry (18 regiments, or 16.4% of total manpower) under the conditions of the Treaty of Versailles. The British Army mechanised all cavalry regiments between 1929 and 1941, redefining their role from horse to armoured vehicles to form the Royal Armoured Corps together with the Royal Tank Regiment. The U.S. Cavalry abandoned its sabres in 1934 and commenced the conversion of its horsed regiments to mechanized cavalry, starting with the First Regiment of Cavalry in January 1933. During the Turkish War of Independence, Turkish cavalry under General Fahrettin Altay was instrumental in the Kemalist victory over the invading Greek Army in 1922 at the Battle of Dumlupınar. The V Cavalry Division was able to slip behind the Greek army, cutting off all communication and supply lines as well as all avenues of retreat, forcing the surrender of the remaining Greek army; this may have been the last time in history that cavalry played a decisive role in the outcome of a battle. During the 1930s the French Army experimented with integrating mounted and mechanised cavalry units into larger formations.
Dragoon regiments were converted to motorised infantry (trucks and motorcycles), and cuirassiers to armoured units, while light cavalry (Chasseurs à Cheval, Hussars and Spahis) remained as mounted sabre squadrons. The theory was that mixed forces comprising these diverse units could utilise the strengths of each according to circumstances. In practice, mounted troops proved unable to keep up with fast-moving mechanised units over any distance. The thirty-nine cavalry regiments of the British Indian Army were reduced to twenty-one as the result of a series of amalgamations immediately following World War I. The new establishment remained unchanged until 1936, when three regiments were redesignated as permanent training units, each with six still-mounted regiments linked to them. In 1938 the process of mechanization began with the conversion of a full cavalry brigade (two Indian regiments and one British) to armoured car and tank units. By the end of 1940 all of the Indian cavalry had been mechanized, initially, in the majority of cases, as motorized infantry transported in 15-cwt trucks. The last horsed regiment of the British Indian Army (other than the Viceregal Bodyguard and some Indian States Forces regiments) was the 19th King George's Own Lancers, which had its final mounted parade at Rawalpindi on 28 October 1939. This unit still exists in the Pakistan Army as an armored regiment. World War II While most armies still maintained cavalry units at the outbreak of World War II in 1939, significant mounted action was largely restricted to the Polish, Balkan, and Soviet campaigns. Rather than charge their mounts into battle, cavalry units were either used as mounted infantry (using horses to move into position and then dismounting for combat) or as reconnaissance units (especially in areas not suited to tracked or wheeled vehicles). Polish A popular myth is that Polish cavalry armed with lances charged German tanks during the September 1939 campaign.
This arose from misreporting of a single clash on 1 September near Krojanty, when two squadrons of the Polish 18th Lancers armed with sabres scattered German infantry before being caught in the open by German armoured cars. Two examples illustrate how the myth developed. First, because motorised vehicles were in short supply, the Poles used horses to pull anti-tank weapons into position. Second, there were a few incidents when Polish cavalry was trapped by German tanks and attempted to fight free. However, this did not mean that the Polish army chose to attack tanks with horse cavalry. Later, on the Eastern Front, the Red Army did deploy cavalry units effectively against the Germans. A more correct term would be "mounted infantry" instead of "cavalry", as horses were primarily used as a means of transportation, for which they were very suitable in view of the very poor road conditions in pre-war Poland. Another myth describes Polish cavalry as being armed with both sabres and lances; lances were used for peacetime ceremonial purposes only, and the primary weapon of the Polish cavalryman in 1939 was a rifle. Individual equipment did include a sabre, probably because of well-established tradition, and in melee combat this secondary weapon would probably have been more effective than a rifle and bayonet. Moreover, the Polish cavalry brigade order of battle in 1939 included, apart from the mounted soldiers themselves, light and heavy machine guns (wheeled), the model 35 anti-tank rifle, anti-aircraft weapons, anti-tank artillery such as the Bofors 37 mm gun, and light and scout tanks. The last mutual charge between opposing cavalry in Europe took place in Poland during the Battle of Krasnobród, when Polish and German cavalry units clashed with each other. The last classical cavalry charge of the war took place on March 1, 1945 during the Battle of Schoenfeld by the 1st "Warsaw" Independent Cavalry Brigade.
Infantry and tanks had been employed to little effect against the German position; both floundered in the open wetlands, dominated by infantry and anti-tank fire from the German fortifications on the forward slope of Hill 157 overlooking the wetlands. The Germans had not taken cavalry into consideration when fortifying their position, which, combined with the swift assault of the "Warsaw" brigade, allowed the cavalry to overrun the German anti-tank guns and consolidate into an attack on the village itself, now supported by infantry and tanks. Greek The Italian invasion of Greece in October 1940 saw mounted cavalry used effectively by the Greek defenders along the mountainous frontier with Albania. Three Greek cavalry regiments (two mounted and one partially mechanized) played an important role in the Italian defeat in this difficult terrain. Soviet The contribution of Soviet cavalry to the development of modern military operational doctrine and its importance in defeating Nazi Germany have been eclipsed by the higher profile of tanks and airplanes. Despite the view portrayed by German propaganda, Soviet cavalry contributed significantly to the defeat of the Axis armies. Their contributions included being the most mobile troops in the early stages, when trucks and other equipment were low in quality, as well as providing cover for retreating forces. Considering their relatively limited numbers, the Soviet cavalry played a significant role in giving Germany its first real defeats in the early stages of the war. The continuing potential of mounted troops was demonstrated during the Battle of Moscow, against Guderian and the powerful central German 9th Army. Cavalry were amongst the first Soviet units to complete the encirclement in the Battle of Stalingrad, thus sealing the fate of the German 6th Army. Mounted Soviet forces also played a role in the encirclement of Berlin, with some Cossack cavalry units reaching the Reichstag in April 1945.
Throughout the war they performed important tasks such as the capture of bridgeheads, which is considered one of the hardest jobs in battle, often doing so with inferior numbers. For instance, the 8th Guards Cavalry Regiment of the 2nd Guards Cavalry Division often fought outnumbered against the best German units. By the final stages of the war only the Soviet Union was still fielding mounted units in substantial numbers, some in combined mechanized and horse units. The advantage of this approach was that in exploitation mounted infantry could keep pace with advancing tanks. Other factors favoring the retention of mounted forces included the high quality of the Russian Cossacks, who made up about half of all cavalry, and the relative lack of roads suitable for wheeled vehicles in many parts of the Eastern Front. Another consideration was that the logistic capacity required to support very large motorized forces exceeded that necessary for mounted troops. The main use of Soviet cavalry was infiltration through front lines followed by deep raids, which disorganized German supply lines. Another role was the pursuit of retreating enemy forces during major frontline operations and breakthroughs. Italian The last mounted sabre charge by Italian cavalry occurred on August 24, 1942 at Isbuscenski (Russia), when a squadron of the Savoia Cavalry Regiment charged the 812th Siberian Infantry Regiment. The remainder of the regiment, together with the Novara Lancers, made a dismounted attack in an action that ended with the retreat of the Russians after heavy losses on both sides. The final Italian cavalry action occurred on October 17, 1942 in Poloj (now Croatia) by a squadron of the Alexandria Cavalry Regiment against a large group of Yugoslav partisans. Other Axis Romanian, Hungarian and Italian cavalry were dispersed or disbanded following the retreat of the Axis forces from Russia.
Germany still maintained some mounted (mixed with bicycles) SS and Cossack units until the last days of the war. Finnish Finland used mounted troops effectively against Russian forces in forested terrain during the Continuation War. The last Finnish cavalry unit was not disbanded until 1947. United States The U.S. Army's last horse cavalry actions were fought during World War II: a) by the 26th Cavalry Regiment, a small mounted regiment of Philippine Scouts which fought the Japanese during the retreat down the Bataan peninsula, until it was effectively destroyed by January 1942; and b) on captured German horses by the mounted reconnaissance section of the U.S. 10th Mountain Division in a spearhead pursuit of the German Army across the Po Valley in Italy in April 1945. The last horsed U.S. Cavalry (the Second Cavalry Division) were dismounted in March 1944.
Some bacteria, such as Helicobacter pylori, employ yet another enzyme for this conversion – succinyl-CoA:acetoacetate CoA-transferase (EC 2.8.3.5). Some variability also exists at the previous step – the conversion of 2-oxoglutarate to succinyl-CoA. While most organisms utilize the ubiquitous NAD+-dependent 2-oxoglutarate dehydrogenase, some bacteria utilize a ferredoxin-dependent 2-oxoglutarate synthase (EC 1.2.7.3). Other organisms, including obligately autotrophic and methanotrophic bacteria and archaea, bypass succinyl-CoA entirely, and convert 2-oxoglutarate to succinate via succinate semialdehyde, using EC 4.1.1.71, 2-oxoglutarate decarboxylase, and EC 1.2.1.79, succinate-semialdehyde dehydrogenase. In cancer, substantial metabolic derangements occur to ensure the proliferation of tumor cells, and consequently metabolites can accumulate which serve to facilitate tumorigenesis; these are dubbed oncometabolites. Among the best characterized oncometabolites is 2-hydroxyglutarate, which is produced through a heterozygous gain-of-function mutation (specifically a neomorphic one) in isocitrate dehydrogenase (IDH); hence IDH is considered an oncogene. Under normal circumstances IDH catalyzes the oxidation of isocitrate to oxalosuccinate, which then spontaneously decarboxylates to alpha-ketoglutarate, as discussed above; the mutant enzyme performs an additional NADPH-dependent reduction after the formation of alpha-ketoglutarate to yield 2-hydroxyglutarate. Under physiological conditions, 2-hydroxyglutarate is a minor error product of several metabolic pathways that is readily converted back to alpha-ketoglutarate by hydroxyglutarate dehydrogenase enzymes (L2HGDH and D2HGDH); it has no known physiologic role in mammalian cells. Of note, in cancer 2-hydroxyglutarate is likely a terminal metabolite, as isotope labelling experiments in colorectal cancer cell lines show that its conversion back to alpha-ketoglutarate is too low to measure.
In cancer, 2-hydroxyglutarate serves as a competitive inhibitor of a number of alpha-ketoglutarate-dependent dioxygenases, enzymes whose reactions require alpha-ketoglutarate. This mutation results in several important changes to the metabolism of the cell. For one thing, because there is an extra NADPH-consuming reduction, it can contribute to depletion of cellular stores of NADPH and also reduce the levels of alpha-ketoglutarate available to the cell. The depletion of NADPH is particularly problematic because NADPH is highly compartmentalized and cannot freely diffuse between the organelles of the cell; it is produced largely via the pentose phosphate pathway in the cytoplasm. The depletion of NADPH results in increased oxidative stress within the cell, as NADPH is a required cofactor in the production of GSH, and this oxidative stress can result in DNA damage. There are also changes on the genetic and epigenetic level through the function of histone lysine demethylases (KDMs) and ten-eleven translocation (TET) enzymes; ordinarily TETs hydroxylate 5-methylcytosines to prime them for demethylation. In the absence of alpha-ketoglutarate this cannot be done, and there is hence hypermethylation of the cell's DNA, serving to promote epithelial–mesenchymal transition (EMT) and inhibit cellular differentiation. A similar phenomenon is observed for the Jumonji C family of KDMs, which require a hydroxylation to perform demethylation at the epsilon-amino methyl group. Additionally, the inability of prolyl hydroxylases to catalyze their reactions results in stabilization of hypoxia-inducible factor alpha, since prolyl hydroxylation is necessary to promote the factor's degradation (under conditions of low oxygen there is not adequate substrate for hydroxylation). This results in a pseudohypoxic phenotype in the cancer cell that promotes angiogenesis, metabolic reprogramming, cell growth, and migration. Regulation Allosteric regulation by metabolites.
The regulation of the citric acid cycle is largely determined by product inhibition and substrate availability. If the cycle were permitted to run unchecked, large amounts of metabolic energy could be wasted in overproduction of reduced coenzymes such as NADH, and of ATP. The major eventual substrate of the cycle is ADP, which is converted to ATP. A reduced amount of ADP causes accumulation of NADH, which in turn can inhibit a number of enzymes. NADH, a product of all dehydrogenases in the citric acid cycle with the exception of succinate dehydrogenase, inhibits pyruvate dehydrogenase, isocitrate dehydrogenase, α-ketoglutarate dehydrogenase, and also citrate synthase. Acetyl-CoA inhibits pyruvate dehydrogenase, while succinyl-CoA inhibits alpha-ketoglutarate dehydrogenase and citrate synthase. When tested in vitro with TCA enzymes, ATP inhibits citrate synthase and α-ketoglutarate dehydrogenase; however, ATP levels do not change by more than 10% in vivo between rest and vigorous exercise. There is no known allosteric mechanism that can account for large changes in reaction rate from an allosteric effector whose concentration changes less than 10%. Citrate is used for feedback inhibition, as it inhibits phosphofructokinase, an enzyme involved in glycolysis that catalyses the formation of fructose 1,6-bisphosphate, a precursor of pyruvate. This prevents a constant high rate of flux when there is an accumulation of citrate and a decrease in substrate for the enzyme. Regulation by calcium. Calcium is also used as a regulator in the citric acid cycle. Calcium levels in the mitochondrial matrix can reach the tens of micromolar range during cellular activation. It activates pyruvate dehydrogenase phosphatase, which in turn activates the pyruvate dehydrogenase complex. Calcium also activates isocitrate dehydrogenase and α-ketoglutarate dehydrogenase. This increases the reaction rate of many of the steps in the cycle, and therefore increases flux throughout the pathway.
Transcriptional regulation. Recent work has demonstrated an important link between intermediates of the citric acid cycle and the regulation of hypoxia-inducible factors (HIF). HIF plays a role in the regulation of oxygen homeostasis, and is a transcription factor that targets angiogenesis, vascular remodeling, glucose utilization, iron transport and apoptosis. HIF is synthesized constitutively, and hydroxylation of at least one of two critical proline residues mediates its interaction with the von Hippel Lindau E3 ubiquitin ligase complex, which targets it for rapid degradation. This reaction is catalysed by prolyl 4-hydroxylases. Fumarate and succinate have been identified as potent inhibitors of prolyl hydroxylases, thus leading to the stabilisation of HIF. Major metabolic pathways converging on the citric acid cycle Several catabolic pathways converge on the citric acid cycle. Most of these reactions add intermediates to the citric acid cycle, and are therefore known as anaplerotic reactions, from the Greek meaning to "fill up". These increase the amount of acetyl-CoA that the cycle is able to carry, increasing the mitochondrion's capability to carry out respiration if this is otherwise a limiting factor. Processes that remove intermediates from the cycle are termed "cataplerotic" reactions. In this section and in the next, the citric acid cycle intermediates are indicated in italics to distinguish them from other substrates and end-products. Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix. Here they can be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, as in the normal cycle. However, it is also possible for pyruvate to be carboxylated by pyruvate carboxylase to form oxaloacetate.
This latter reaction "fills up" the amount of oxaloacetate in the citric acid cycle, and is therefore an anaplerotic reaction, increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g. in muscle) are suddenly increased by activity. In the citric acid cycle all the intermediates (e.g. citrate, iso-citrate, alpha-ketoglutarate, succinate, fumarate, malate, and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that the additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence the addition of any one of them to the cycle has an anaplerotic effect, and its removal has a cataplerotic effect. These anaplerotic and cataplerotic reactions will, during the course of the cycle, increase or decrease the amount of oxaloacetate available to combine with acetyl-CoA to form citric acid. This in turn increases or decreases the rate of ATP production by the mitochondrion, and thus the availability of ATP to the cell. Acetyl-CoA, on the other hand, derived from pyruvate oxidation or from the beta-oxidation of fatty acids, is the only fuel to enter the citric acid cycle. With each turn of the cycle one molecule of acetyl-CoA is consumed for every molecule of oxaloacetate present in the mitochondrial matrix, and is never regenerated. It is the oxidation of the acetate portion of acetyl-CoA that produces CO2 and water, with the energy thus released captured in the form of ATP. The three steps of beta-oxidation resemble the steps that occur in the production of oxaloacetate from succinate in the TCA cycle. Acyl-CoA is oxidized to trans-enoyl-CoA while FAD is reduced to FADH2, which is similar to the oxidation of succinate to fumarate. Next, trans-enoyl-CoA is hydrated across the double bond to beta-hydroxyacyl-CoA, just as fumarate is hydrated to malate.
Lastly, beta-hydroxyacyl-CoA is oxidized to beta-ketoacyl-CoA while NAD+ is reduced to NADH, which follows the same process as the oxidation of malate to oxaloacetate. In the liver, the carboxylation of cytosolic pyruvate into intra-mitochondrial oxaloacetate is an early step in the gluconeogenic pathway, which converts lactate and de-aminated alanine into glucose under the influence of high levels of glucagon and/or epinephrine in the blood. Here the addition of oxaloacetate to the mitochondrion does not have a net anaplerotic effect, as another citric acid cycle intermediate (malate) is immediately removed from the mitochondrion to be converted into cytosolic oxaloacetate, which is ultimately converted into glucose, in a process that is almost the reverse of glycolysis. In protein catabolism, proteins are broken down by proteases into their constituent amino acids. Their carbon skeletons (i.e. the de-aminated amino acids) may either enter the citric acid cycle as intermediates (e.g. alpha-ketoglutarate derived from glutamate or glutamine), having an anaplerotic effect on the cycle, or, in the case of leucine, isoleucine, lysine, phenylalanine, tryptophan, and tyrosine, they are converted into acetyl-CoA, which can be burned to CO2 and water, used to form ketone bodies (which can only be burned in tissues other than the liver, where they are formed), or excreted via the urine or breath. These latter amino acids are therefore termed "ketogenic" amino acids. In eukaryotic cells the citric acid cycle takes place in the matrix of the mitochondrion. In prokaryotic cells, such as bacteria, which lack mitochondria, the citric acid cycle reaction sequence is performed in the cytosol, with the proton gradient for ATP production being across the cell's surface (plasma membrane) rather than the inner membrane of the mitochondrion. The overall yield of energy-containing compounds from the TCA cycle is three NADH, one FADH2, and one GTP.
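The per-turn yield just quoted can be converted into approximate ATP equivalents using the commonly cited conversion factors of ~2.5 ATP per NADH and ~1.5 ATP per FADH2 (these factors appear later in this article; the sketch below is an illustration with rounded values, not exact stoichiometry):

```python
# Approximate ATP-equivalent yield of one turn of the TCA cycle.
# Conversion factors (~2.5 ATP/NADH, ~1.5 ATP/FADH2) are rounded
# empirical values, not exact stoichiometry.
NADH_PER_TURN = 3
FADH2_PER_TURN = 1
GTP_PER_TURN = 1  # interconvertible with ATP via nucleoside-diphosphate kinase

ATP_PER_NADH = 2.5
ATP_PER_FADH2 = 1.5

def atp_equivalents_per_turn():
    """ATP equivalents obtained from one acetyl-CoA oxidized by the cycle."""
    return (NADH_PER_TURN * ATP_PER_NADH
            + FADH2_PER_TURN * ATP_PER_FADH2
            + GTP_PER_TURN)

print(atp_equivalents_per_turn())  # 10.0
```

This is the origin of the frequently quoted figure of roughly 10 ATP per acetyl-CoA fully oxidized by the cycle and oxidative phosphorylation.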
Discovery Several of the components and reactions of the citric acid cycle were established in the 1930s by the research of Albert Szent-Györgyi, who received the Nobel Prize in Physiology or Medicine in 1937, in part for his discoveries pertaining to fumaric acid, a key component of the cycle. He made this discovery by studying pigeon breast muscle. Because this tissue maintains its oxidative capacity well after being broken down in the Latapie mill and released in aqueous solutions, pigeon breast muscle was very well suited to the study of oxidative reactions. The citric acid cycle itself was finally identified in 1937 by Hans Adolf Krebs and William Arthur Johnson while at the University of Sheffield, for which the former received the Nobel Prize for Physiology or Medicine in 1953, and for whom the cycle is sometimes named the "Krebs cycle". Overview The citric acid cycle is a key metabolic pathway that connects carbohydrate, fat, and protein metabolism. The reactions of the cycle are carried out by eight enzymes that completely oxidize acetate (a two-carbon molecule), in the form of acetyl-CoA, into two molecules each of carbon dioxide and water. Through catabolism of sugars, fats, and proteins, the two-carbon organic product acetyl-CoA is produced, which enters the citric acid cycle. The reactions of the cycle also convert three equivalents of nicotinamide adenine dinucleotide (NAD+) into three equivalents of reduced NAD+ (NADH), one equivalent of flavin adenine dinucleotide (FAD) into one equivalent of FADH2, and one equivalent each of guanosine diphosphate (GDP) and inorganic phosphate (Pi) into one equivalent of guanosine triphosphate (GTP). The NADH and FADH2 generated by the citric acid cycle are, in turn, used by the oxidative phosphorylation pathway to generate energy-rich ATP.
One of the primary sources of acetyl-CoA is the breakdown of sugars by glycolysis, which yields pyruvate that in turn is decarboxylated by the pyruvate dehydrogenase complex, generating acetyl-CoA according to the following reaction scheme: pyruvate + NAD+ + CoA-SH → acetyl-CoA + NADH + CO2. The product of this reaction, acetyl-CoA, is the starting point for the citric acid cycle. Acetyl-CoA may also be obtained from the oxidation of fatty acids. Below is a schematic outline of the cycle: The citric acid cycle begins with the transfer of a two-carbon acetyl group from acetyl-CoA to the four-carbon acceptor compound (oxaloacetate) to form a six-carbon compound (citrate). The citrate then goes through a series of chemical transformations, losing two carboxyl groups as CO2. The carbons lost as CO2 originate from what was oxaloacetate, not directly from acetyl-CoA. The carbons donated by acetyl-CoA become part of the oxaloacetate carbon backbone after the first turn of the citric acid cycle. Loss of the acetyl-CoA-donated carbons as CO2 requires several turns of the citric acid cycle. However, because of the role of the citric acid cycle in anabolism, they might not be lost, since many citric acid cycle intermediates are also used as precursors for the biosynthesis of other molecules. Most of the electrons made available by the oxidative steps of the cycle are transferred to NAD+, forming NADH. For each acetyl group that enters the citric acid cycle, three molecules of NADH are produced. The citric acid cycle thus comprises a series of oxidation–reduction reactions in mitochondria. In addition, electrons from the succinate oxidation step are transferred first to the FAD cofactor of succinate dehydrogenase, reducing it to FADH2, and eventually to ubiquinone (Q) in the mitochondrial membrane, reducing it to ubiquinol (QH2), which is a substrate of the electron transfer chain at the level of Complex III.
For every NADH and FADH2 that are produced in the citric acid cycle, 2.5 and 1.5 ATP molecules are generated in oxidative phosphorylation, respectively. At the end of each cycle, the four-carbon oxaloacetate has been regenerated, and the cycle continues. Steps There are ten basic steps in the citric acid cycle, as outlined below. The cycle is continuously supplied with new carbon in the form of acetyl-CoA, entering at step 0 in the table. Two carbon atoms are oxidized to CO2, the energy from these reactions is transferred to other metabolic processes through GTP (or ATP), and as electrons in NADH and QH2. The NADH generated in the citric acid cycle may later be oxidized (donate its electrons) to drive ATP synthesis in a type of process called oxidative phosphorylation. FADH2 is covalently attached to succinate dehydrogenase, an enzyme which functions both in the CAC and the mitochondrial electron transport chain in oxidative phosphorylation. FADH2, therefore, facilitates transfer of electrons to coenzyme Q, which is the final electron acceptor of the reaction catalyzed by the succinate:ubiquinone oxidoreductase complex, also acting as an intermediate in the electron transport chain. Mitochondria in animals, including humans, possess two succinyl-CoA synthetases: one that produces GTP from GDP, and another that produces ATP from ADP. Plants have the type that produces ATP (ADP-forming succinyl-CoA synthetase). Several of the enzymes in the cycle may be loosely associated in a multienzyme protein complex within the mitochondrial matrix. The GTP that is formed by GDP-forming succinyl-CoA synthetase may be utilized by nucleoside-diphosphate kinase to form ATP (the catalyzed reaction is GTP + ADP → GDP + ATP). Products Products of the first turn of the cycle are one GTP (or ATP), three NADH, one FADH2 and two CO2. Because two acetyl-CoA molecules are produced from each glucose molecule, two cycles are required per glucose molecule. 
Therefore, at the end of two cycles, the products are: two GTP, six NADH, two FADH2, and four CO2. The above reactions are balanced if Pi represents the H2PO4− ion, ADP and GDP the ADP2− and GDP2− ions, respectively, and ATP and GTP the ATP3− and GTP3− ions, respectively. The total number of ATP molecules obtained after complete oxidation of one glucose in glycolysis, citric acid cycle, and oxidative phosphorylation is estimated to be between 30 and 38. Efficiency The theoretical maximum yield of ATP through oxidation of one molecule of glucose in glycolysis, citric acid cycle, and oxidative phosphorylation is 38 (assuming 3 molar equivalents of ATP per equivalent NADH and 2 ATP per FADH2). In eukaryotes, two equivalents of NADH and four equivalents of ATP are generated in glycolysis, which takes place in the cytoplasm. Transport of two of these equivalents of NADH into the mitochondria consumes two equivalents of ATP, thus reducing the net production of ATP to 36. Furthermore, inefficiencies in oxidative phosphorylation due to leakage of protons across the mitochondrial membrane and slippage of the ATP synthase/proton pump commonly reduce the ATP yield from NADH and FADH2 to less than the theoretical maximum. The observed yields are, therefore, closer to ~2.5 ATP per NADH and ~1.5 ATP per FADH2, further reducing the total net production of ATP to approximately 30. An assessment of the total ATP yield with newly revised proton-to-ATP ratios provides an estimate of 29.85 ATP per glucose molecule. Variation While the citric acid cycle is in general highly conserved, there is significant variability in the enzymes found in different taxa (note that the diagrams on this page are specific to the mammalian pathway variant). Some differences exist between eukaryotes and prokaryotes. The conversion of D-threo-isocitrate to 2-oxoglutarate is catalyzed in eukaryotes by the NAD+-dependent EC 1.1.1.41, while prokaryotes employ the NADP+-dependent EC 1.1.1.42.
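The efficiency arithmetic quoted above (38 theoretical, 36 after NADH transport costs, roughly 30 with observed phosphorylation yields) can be reproduced in a short sketch. The breakdown of NADH and FADH2 counts per glucose is an illustrative assumption consistent with those figures, and the two-ATP shuttle cost corresponds to the NADH transport step described in the text:

```python
# Sketch of the ATP bookkeeping for complete oxidation of one glucose.
# Per-glucose carrier counts and the shuttle cost are illustrative
# assumptions matching the figures quoted in the text.
def atp_yield(atp_per_nadh, atp_per_fadh2, shuttle_cost=0):
    substrate_level = 2 + 2  # net glycolytic ATP + 2 GTP from two TCA turns
    nadh = 2 + 2 + 6         # glycolysis + pyruvate dehydrogenase + two TCA turns
    fadh2 = 2                # two TCA turns
    return (substrate_level
            + nadh * atp_per_nadh
            + fadh2 * atp_per_fadh2
            - shuttle_cost)

print(atp_yield(3, 2))                       # theoretical maximum: 38
print(atp_yield(3, 2, shuttle_cost=2))       # after NADH transport cost: 36
print(atp_yield(2.5, 1.5, shuttle_cost=2))   # observed factors: 30.0
```

The exact observed value depends on which shuttle (malate–aspartate or glycerol phosphate) carries the cytosolic NADH, which is why the text gives a range rather than a single number.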
Similarly, the conversion of (S)-malate to oxaloacetate is catalyzed in eukaryotes by the NAD+-dependent EC 1.1.1.37, while most prokaryotes utilize a quinone-dependent enzyme, EC 1.1.5.4. A step with significant variability is the conversion of succinyl-CoA to succinate. Most organisms utilize EC 6.2.1.5, succinate–CoA ligase (ADP-forming) (despite its name, the enzyme operates in the pathway in the direction of ATP formation). In mammals a GTP-forming enzyme, succinate–CoA ligase (GDP-forming) (EC 6.2.1.4), also operates. The level of utilization of each isoform is tissue dependent. In some acetate-producing bacteria, such as Acetobacter aceti, an entirely different enzyme catalyzes this conversion – EC 2.8.3.18, succinyl-CoA:acetate CoA-transferase. This specialized enzyme links the TCA cycle with acetate metabolism in these organisms. Some bacteria, such as Helicobacter pylori, employ yet another enzyme for this conversion – succinyl-CoA:acetoacetate CoA-transferase (EC 2.8.3.5).
blade from a Caterpillar D8 to a Sherman. The later M1 dozer blade was standardized to fit any Sherman with VVSS suspension, and the M1A1 would fit the wider HVSS. Some M4s made for the Engineer Corps had the blades fitted permanently and the turrets removed. In the early stages of the 1944 Battle of Normandy, before the Culin hedgerow cutter, breaking through the bocage hedgerows relied heavily on Sherman dozers.
M4 Doozit: Engineer Corps Sherman dozer with a demolition charge on a wooden platform and the T40 Whizbang rocket launcher (the Doozit did not see combat, but the Whizbang did).
Bridgelayer: The US field-converted a few M4s in Italy with an A-frame-supported bridge and a heavy rear counterweight to make the Mobile Assault Bridge. British bridging developments for the Sherman included the fascine (used by the 79th Armoured Division), Crib, Twaby Ark, Octopus, Plymouth (Bailey bridge), and AVRE (SBG bridge).
Mine-clearing: British conversions included the Sherman Crab. The US developed an extensive array of experimental types:
T15/E1/E2: Series of mine-resistant Shermans based on the T14 kit. Cancelled at war's end.
Mine Exploder T1E1 Roller (Earthworm): Three sets of six discs made from armor plate.
Mine Exploder T1E2 Roller: Two forward units with seven discs only. Experimental.
Mine Exploder T1E3/M1 Roller (Aunt Jemima): Two forward units with five 10-foot discs. The most widely used T1 variant, adopted as the M1.
Mine Exploder T1E4 Roller: 16 discs.
Mine Exploder T1E5 Roller: T1E3/M1 with smaller wheels. Experimental.
Mine Exploder T1E6 Roller: T1E3/M1 with serrated-edged discs. Experimental.
Mine Exploder T2 Flail: British Crab I mine flail.
Mine Exploder T3 Flail: Based on the British Scorpion flail. Development stopped in 1943.
Mine Exploder T3E1 Flail: T3 with longer arms and a sand-filled rotor. Cancelled.
Mine Exploder T3E2 Flail: E1 variant with the rotor replaced by a steel drum of larger diameter. Development terminated at war's end.
Mine Exploder T4: British Crab II mine flail.
Mine Exploder T7: Frame with small rollers carrying two discs each. Abandoned.
Mine Exploder T8 (Johnny Walker): Steel plungers on a pivot frame designed to pound the ground. Vehicle steering was adversely affected.
Mine Exploder T9: 6-foot roller. Difficult to maneuver.
Mine Exploder T9E1: Lightened version, but proved unsatisfactory because it failed to explode all mines.
Mine Exploder T10: Remote-control unit designed to be controlled by the following tank. Cancelled.
Mine Exploder T11: Six forward-firing mortars to set off mines. Experimental.
Mine Exploder T12: 23 forward-firing mortars. Apparently effective, but cancelled.
Mine Exploder T14: Direct modification to a Sherman tank, with upgraded belly armor and reinforced tracks. Cancelled.
Mine Excavator T4: Plough device. Developed during 1942, but abandoned.
Mine Excavator T5/E1/E2: T4 variant with a V-shaped plough; the E1/E2 were further improvements.
Mine Excavator T5E3: T5E1/E2 rigged to the hydraulic lift mechanism from the M1 dozer kit to control depth.
Mine Excavator T6: Based on the V-shaped T5, but unable to control depth.
Mine Excavator T2/E1/E2: Based on the T4/T5, but rigged to the hydraulic lift mechanism from the M1 dozer kit to control depth.
M60
M60A1 AVLB - Armored Vehicle Launched Bridge, a scissors bridge on the M60A1 chassis.
M60 AVLM - Armored Vehicle Launched MICLIC (Mine-Clearing Line Charge), a modified M60 AVLB with up to two MICLICs mounted over the rear of the vehicle.
M60 Panther - M60 modified into a remotely controlled mine-clearing tank. The turret is removed, the turret ring sealed, and the front of the vehicle fitted with mine rollers.
M728 CEV - M60A1-based Combat Engineer Vehicle fitted with a folding A-frame crane and winch attached to the front of the turret, and an M135 165 mm demolition gun. Commonly fitted with the D7 bulldozer blade or mine-clearing equipment.
M728A1 - Upgraded version of the M728 CEV.
M1
M1 Grizzly Combat Mobility Vehicle (CMV)
M1 Panther II Remote Controlled Mine Clearing Vehicle
M104 Wolverine Heavy Assault Bridge
M1 Assault Breacher Vehicle
Leopard 1
Biber (Beaver) armoured vehicle-launched bridge
Pionierpanzer 1
Pionierpanzer 2 Dachs (Badger) armoured engineer vehicle
Leopard 2
Panzerschnellbrücke 2 (bridge layer)
Pionierpanzer 3 Kodiak
T-55/54
T-54 Dozer - T-54 fitted with bulldozer blades for clearing soil, obstacles and snow.
ALT-55 - Bulldozer version of the T-55 with a large flat-plate superstructure, an angular concave dozer blade at the front and prominent hydraulic rams for the dozer blade. A T-55 hull fitted with an excavator body and armoured cab.
T-55 MARRS - Fitted with a Vickers armoured recovery vehicle kit. It has a large flat-plate turret with slightly chamfered sides, a vertical rear and a heavily chamfered front, and a large A-frame crane on the front of the turret. The crane has a cylindrical winch with the rope fed between the legs of the crane. A dozer blade is fitted to the hull front.
MT-55 or MTU-55 (Tankoviy Mostoukladchik) - Soviet designator for the Czechoslovakian MT-55A bridge-layer tank with scissors bridge.
MTU-12 (Tankoviy Mostoukladchik) - Bridge-layer tank with a 12 m single-span bridge that can carry 50 tonnes. The system entered service in 1955; today only a very small number remains in service. Combat weight: 34 tonnes.
MTU-20 (Ob'yekt 602) (Tankoviy Mostoukladchik) - The MTU-20 consists of a twin-treadway superstructure mounted on a modified T-54 tank chassis. Each treadway is made up of a box-type aluminum girder with a folding ramp attached to both ends to save space in the travel position. Because of this, the vehicle with the bridge on board is only 11.6 m long, but the overall span length is 20 m, an increase of about 62% over that of the older MTU-1. The bridge is launched by the cantilever method.
First the ramps are lowered and fully extended, then the treadways are moved forward, with the full load of the bridge resting on the forward support plate during launch. The span is moved out over the launching girder until the far end reaches the far bank; the near end is then lowered onto the near bank. This method of launching gives the bridgelayer a low silhouette, which makes it less vulnerable to detection and destruction. The MTU-20 is based on the T-55 chassis.
BTS-1 (Bronetankoviy Tyagach Sredniy - Medium Armoured Tractor) - Basically a turretless T-54A with a stowage basket.
BTS-1M - Improved or remanufactured BTS-1.
BTS-2 (Ob'yekt 9) (Bronetankoviy Tyagach Sredniy) - BTS-1 upgraded with a hoist and a small folding crane with a capacity of 3,000 kg. It was developed on the T-54 hull in 1951; series production started in 1955. The prototype Ob.9 had a commander's cupola with a DShK 1938/46 machine gun, but the production model has a square commander's hatch opening to the right. Combat weight: 32 tons. Only a very small number remains in service.
BTS-3 (Bronetankoviy Tyagach Sredniy) - JVBT-55A in service with the Soviet Army.
BTS-4 (Bronetankoviy Tyagach Sredniy) - Similar to the BTS-2 but with a snorkel. In the West generally known as the T-54T. There are many different models, based on the T-44, T-54, T-55 and T-62.
BTS-4B - Dozer-blade-equipped armoured recovery vehicle converted from the early odd-shaped-turret versions of the T-54.
BTS-4BM - Experimental version of the BTS-4B with the capacity to winch over the front of the vehicle.
IMR (Ob'yekt 616) (Inzhenernaya Mashina Razgrazhdeniya) - Combat engineer vehicle: a T-55 with its turret replaced by a hydraulically operated 2 t crane. The crane can also be fitted with a small bucket or a pair of pincer-type grabs for removing trees and other obstacles.
A hydraulically operated dozer blade mounts to the front of the hull; it can be used in a straight or V-configuration only. The IMR was developed in 1969 and entered service five years later.
SPK-12G (Samokhodniy Pod'yomniy Kran) - Heavy crane mounted on the T-55 chassis. Only two were built.
BMR-2 (Boyevaya Mashina Razminirovaniya) - Mine-clearing tank based on the T-55 chassis. This vehicle has no turret but a fixed superstructure, armed with an NSVT machine gun. It is fitted with a KMT-7 mine-clearing set and entered service around 1987, during the war in Afghanistan. An improved version of the BMR-2 has been seen fitted with a wide variety of mine roller designs.
T-64
BAT-2 - Fast combat engineering vehicle with the lower hull, small roadwheels and suspension of the T-64. The vehicle is powered by a V-64-4 multi-fuel diesel engine developing 700 hp, derived from the engine used on the T-72 main battle tank. The 40-ton tractor sports a very large, all-axis-adjustable V-shaped hydraulic dozer blade at the front, a single soil-ripper spike at the rear and a 2-ton crane on top. The crew compartment holds 8 persons (driver, commander and radio operator, plus a five-man sapper squad for dismounted tasks). The highly capable BAT-2 was designed to replace the old T-54/AT-T based BAT-M, but Warsaw Pact allies received only small numbers due to its high price, and the old and new vehicles served alongside each other.
T-72
IMR-2 (Inzhenernaya Mashina Razgrazhdeniya) - Combat engineering vehicle (CEV). It has a telescoping crane arm which can lift between 5 and 11 metric tons and uses pincers for uprooting trees. Pivoted at the front of the vehicle is a dozer blade that can be used in a V-configuration or as a straight dozer blade; when not required, it is raised clear of the ground. A mine-clearing system is mounted on the vehicle's rear.
IMR-2M1 - Simplified model without the mine-clearing system. Entered service in 1987.
IMR-2M2 - Improved version better suited to operations in dangerous situations, for example in contaminated areas. It entered service in 1990 and has a modified crane arm with a bucket instead of the pincers.
IMR-2MA - Latest version with a bigger operator's cabin armed with a 12.7 mm NSV machine gun.
Klin-1 - Remote-controlled IMR-2.
MTU-72 (Ob'yekt 632) (Tankovyj Mostoukladchik) - Bridge layer based on the T-72 chassis. The overall layout and operating method of the system are similar to those of the MTU-20 and MTU bridgelayers. The bridge, when laid, has an overall length of 20 meters, is 3.3 meters wide, can span a gap of 18 m and has a maximum capacity of 50,000 kg. By itself, the bridge weighs 6,400 kg. Laying the bridge takes 3 minutes; retrieval takes 8 minutes.
Some examples of armoured civilian heavy equipment are the IDF Caterpillar D9, American D7 TPK, Canadian D6 armoured bulldozer, cranes, graders, excavators, and M35 2-1/2 ton cargo truck. Militarized heavy equipment may also take on the form of traditional civilian equipment designed and built to unique military specifications. These vehicles typically sacrifice some depth of capability from civilian models in order to gain greater speed and independence from prime movers. Examples of this type of vehicle include high speed backhoes such as the Australian Army's High Mobility Engineering Vehicle (HMEV) from Thales or the Canadian Army's Multi-Purpose Engineer Vehicle (MPEV) from Arva. The main article for civilian heavy equipment is: Heavy equipment (construction) Armoured engineering vehicle Typically based on the platform of a main battle tank, these vehicles go by different names depending upon the country of use or manufacture. In the US the term "combat engineer vehicle (CEV)" is used, in the UK the terms "Armoured Vehicle Royal Engineers (AVRE)" or Armoured Repair and Recovery Vehicle (ARRV) are used, while in Canada and other commonwealth nations the term "armoured engineer vehicle (AEV)" is used. There is no set template for what such a vehicle will look like, yet likely features include a large dozer blade or mine ploughs, a large caliber demolition cannon, augers, winches, excavator arms and cranes or lifting booms. These vehicles are designed to directly conduct obstacle breaching operations and to conduct other earth-moving and engineering work on the battlefield. Good examples of this type of vehicle include the UK Trojan AVRE, the Russian IMR, and the US M728 Combat Engineer Vehicle. 
Although the term "armoured engineer vehicle" is used specifically to describe these multi-purpose tank based engineering vehicles, that term is also used more generically in British and Commonwealth militaries to describe all heavy tank based engineering vehicles used in the support of mechanized forces. Thus, "armoured engineer vehicle" used generically would refer to AEV, AVLB, Assault Breachers, and so on. Armoured earth mover Lighter and less multi-functional than the CEVs or AEVs described above, these vehicles are designed to conduct earth-moving work on the battlefield. These vehicles have greater high speed mobility than traditional heavy equipment and are protected against the effects of blast and fragmentation. Good examples are the American M9 ACE and the UK FV180 Combat Engineer Tractor. Breaching vehicle These vehicles are equipped with mechanical or other means for the breaching of man made obstacles. Common types of breaching vehicles include mechanical flails, mine plough vehicles, and mine roller vehicles. In some cases, these vehicles will also mount Mine-clearing line charges. Breaching vehicles may be either converted armoured fighting vehicles or purpose built vehicles. In larger militaries, converted AFV are likely to be used as assault breachers while the breached obstacle is still covered by enemy observation and fire, and then purpose built breaching vehicles will create additional lanes for following forces. Good examples of breaching vehicles include the US M1150 Assault Breacher Vehicle, the UK Aardvark JSFU, and the Singaporean Trailblazer. Bridging vehicles Several types of military bridging vehicles have been developed. An armoured vehicle-launched bridge (AVLB) is typically a modified tank hull converted to carry a bridge into battle in order to support crossing ditches, small waterways, or other gap obstacles. Another type of bridging vehicle is the truck launched bridge. 
The Soviet TMM bridging truck could carry and launch a 10-meter bridge that could be daisy-chained with other TMM bridges to cross larger obstacles. More recent developments have seen launching systems that can be mounted on either a tank or a truck, carrying bridges capable of supporting heavy main battle tanks. Earlier examples of bridging vehicles include a type in which a converted tank hull is itself the bridge. On these vehicles, the hull deck comprises the main portion of the treadway, while ramps extend from the front and rear of the vehicle to allow other vehicles to climb over the bridging vehicle and cross obstacles. An example of this type of armoured bridging vehicle was the Churchill Ark, used in the Second World War. Combat engineer section carriers Another type of CEV is the armoured fighting vehicle used to transport sappers (combat engineers), which can be fitted with a bulldozer blade and other mine-breaching devices. They are often used as APCs because of their carrying ability and heavy protection. They are usually armed with machine guns and grenade launchers, and are usually tracked to provide enough tractive force to push blades and rakes. Some examples are the U.S. M113 APC, IDF Puma, Nagmachon, Husky, and U.S. M1132 ESV (a Stryker variant). Military ferries and amphibious crossing vehicles One of the major tasks of military engineering is crossing major rivers. Several military engineering vehicles have been developed in various nations to achieve this task. One of the more common types is the amphibious ferry, such as the M3 Amphibious Rig. These vehicles are self-propelled on land, can transform into raft-type ferries when in the water, and often multiple vehicles can connect to form larger rafts or floating bridges.
Other types of military ferries, such as the Soviet Plavayushij Transportyor - Srednyj, are able to load while still on land and transport other vehicles cross country and over water. In addition to amphibious crossing vehicles, military engineers may also employ several types of boats. Military assault boats are small boats propelled by oars or an outboard motor and used to ferry dismounted infantry across water. Tank-based combat engineering vehicles Most CEVs are armoured fighting vehicles that may be based on a tank chassis and have special attachments in order to breach obstacles. Such attachments |
from the riches acquired in the Spanish colonisation of the Americas, but, in time, also carried the main burden of military expenses of the united Spanish kingdoms. After Isabella's death, Ferdinand II personally ruled both kingdoms. By virtue of descent from his maternal grandparents, Ferdinand II of Aragon and Isabella I of Castile, in 1516 Charles I of Spain became the first king to rule the Crowns of Castile and Aragon simultaneously by his own right. Following the death of his paternal (House of Habsburg) grandfather, Maximilian I, Holy Roman Emperor, he was also elected Charles V, Holy Roman Emperor, in 1519. Over the next few centuries, the Principality of Catalonia was generally on the losing side of a series of wars that led steadily to an increased centralization of power in Spain. Despite this fact, between the 16th and 18th centuries, the participation of the political community in the local and the general Catalan government grew, while the kings remained absent and its constitutional system continued to consolidate. Tensions between Catalan institutions and the Monarchy began to arise. The large and burdensome presence of the Spanish royal army in the Principality due to the Franco-Spanish War led to an uprising of peasants, provoking the Reapers' War (1640–1652), which saw Catalonia rebel (briefly as a republic led by the chairman of the Generalitat, Pau Claris) with French help against the Spanish Crown for overstepping Catalonia's rights during the Thirty Years' War. Within a brief period France took full control of Catalonia. Most of Catalonia was reconquered by the Spanish Monarchy but Catalan rights were recognised. Roussillon was lost to France by the Treaty of the Pyrenees (1659). The most significant conflict concerning the governing monarchy was the War of the Spanish Succession, which began when the childless Charles II of Spain, the last Spanish Habsburg, died without an heir in 1700. 
Charles II had chosen as his successor Philip V of Spain, from the French House of Bourbon. Catalonia, like the other territories that formed the Crown of Aragon, rose up in support of the Austrian Habsburg pretender Charles VI, Holy Roman Emperor, in his claim to the Spanish throne as Charles III of Spain. The fight between the houses of Bourbon and Habsburg for the Spanish Crown split Spain and Europe. The fall of Barcelona on 11 September 1714 to the Bourbon king Philip V militarily ended the Habsburg claim to the Spanish Crown, which became legal fact in the Treaty of Utrecht. Philip felt that he had been betrayed by the Catalan Courts, as they had initially sworn loyalty to him when he presided over them in 1701. In retaliation for the betrayal, and inspired by the French absolutist style of government, the first Bourbon king introduced the Nueva Planta decrees, which in 1716 incorporated the lands of the Crown of Aragon, including the Principality of Catalonia, as provinces under the Crown of Castile, terminating their separate institutions, laws and rights, as well as their politics, within a united kingdom of Spain. From the second third of the 18th century onwards, Catalonia carried out a successful process of proto-industrialization, reinforced in the last quarter of the century when Castile's trade monopoly with the American colonies ended. Late modern history At the beginning of the nineteenth century, Catalonia was severely affected by the Napoleonic Wars. In 1808, it was occupied by French troops; the resistance against the occupation eventually developed into the Peninsular War. The rejection of French dominion was institutionalized with the creation of "juntas" (councils) which, remaining loyal to the Bourbons, exercised the sovereignty and representation of the territory in the absence of the old institutions.
Napoleon took direct control of Catalonia to establish order, creating the Government of Catalonia under the rule of Marshal Augereau and briefly making Catalan an official language again. Between 1812 and 1814, Catalonia was annexed to France and organized as four departments. The French troops evacuated Catalan territory at the end of 1814. After the Bourbon restoration in Spain and the death of the absolutist king Ferdinand VII, the Carlist Wars erupted against the newborn liberal state of Isabella II. Catalonia was divided: the coast and the most industrialized areas supported liberalism, while many inland areas were in the hands of the Carlists, who proposed to reestablish in the ancient realms of the Crown of Aragon the institutional systems suppressed by the Nueva Planta decrees. In the second third of the 19th century, Catalonia became an industrial center. This process was boosted by, amongst other things, national protectionist policies (although the policy of the Spanish government during those times changed many times between free trade and protectionism) and the conditions of proto-industrialization of the prior two centuries in the Catalan urban areas and countryside. Over the century, the textile industry flourished in urban areas and in the countryside, usually in the form of company towns. To this day Catalonia remains one of the most industrialised areas of Spain. In 1832 the Bonaplata factory, the first in the country to make use of the steam engine, was inaugurated in Barcelona. In 1848 the first railway in the Iberian Peninsula was built, between Barcelona and Mataró. During those years, Barcelona was the focus of important revolutionary uprisings, called "bullangues", causing a conflictive relationship between many sectors of Catalan society and the central government; in Catalonia a republican current began to develop, and, inevitably, many Catalans favored a federalized Spain.
Meanwhile, the Catalan language saw a cultural renaissance (the Renaixença) among the popular classes and the bourgeoisie. After the fall of the First Spanish Republic (1873-1874) and the restoration of the Bourbon dynasty (1874), Catalan nationalism began to organize politically. The Anarchists had been active throughout the early 20th century, founding the CNT trade union in 1910 and achieving one of the first eight-hour workdays in Europe in 1919. Growing resentment of conscription and of the military culminated in the Tragic Week in Barcelona in 1909. Until the 1930s, under the hegemony of the Regionalist League, Catalonia gained and lost a degree of administrative unity for the first time in the modern era. In 1914, the four Catalan provinces were authorized to create a commonwealth (Catalan: Mancomunitat de Catalunya) without any legislative power or specific political autonomy, which carried out an ambitious program of modernization but was disbanded in 1925 by the dictatorship of Primo de Rivera (1923-1930). During the last years of the dictatorship, Barcelona hosted the 1929 International Exposition, while Spain began to suffer an economic crisis. After the fall of the dictator and a brief proclamation of the Catalan Republic during the events which led to the proclamation of the Second Spanish Republic (1931-1939), Catalonia received its first Statute of Autonomy from the Spanish Republic's Parliament, granting it a considerable degree of self-government and establishing an autonomous body, the Generalitat of Catalonia, which included a parliament, a government and a court of appeal; the left-wing independentist leader Francesc Macià was appointed its first president. The governments of the Republican Generalitat, led by the Republican Left of Catalonia (ERC) members Francesc Macià (1931-1933) and Lluís Companys (1933-1940), sought to implement an advanced and progressive social agenda despite internal difficulties.
This period was marked by political unrest, the effects of the economic crisis and their social repercussions. The Statute of Autonomy was suspended in 1934 following the Events of 6 October in Barcelona, a response to the accession to the government of the Republic of the right-wing Spanish nationalist party CEDA, considered close to fascism. After the electoral victory of the Popular Front in February 1936, the Government of Catalonia was pardoned and self-government restored. Spanish Civil War (1936–1939) and Franco's rule (1939–1975) The defeat of the military rebellion against the Republican government in Barcelona placed Catalonia firmly on the Republican side of the Spanish Civil War. During the war, there were two rival powers in Catalonia: the de jure power of the Generalitat and the de facto power of the armed popular militias. Violent confrontations between the workers' parties (the CNT-FAI and POUM against the PSUC) culminated in the defeat of the former in 1937. The situation resolved itself progressively in favor of the Generalitat, but at the same time the Generalitat was partially losing its autonomous power within Republican Spain. In 1938 Franco's troops broke the Republican territory in two, isolating Catalonia from the rest of the Republic. The defeat of the Republican army in the Battle of the Ebro led in 1938 and 1939 to the occupation of Catalonia by Franco's forces. The defeat of the Spanish Republic in the Spanish Civil War brought to power the dictatorship of Francisco Franco, whose first ten-year rule was particularly violent, autocratic and repressive in political, cultural, social and economic terms. In Catalonia, any kind of public activity associated with Catalan nationalism, republicanism, anarchism, socialism, liberalism, democracy or communism, including the publication of books on those subjects or simply discussion of them in open meetings, was banned.
Franco's regime banned the use of Catalan in government-run institutions and during public events, and the Catalan institutions of self-government were abolished. The pro-Republic president of Catalonia, Lluís Companys, was taken to Spain from his exile in German-occupied France and was tortured and executed in the Montjuïc Castle of Barcelona for the crime of "military rebellion". During the later stages of Francoist Spain, certain folkloric and religious celebrations in Catalan resumed and were tolerated. Use of Catalan in the mass media had been forbidden but was permitted from the early 1950s in the theatre. Despite the ban during the first years and the difficulties of the following period, publishing in Catalan continued throughout Franco's rule. The years after the war were extremely hard. Catalonia, like many other parts of Spain, had been devastated by the war. Recovery from the war damage was slow and made more difficult by the international trade embargo and the autarkic politics of Franco's regime. By the late 1950s the region had recovered its pre-war economic levels, and in the 1960s it was the second fastest growing economy in the world in what became known as the Spanish miracle. During this period there was spectacular growth of industry and tourism in Catalonia that drew large numbers of workers to the region from across Spain and made the area around Barcelona one of Europe's largest industrial metropolitan areas. Transition and democratic period (1975–present) After Franco's death in 1975, Catalonia voted for the adoption of a democratic Spanish Constitution in 1978, in which Catalonia recovered political and cultural autonomy, restoring the Generalitat (in exile since the end of the Civil War in 1939) in 1977 and adopting a new Statute of Autonomy in 1979, which defined Catalonia as a "nationality".
The first election to the Parliament of Catalonia under this Statute gave the Catalan presidency to Jordi Pujol, leader of Convergència i Unió (CiU), a center-right Catalan nationalist electoral coalition. Pujol would hold the position until 2003. Throughout the 1980s and 1990s, the institutions of Catalan autonomy were deployed, among them an autonomous police force (the Mossos d'Esquadra, in 1983) and the broadcasting network Televisió de Catalunya, whose first channel, TV3, was created in 1983. An extensive program of normalization of the Catalan language was carried out. Today, Catalonia remains one of the most economically dynamic communities of Spain. The Catalan capital and largest city, Barcelona, is a major international cultural centre and a major tourist destination. In 1992, Barcelona hosted the Summer Olympic Games. In November 2003, elections to the Parliament of Catalonia gave the government to a left-wing Catalanist coalition formed by the Socialists' Party of Catalonia (PSC-PSOE), Republican Left of Catalonia (ERC) and Initiative for Catalonia Greens (ICV), and the socialist Pasqual Maragall was appointed president. The new government drafted a new version of the Statute of Autonomy, with the aim of consolidating and expanding certain aspects of self-government. The new Statute of Autonomy of Catalonia, approved after a referendum in 2006, was contested by important sectors of Spanish society, especially by the conservative People's Party, which referred the law to the Constitutional Court of Spain. In 2010, the Court struck down some of the articles, including those that established an autonomous Catalan system of justice, improved aspects of financing, a new territorial division, the status of the Catalan language and the symbolic declaration of Catalonia as a nation. This decision was severely contested by large sectors of Catalan society, which increased demands for independence.
Independence movement A controversial independence referendum was held in Catalonia on 1 October 2017, using a disputed voting process. It was declared illegal and suspended by the Constitutional Court of Spain because it breached the 1978 Constitution. Subsequent developments saw, on 27 October 2017, a symbolic declaration of independence by the Parliament of Catalonia, the enforcement of direct rule by the Spanish government through the use of Article 155 of the Constitution, the dismissal of the Executive Council and the dissolution of the Parliament, with a snap regional election called for 21 December 2017, which ended in a victory for pro-independence parties. Former President Carles Puigdemont and five former cabinet ministers fled Spain and took refuge in other European countries (such as Belgium, in Puigdemont's case), whereas nine other cabinet members, including vice-president Oriol Junqueras, were sentenced to prison on various charges of rebellion, sedition and misuse of public funds. Quim Torra became the 131st President of the Government of Catalonia on 17 May 2018, after the Spanish courts blocked three other candidates. In 2018, the Assemblea Nacional Catalana joined the Unrepresented Nations and Peoples Organization (UNPO) on behalf of Catalonia. On 14 October 2019, the Spanish Supreme Court convicted several Catalan political leaders involved in organizing the referendum on Catalonia's independence on charges ranging from sedition to misuse of public funds, with sentences ranging from 9 to 13 years in prison. This decision sparked demonstrations around Catalonia. Geography Climate The climate of Catalonia is diverse. The populated areas lying by the coast in Tarragona, Barcelona and Girona provinces feature a hot-summer Mediterranean climate (Köppen Csa). The inland part (including the Lleida province and the inner part of Barcelona province) shows a mostly Mediterranean climate (Köppen Csa).
The Pyrenean peaks have a continental (Köppen D) or even Alpine climate (Köppen ET) at the highest summits, while the valleys have a maritime or oceanic climate sub-type (Köppen Cfb). In the Mediterranean area, summers are dry and hot with sea breezes, and the maximum temperature is around . Winter is cool or slightly cold depending on the location. It snows frequently in the Pyrenees, and it occasionally snows at lower altitudes, even by the coastline. Spring and autumn are typically the rainiest seasons, except for the Pyrenean valleys, where summer is typically stormy. The inland part of Catalonia is hotter and drier in summer. Temperature may reach , some days even . Nights are cooler there than at the coast, with temperatures of around . Fog is not uncommon in valleys and plains; it can be especially persistent, with freezing drizzle episodes and subzero temperatures during winter, mainly along the Ebro and Segre valleys and in the Plain of Vic. Topography Catalonia has a marked geographical diversity, considering the relatively small size of its territory. The geography is conditioned by the Mediterranean coast, with of coastline, and the large relief units of the Pyrenees to the north. The Catalan territory is divided into three main geomorphological units: The Pyrenees: a mountainous formation that connects the Iberian Peninsula with the European continental territory, located in the north of Catalonia; The Catalan Coastal mountain ranges or the Catalan Mediterranean System: an alternation of elevations and plains parallel to the Mediterranean coast; The Catalan Central Depression: a structural unit that forms the eastern sector of the Valley of the Ebro. The Catalan Pyrenees represent almost half the length of the Pyrenees, extending more than .
Traditionally, a distinction is made between the Axial Pyrenees (the main part) and the Pre-Pyrenees (south of the Axial), mountainous formations parallel to the main ranges but lower, less steep and of a different geological formation. The highest mountain of Catalonia, located north of the comarca of Pallars Sobirà, is the Pica d'Estats (3,143 m), followed by the Puigpedrós (2,914 m). The Serra del Cadí comprises the highest peaks in the Pre-Pyrenees and forms the southern boundary of the Cerdanya valley. The Central Catalan Depression is a plain located between the Pyrenees and the Pre-Coastal Mountains. Elevation ranges from . The plains and the water that descends from the Pyrenees have made it fertile territory for agriculture, and numerous irrigation canals have been built. Another major plain is the Empordà, located in the northeast. The Catalan Mediterranean system is based on two ranges running roughly parallel to the coast (southwest–northeast), called the Coastal and the Pre-Coastal Ranges. The Coastal Range is both the shorter and the lower of the two, while the Pre-Coastal is greater in both length and elevation. Areas within the Pre-Coastal Range include Montserrat, Montseny and the Ports de Tortosa-Beseit. Lowlands alternate with the Coastal and Pre-Coastal Ranges. The Coastal Lowland is located to the east of the Coastal Range, between it and the coast, while the Pre-Coastal Lowlands are located inland, between the Coastal and Pre-Coastal Ranges, and include the Vallès and Penedès plains. Flora and fauna Catalonia is a showcase of European landscapes on a small scale. Just over of territory hosts a variety of substrates, soils, climates, orientations, altitudes and distances to the sea. The area is of great ecological diversity and has a remarkable wealth of landscapes, habitats and species. The fauna of Catalonia comprises a minority of animals endemic to the region and a majority of non-native animals.
Much of Catalonia enjoys a Mediterranean climate (except mountain areas), so many of the animals that live there are adapted to Mediterranean ecosystems. Of mammals, there are plentiful wild boar and red foxes, as well as roe deer and, in the Pyrenees, the Pyrenean chamois. Other large species, such as the bear, have recently been reintroduced. The waters of the Balearic Sea are rich in biodiversity, including oceanic megafauna: various types of whales (such as fin, sperm, and pilot whales) and dolphins live within the area. Hydrography Most of Catalonia belongs to the Mediterranean Basin. The Catalan hydrographic network consists of two important basins, that of the Ebro and the one comprising the internal basins of Catalonia (covering 46.84% and 51.43% of the territory respectively), both of which flow into the Mediterranean. Furthermore, there is the Garona river basin, which flows to the Atlantic Ocean but covers only 1.73% of the Catalan territory. The hydrographic network can be divided into two sectors: a western slope, that of the Ebro river, and an eastern slope made up of minor rivers that flow into the Mediterranean along the Catalan coast. The first slope provides an average of per year, while the second provides an average of only /year. The difference is due to the large contribution of the Ebro river, of which the Segre is an important tributary. Moreover, Catalonia has a relative wealth of groundwater, although it is unevenly distributed among the comarques, given the complex geological structure of the territory. In the Pyrenees there are many small lakes, remnants of the ice age. The biggest are the lake of Banyoles and the recently recovered lake of Ivars. The Catalan coast is almost rectilinear, with a length of and few landforms; the most relevant are the Cap de Creus and the Gulf of Roses to the north and the Ebro Delta to the south.
The Catalan Coastal Range hugs the coastline, and it is split into two segments, one between L'Estartit and the town of Blanes (the Costa Brava), and the other to the south, at the Costes del Garraf. The principal rivers in Catalonia are the Ter, the Llobregat, and the Ebro (Catalan: ), all of which run into the Mediterranean. Anthropic pressure and protection of nature The majority of the Catalan population is concentrated in 30% of the territory, mainly in the coastal plains. Intensive agriculture, livestock farming and industrial activities have been accompanied by a massive tourist influx (more than 20 million annual visitors) and by rapid urbanization, even metropolisation, which has led to strong urban sprawl: two thirds of Catalans live in the urban area of Barcelona, while the proportion of urban land increased from 4.2% in 1993 to 6.2% in 2009, a growth of 48.6% in sixteen years, complemented by a dense network of transport infrastructure. This has been accompanied by a certain agricultural abandonment (a 15% decrease in the cultivated area of Catalonia between 1993 and 2009) and a global threat to the natural environment. Human activities have also put some animal species at risk, or even led to their disappearance from the territory, such as the gray wolf and probably the brown bear of the Pyrenees. The pressure created by this model of life means that the country's ecological footprint exceeds its administrative area. Faced with these problems, the Catalan authorities initiated several measures whose purpose is to protect natural ecosystems. Thus, in 1990, the Catalan government created the Nature Conservation Council (Catalan: ), an advisory body with the aim to study, protect and manage the natural environments and landscapes of Catalonia. In addition, the Generalitat carried out the Plan of Spaces of Natural Interest ( or PEIN) in 1992, while eighteen Natural Spaces of Special Protection ( or ENPE) have been instituted.
There is a National Park, Aigüestortes i Estany de Sant Maurici; fourteen Natural Parks, Alt Pirineu, Aiguamolls de l'Empordà, Cadí-Moixeró, Cap de Creus, Sources of Ter and Freser, Collserola, Ebro Delta, Ports, Montgrí, Medes Islands and Baix Ter, Montseny, Montserrat, Sant Llorenç del Munt and l'Obac, Serra de Montsant and the Garrotxa Volcanic Zone; as well as three Natural Places of National Interest ( or PNIN), the Pedraforca, the Poblet Forest and the Albères. Politics After Franco's death in 1975 and the adoption of a democratic constitution in Spain in 1978, Catalonia recovered and extended the powers that it had gained in the Statute of Autonomy of 1932 but lost with the fall of the Second Spanish Republic at the end of the Spanish Civil War in 1939. This autonomous community has gradually achieved more autonomy since the approval of the Spanish Constitution of 1978. The Generalitat holds exclusive jurisdiction in education, health, culture, environment, communications, transportation, commerce, public safety and local government, and only shares jurisdiction with the Spanish government in justice. In all, some analysts argue that formally the current system grants Catalonia "more self-government than almost any other corner in Europe". The support for Catalan nationalism ranges from a demand for further autonomy and the federalisation of Spain to the desire for independence from the rest of Spain, expressed by Catalan independentists. The first survey following the Constitutional Court ruling that cut back elements of the 2006 Statute of Autonomy, published by La Vanguardia on 18 July 2010, found that 46% of voters would support independence in a referendum. In February of the same year, a poll by the Open University of Catalonia gave more or less the same results. Other polls have shown lower support for independence, ranging from 40% to 49%.
Although support for independence exists throughout the territory, it is significantly higher in the hinterland and the northeast, away from the more populated coastal areas such as Barcelona. Since 2011, when the question began to be regularly surveyed by the governmental Center for Public Opinion Studies (CEO), support for Catalan independence has been on the rise. According to the CEO opinion poll of July 2016, 47.7% of Catalans would vote for independence and 42.4% against it, while on the question of preferences, according to the CEO opinion poll of March 2016, 57.2% claimed to be "absolutely" or "fairly" in favour of independence. Other polls show more variable results; according to the Spanish CIS, as of December 2016, 47% of Catalans rejected independence and 45% supported it. In hundreds of non-binding local referendums on independence, organised across Catalonia from 13 September 2009, a large majority voted for independence, although critics argued that the polls were mostly held in pro-independence areas. In December 2009, 94% of those voting backed independence from Spain, on a turn-out of 25%. The final local referendum was held in Barcelona, in April 2011. On 11 September 2012, a pro-independence march drew a crowd estimated at 600,000 by the Spanish Government, 1.5 million by the Guàrdia Urbana de Barcelona, and 2 million by its promoters; meanwhile, poll results revealed that half the population of Catalonia supported secession from Spain. Two major factors were the Spanish Constitutional Court's 2010 decision to declare part of the 2006 Statute of Autonomy of Catalonia unconstitutional, and the fact that Catalonia contributes 19.49% of the central government's tax revenue but receives only 14.03% of the central government's spending.
Parties that consider themselves either Catalan nationalist or independentist have been present in all Catalan governments since 1980. The largest Catalan nationalist party, Convergence and Union, ruled Catalonia from 1980 to 2003, and returned to power in the 2010 election. Between 2003 and 2010, a leftist coalition, composed of the Catalan Socialists' Party, the pro-independence Republican Left of Catalonia and the leftist-environmentalist Initiative for Catalonia-Greens, implemented policies that widened Catalan autonomy. In the 25 November 2012 Catalan parliamentary election, sovereigntist parties supporting a secession referendum gathered 59.01% of the votes and held 87 of the 135 seats in the Catalan Parliament. Parties supporting independence from the rest of Spain obtained 49.12% of the votes and a majority of 74 seats. Artur Mas, then the president of Catalonia, organised early elections that took place on 27 September 2015. In these elections, Convergència and Esquerra Republicana decided to run together, presenting themselves under a coalition named "Junts pel Sí" (Catalan for "Together for Yes"). "Junts pel Sí" won 62 seats and received the most votes, and the CUP (Candidatura d'Unitat Popular, a far-left independentist party) won another 10, so the independentist parties together held 72 seats, an absolute majority of seats but not of individual votes, comprising 47.74% of the total. Statute of Autonomy The Statute of Autonomy of Catalonia is the fundamental organic law, second only to the Spanish Constitution from which the Statute originates. In the Spanish Constitution of 1978 Catalonia, along with the Basque Country and Galicia, was defined as a "nationality". The same constitution gave Catalonia the automatic right to autonomy, which resulted in the Statute of Autonomy of Catalonia of 1979.
Both the 1979 Statute of Autonomy and the current one, approved in 2006, state that "Catalonia, as a nationality, exercises its self-government constituted as an Autonomous Community in accordance with the Constitution and with the Statute of Autonomy of Catalonia, which is its basic institutional law, always under the law in Spain". The Preamble of the 2006 Statute of Autonomy of Catalonia states that the Parliament of Catalonia has defined Catalonia as a nation, but that "the Spanish Constitution recognizes Catalonia's national reality as a nationality". While the Statute was approved and sanctioned by both the Catalan and Spanish parliaments, and later by referendum in Catalonia, it has been subject to a legal challenge by the surrounding autonomous communities of Aragon, the Balearic Islands and Valencia, as well as by the conservative People's Party. The objections are based on various issues, such as disputed cultural heritage but, especially, on the Statute's alleged breaches of the principle of "solidarity between regions" in fiscal and educational matters enshrined by the Constitution. Spain's Constitutional Court assessed the disputed articles and, on 28 June 2010, issued its judgment on the principal allegation of unconstitutionality presented by the People's Party in 2006. The judgment granted clear passage to 182 of the 223 articles that make up the fundamental text. The court approved 73 of the 114 articles that the People's Party had contested, while declaring 14 articles unconstitutional in whole or in part and imposing a restrictive interpretation on 27 others. The court accepted the specific provision that described Catalonia as a "nation", but ruled that it was a historical and cultural term with no legal weight, and that Spain remained the only nation recognised by the constitution.
Government and law The Catalan Statute of Autonomy establishes that Catalonia, as an autonomous community, is organised politically through the Generalitat of Catalonia (Catalan: ), composed of the Parliament, the Presidency of the Generalitat, the Government or Executive Council and the other institutions established by the Parliament, among them the Ombudsman (), the Office of Auditors (), the Council for Statutory Guarantees () and the Audiovisual Council of Catalonia (). The Parliament of Catalonia (Catalan: ) is the unicameral legislative body of the Generalitat and represents the people of Catalonia. Its 135 members (diputats) are elected by universal suffrage to serve for a four-year period. According to the Statute of Autonomy, it has powers to legislate over devolved matters such as education, health, culture, and internal institutional and territorial organization, as well as to nominate the President of the Generalitat and to control the Government, the budget and other affairs. The last Catalan election was held on 14 February 2021, and its current speaker (president) is Laura Borràs, incumbent since 12 March 2018. The President of the Generalitat of Catalonia (Catalan: ) is the highest representative of Catalonia, and is also responsible for leading the government's action, presiding over the Executive Council. Since the restoration of the Generalitat on the return of democracy in Spain, the Presidents of Catalonia have been Josep Tarradellas (1977–1980, president in exile since 1954), Jordi Pujol (1980–2003), Pasqual Maragall (2003–2006), José Montilla (2006–2010), Artur Mas (2010–2016), Carles Puigdemont (2016–2017) and, after the imposition of direct rule from Madrid, Quim Torra (2018–2020) and Pere Aragonès (2020–). The Executive Council (Catalan: ) or Government () is the body responsible for the government of the Generalitat; it holds executive and regulatory power and is accountable to the Catalan Parliament.
It comprises the President of the Generalitat, the First Minister () or the Vice President, and the ministers () appointed by the president. Its seat is the Palau de la Generalitat, Barcelona. The current government is a coalition of two parties, the Republican Left of Catalonia (ERC) and Together for Catalonia (Junts), and is made up of 14 ministers, including the Vice President, alongside the president and a secretary of government. Security forces and Justice Catalonia has its own police force, the (officially called ), whose origins date back to the 18th century. Since 1980 they have been under the command of the Generalitat, and since 1994 they have expanded in number in order to replace the national Civil Guard and National Police Corps, which report directly to the Spanish Ministry of the Interior. The national bodies retain personnel within Catalonia to exercise functions of national scope, such as overseeing ports, airports, coasts, international borders, customs offices, the identification of documents and arms control, immigration control, terrorism prevention and arms trafficking prevention, amongst others. Most of the justice system is administered by national judicial institutions; the highest body and court of last instance in the Catalan jurisdiction, integrated into the Spanish judiciary, is the High Court of Justice of Catalonia. The criminal justice system is uniform throughout Spain, while civil law is administered separately within Catalonia. The civil laws that are subject to autonomous legislation have been codified in the Civil Code of Catalonia () since 2002. Navarre, the Basque Country and Catalonia are the Spanish communities with the highest degree of autonomy in terms of law enforcement. Administrative divisions Catalonia is organised territorially into provinces, further subdivided into comarques and municipalities.
The 2006 Statute of Autonomy of Catalonia establishes the administrative organisation of three local authorities: vegueries, comarques, and municipalities. Provinces Catalonia is divided administratively into four provinces, the governing body of which is the Provincial Deputation (, ). The four provinces and their populations are: Province of Barcelona: 5,507,813 population Province of Girona: 752,026 population Province of Lleida: 439,253 population Province of Tarragona: 805,789 population Comarques Comarques (singular: "comarca") are entities composed of municipalities to manage their responsibilities and services. The current regional division has its roots in a decree of the Generalitat de Catalunya of 1936, in effect until 1939, when it was suppressed by Franco. In 1987 the Catalan Government re-established the comarcal division, and in 1988 three new comarques were added (Alta Ribagorça, Pla d'Urgell and Pla de l'Estany). In 2015 an additional comarca, the Moianès, was created. At present there are 41, excluding Aran. Every comarca is administered by a comarcal council (). The Aran Valley (Val d'Aran), previously considered a comarca, obtained a particular status within Catalonia in 1990 due to its differences in culture and language, as Occitan is the native language of the valley; it is administered by a body known as the (General Council of Aran). Since 2015 it has been defined as a "unique territorial entity", while the powers of the Conselh Generau were expanded. Municipalities There are at present 947 municipalities () in Catalonia. Each municipality is run by a council () elected every four years by the residents in local elections. The council consists of a number of members () depending on population, who elect the mayor ( or ). Its seat is the town hall (, or ). Vegueries The vegueria is a new type of division defined as a specific territorial area for the exercise of government and inter-local cooperation with legal personality.
The current Statute of Autonomy states that vegueries are intended to supersede provinces in Catalonia and take over many of the functions of the comarques. The territorial plan of Catalonia () provided for six general functional areas, but was amended by Law 24/2001, of 31 December, recognizing the Alt Pirineu i Aran as a new functional area differentiated from Ponent. On 14 July 2010 the Catalan Parliament approved the creation of the functional area of the Penedès. Alt Pirineu i Aran: Alta Ribagorça, Alt Urgell, Cerdanya, Pallars Jussà, Pallars Sobirà and Val d'Aran. Àmbit Metropolità de Barcelona: Baix Llobregat, Barcelonès, Garraf, Maresme, Vallès Oriental and Vallès Occidental. Camp de Tarragona: Tarragonès, Alt Camp, Baix Camp, Conca de Barberà and Priorat. Comarques gironines: Alt Empordà, Baix Empordà, Garrotxa, Gironès, Pla de l'Estany, La Selva and Ripollès. Comarques centrals: Anoia (8 municipalities of 33), Bages, Berguedà, Osona and Solsonès. Penedès: Alt Penedès, Baix Penedès, Anoia (25 municipalities of 33) and Garraf. Ponent: Garrigues, Noguera, Segarra, Segrià, Pla d'Urgell and Urgell. Terres de l'Ebre: Baix Ebre, Montsià, Ribera d'Ebre and Terra Alta. Economy A highly industrialized land, Catalonia had a nominal GDP of €228 billion in 2018 (second after the community of Madrid, €230 billion) and a per capita GDP of €30,426 ($32,888), behind Madrid (€35,041), the Basque Country (€33,223), and Navarre (€31,389). That year, GDP growth was 2.3%. In recent years, and increasingly following the unilateral declaration of independence in 2017, there has been a net outflow of companies based in Catalonia to other autonomous communities of Spain. From the 2017 independence referendum until the end of 2018, for example, Catalonia lost 5,454 companies to other parts of Spain (mainly Madrid), 2,359 in 2018 alone, while gaining 467 new ones from the rest of the country during 2018.
Catalonia's long-term credit rating is BB (Non-Investment Grade) according to Standard & Poor's, Ba2 (Non-Investment Grade) according to Moody's, and BBB- (Low Investment Grade) according to Fitch Ratings. Catalonia's rating is tied for worst with between 1 and 5 other autonomous communities of Spain, depending on the rating agency. The city of Barcelona occupied the eighth position among the best world cities in which to live, work, research and visit in 2021, according to the report "The World's Best Cities 2021", prepared by Resonance Consultancy and released on 3 January. The Catalan capital, despite the current moment of crisis, is also one of the European bases of reference for start-ups and the fifth city in the world in which to establish one of these companies, behind London, Berlin, Paris and Amsterdam, according to the EU-Startups 2020 study. Barcelona is behind London, New York, Paris, Moscow, Tokyo, Dubai and Singapore and ahead of Los Angeles and Madrid. In the context of the financial crisis of 2007–2008, Catalonia was expected to suffer a recession amounting to almost a 2% contraction of its regional GDP in 2009. Catalonia's debt in 2012 was the highest of all Spain's autonomous communities, reaching €13,476 million, i.e. 38% of the total debt of the 17 autonomous communities, but in recent years its economy recovered, and GDP grew by 3.3% in 2015. Catalonia is among the country subdivisions with a GDP over US$100 billion and is a member of the Four Motors for Europe organisation. The distribution of sectors is as follows: Primary sector: 3%. The amount of land devoted to agricultural use is 33%.
Secondary sector: 37% (compared to Spain's 29%) Tertiary sector: 60% (compared to Spain's 67%) The main tourist destinations in Catalonia are the city of Barcelona, the beaches of the Costa Brava in Girona, the beaches of the Costa del Maresme and Costa del Garraf from Malgrat de Mar to Vilanova i la Geltrú, and the Costa Daurada in Tarragona. In the High Pyrenees there are several ski resorts, near Lleida. On 1 November 2012, Catalonia started charging a tourist tax. The revenue is used to promote tourism, and to maintain and upgrade tourism-related infrastructure. Many savings banks were based in Catalonia before the independence referendum of 2017, with 10 of the 46 Spanish savings banks having headquarters in the region at that time. This list included Europe's premier savings bank, La Caixa, which, on 7 October 2017, a week after the referendum, moved its headquarters to Palma de Mallorca, in the Balearic Islands, while CaixaBank moved to Valencia, in the Valencian Community. The leading private bank in Catalonia, Banc Sabadell, ranked fourth among all Spanish private banks, also moved its headquarters to Alicante, in the Valencian Community. The stock market of Barcelona, which in 2016 had a volume of around €152 billion, is the second largest in Spain after Madrid, and Fira de Barcelona organizes international exhibitions and congresses related to different sectors of the economy. The main economic cost for Catalan families is the purchase of a home. According to data from the Society of Appraisal on 31 December 2005, Catalonia is, after Madrid, the second most expensive region in Spain for housing: 3,397 €/m2 on average (see Spanish property bubble). Unemployment The unemployment rate stood at 10.5% in 2019 and was lower than the national average.
Transport Airports Airports in Catalonia are owned and operated by Aena (a Spanish Government entity) except two airports in Lleida, which are operated by Aeroports de Catalunya (an entity belonging to the Government of Catalonia). Barcelona El Prat Airport (Aena) Girona-Costa Brava Airport (Aena) Reus Airport (Aena) Lleida-Alguaire Airport (Aeroports de Catalunya) Sabadell Airport (Aena) La Seu d'Urgell Airport (Aeroports de Catalunya) Ports Since the Middle Ages, Catalonia has been well integrated into international maritime networks. The port of Barcelona (owned and operated by , a Spanish Government entity) is an industrial, commercial and tourist port of worldwide importance. With 1,950,000 TEUs in 2015, it is the first container port in Catalonia, the third in Spain after Valencia and Algeciras in Andalusia, the 9th in the Mediterranean Sea, the 14th in Europe and the 68th in the world. It is the sixth-largest cruise port in the world, and the first in Europe and the Mediterranean, with 2,364,292 passengers in 2014. The ports of Tarragona (owned and operated by Puertos del Estado) in the southwest and Palamós near Girona in the northeast are much more modest. The port of Palamós and the other ports in Catalonia (26) are operated and administered by , a Catalan Government entity. The development of these infrastructures, resulting from the topography and history of the
The new Statute of Autonomy of Catalonia, approved after a referendum in 2006, was contested by important sectors of the Spanish society, especially by the conservative People's Party, which sent the law to the Constitutional Court of Spain. In 2010, the Court declared non-valid some of the articles that established an autonomous Catalan system of Justice, improved aspects of the financing, a new territorial division, the status of Catalan language or the symbolical declaration of Catalonia as a nation. This decision was severely contested by large sectors of Catalan society, which increased the demands of independence. Independence movement A controversial independence referendum was held in Catalonia on 1 October 2017, using a disputed voting process. It was declared illegal and suspended by the Constitutional Court of Spain, because it breached the 1978 Constitution. Subsequent developments saw, on 27 October 2017, a symbolic declaration of independence by the Parliament of Catalonia, the enforcement of direct rule by the Spanish government through the use of Article 155 of the Constitution, the dismissal of the Executive Council and the dissolution of the Parliament, with a snap regional election called for 21 December 2017, which ended with a victory of pro-independence parties. Former President Carles Puigdemont and five former cabinet ministers fled Spain and took refuge in other European countries (such as Belgium, in Puigdemont's case), whereas nine other cabinet members, including vice-president Oriol Junqueras, were sentenced to prison under various charges of rebellion, sedition, and misuse of public funds. Quim Torra became the 131st President of the Government of Catalonia on 17 May 2018, after the Spanish courts blocked three other candidates. In 2018, the Assemblea Nacional Catalana joined the Unrepresented Nations and Peoples Organization (UNPO) on behalf of Catalonia. 
On 14 October 2019, the Spanish Supreme court sentenced several Catalan political leaders involved in organizing a referendum on Catalonia's independence from Spain were convicted on charges ranging from sedition to misuse of public funds, with sentences ranging from 9 to 13 years in prison. This decision sparked demonstrations around Catalonia. Geography Climate The climate of Catalonia is diverse. The populated areas lying by the coast in Tarragona, Barcelona and Girona provinces feature a Hot-summer Mediterranean climate (Köppen Csa). The inland part (including the Lleida province and the inner part of Barcelona province) show a mostly Mediterranean climate (Köppen Csa). The Pyrenean peaks have a continental (Köppen D) or even Alpine climate (Köppen ET) at the highest summits, while the valleys have a maritime or oceanic climate sub-type (Köppen Cfb). In the Mediterranean area, summers are dry and hot with sea breezes, and the maximum temperature is around . Winter is cool or slightly cold depending on the location. It snows frequently in the Pyrenees, and it occasionally snows at lower altitudes, even by the coastline. Spring and autumn are typically the rainiest seasons, except for the Pyrenean valleys, where summer is typically stormy. The inland part of Catalonia is hotter and drier in summer. Temperature may reach , some days even . Nights are cooler there than at the coast, with the temperature of around . Fog is not uncommon in valleys and plains; it can be especially persistent, with freezing drizzle episodes and subzero temperatures during winter, mainly along the Ebro and Segre valleys and in Plain of Vic. Topography Catalonia has a marked geographical diversity, considering the relatively small size of its territory. The geography is conditioned by the Mediterranean coast, with of coastline, and large relief units of the Pyrenees to the north. 
The Catalan territory is divided into three main geomorphological units: The Pyrenees: a mountainous formation that connects the Iberian Peninsula with the European continental territory, located in the north of Catalonia; The Catalan Coastal mountain ranges or the Catalan Mediterranean System: an alternation of elevations and plains parallel to the Mediterranean coast; The Catalan Central Depression: a structural unit which forms the eastern sector of the Valley of the Ebro. The Catalan Pyrenees represent almost half the length of the Pyrenees, extending more than . A distinction is traditionally made between the Axial Pyrenees (the main part) and the Pre-Pyrenees (south of the Axial), mountainous formations parallel to the main mountain ranges but of lower altitude, less steep and of a different geological formation. The highest mountain of Catalonia, located north of the comarca of Pallars Sobirà, is the Pica d'Estats (3,143 m), followed by the Puigpedrós (2,914 m). The Serra del Cadí comprises the highest peaks in the Pre-Pyrenees and forms the southern boundary of the Cerdanya valley. The Central Catalan Depression is a plain located between the Pyrenees and the Pre-Coastal Mountains. Elevation ranges from . The plains and the waters that descend from the Pyrenees have made it fertile territory for agriculture, and numerous irrigation canals have been built. Another major plain is the Empordà, located in the northeast. The Catalan Mediterranean system is based on two ranges running roughly parallel to the coast (southwest–northeast), called the Coastal and the Pre-Coastal Ranges. The Coastal Range is both the shorter and the lower of the two, while the Pre-Coastal is greater in both length and elevation. Areas within the Pre-Coastal Range include Montserrat, Montseny and the Ports de Tortosa-Beseit. Lowlands alternate with the Coastal and Pre-Coastal Ranges.
The Coastal Lowland is located to the east of the Coastal Range, between it and the coast, while the Pre-Coastal Lowlands are located inland, between the Coastal and Pre-Coastal Ranges, and include the Vallès and Penedès plains. Flora and fauna Catalonia is a showcase of European landscapes on a small scale. Just over , it hosts a variety of substrates, soils, climates, orientations, altitudes and distances to the sea. The area is of great ecological diversity and a remarkable wealth of landscapes, habitats and species. The fauna of Catalonia comprises a minority of animals endemic to the region and a majority of non-native animals. Much of Catalonia enjoys a Mediterranean climate (except mountain areas), so many of the animals that live there are adapted to Mediterranean ecosystems. Of mammals, there are plentiful wild boar and red foxes, as well as roe deer and, in the Pyrenees, the Pyrenean chamois. Other large species, such as the bear, have recently been reintroduced. The waters of the Balearic Sea are rich in biodiversity, including oceanic megafauna: various types of whales (such as fin, sperm, and pilot whales) and dolphins live within the area. Hydrography Most of Catalonia belongs to the Mediterranean Basin. The Catalan hydrographic network consists of two important basins, that of the Ebro and that comprising the internal basins of Catalonia (covering 46.84% and 51.43% of the territory respectively), both of which flow to the Mediterranean. Furthermore, there is the Garona river basin, which flows to the Atlantic Ocean but covers only 1.73% of the Catalan territory. The hydrographic network can be divided into two sectors: a western, or Ebro river, slope and an eastern slope made up of minor rivers that flow to the Mediterranean along the Catalan coast. The first slope provides an average of per year, while the second provides an average of only /year.
The difference is due to the large contribution of the Ebro river, of which the Segre is an important tributary. Moreover, Catalonia has a relative wealth of groundwater, although it is unevenly distributed among the comarques, given the complex geological structure of the territory. In the Pyrenees there are many small lakes, remnants of the ice age. The biggest are the lake of Banyoles and the recently recovered lake of Ivars. The Catalan coast is almost rectilinear, with a length of and few landforms, the most relevant being the Cap de Creus and the Gulf of Roses to the north and the Ebro Delta to the south. The Catalan Coastal Range hugs the coastline, and it is split into two segments, one between L'Estartit and the town of Blanes (the Costa Brava), and the other to the south, at the Costes del Garraf. The principal rivers in Catalonia are the Ter, the Llobregat, and the Ebro (Catalan: ), all of which run into the Mediterranean. Anthropic pressure and protection of nature The majority of the Catalan population is concentrated in 30% of the territory, mainly in the coastal plains. Intensive agriculture, livestock farming and industrial activities have been accompanied by a massive tourist influx (more than 20 million annual visitors), a high rate of urbanization and even major metropolisation, which has led to strong urban sprawl: two thirds of Catalans live in the urban area of Barcelona, while the proportion of urban land increased from 4.2% in 1993 to 6.2% in 2009, a growth of 48.6% in sixteen years, complemented by a dense network of transport infrastructure. This is accompanied by a degree of agricultural abandonment (a 15% decrease in all areas cultivated in Catalonia between 1993 and 2009) and a general threat to the natural environment. Human activities have also put some animal species at risk, or even led to their disappearance from the territory, such as the gray wolf and probably the brown bear of the Pyrenees.
The pressure created by this model of life means that the country's ecological footprint exceeds its administrative area. Faced with these problems, the Catalan authorities initiated several measures aimed at protecting natural ecosystems. Thus, in 1990, the Catalan government created the Nature Conservation Council (Catalan: ), an advisory body with the aim of studying, protecting and managing the natural environments and landscapes of Catalonia. In addition, the Generalitat carried out the Plan of Spaces of Natural Interest ( or PEIN) in 1992, while eighteen Natural Spaces of Special Protection ( or ENPE) have been instituted. There is one national park, Aigüestortes i Estany de Sant Maurici; fourteen natural parks, Alt Pirineu, Aiguamolls de l'Empordà, Cadí-Moixeró, Cap de Creus, Sources of Ter and Freser, Collserola, Ebro Delta, Ports, Montgrí, Medes Islands and Baix Ter, Montseny, Montserrat, Sant Llorenç del Munt and l'Obac, Serra de Montsant and the Garrotxa Volcanic Zone; as well as three Natural Places of National Interest ( or PNIN): the Pedraforca, the Poblet Forest and the Albères. Politics After Franco's death in 1975 and the adoption of a democratic constitution in Spain in 1978, Catalonia recovered and extended the powers that it had gained in the Statute of Autonomy of 1932 but lost with the fall of the Second Spanish Republic at the end of the Spanish Civil War in 1939. This autonomous community has gradually achieved more autonomy since the approval of the Spanish Constitution of 1978. The Generalitat holds exclusive jurisdiction in education, health, culture, environment, communications, transportation, commerce, public safety and local government, and shares jurisdiction with the Spanish government only in justice. In all, some analysts argue that formally the current system grants Catalonia "more self-government than almost any other corner in Europe".
Support for Catalan nationalism ranges from a demand for further autonomy and the federalisation of Spain to the desire for independence from the rest of Spain, expressed by Catalan independentists. The first survey following the Constitutional Court ruling that cut back elements of the 2006 Statute of Autonomy, published by La Vanguardia on 18 July 2010, found that 46% of voters would support independence in a referendum. In February of the same year, a poll by the Open University of Catalonia gave more or less the same results. Other polls have shown lower support for independence, ranging from 40 to 49%. Although support for independence is established throughout the territory, it is significantly higher in the hinterland and the northeast, away from the more populated coastal areas such as Barcelona. Since 2011, when the question started to be regularly surveyed by the governmental Center for Public Opinion Studies (CEO), support for Catalan independence has been on the rise. According to the CEO opinion poll from July 2016, 47.7% of Catalans would vote for independence and 42.4% against it; on the question of preferences, according to the CEO opinion poll from March 2016, 57.2% claimed to be "absolutely" or "fairly" in favour of independence. Other polls show more variable results: according to the Spanish CIS, as of December 2016, 47% of Catalans rejected independence and 45% supported it. In hundreds of non-binding local referendums on independence, organised across Catalonia from 13 September 2009, a large majority voted for independence, although critics argued that the polls were mostly held in pro-independence areas. In December 2009, 94% of those voting backed independence from Spain, on a turnout of 25%. The final local referendum was held in Barcelona, in April 2011.
On 11 September 2012, a pro-independence march drew a crowd estimated at 600,000 according to the Spanish Government, 1.5 million according to the Guàrdia Urbana de Barcelona, and 2 million according to its promoters; poll results at the time showed that half the population of Catalonia supported secession from Spain. Two major factors were the Spanish Constitutional Court's 2010 decision to declare part of the 2006 Statute of Autonomy of Catalonia unconstitutional, and the fact that Catalonia contributes 19.49% of the central government's tax revenue but receives only 14.03% of central government spending. Parties that consider themselves either Catalan nationalist or independentist have been present in all Catalan governments since 1980. The largest Catalan nationalist party, Convergence and Union, ruled Catalonia from 1980 to 2003, and returned to power in the 2010 election. Between 2003 and 2010, a leftist coalition, composed of the Catalan Socialists' Party, the pro-independence Republican Left of Catalonia and the leftist-environmentalist Initiative for Catalonia-Greens, implemented policies that widened Catalan autonomy. In the 25 November 2012 Catalan parliamentary election, sovereigntist parties supporting a secession referendum gathered 59.01% of the votes and held 87 of the 135 seats in the Catalan Parliament. Parties supporting independence from the rest of Spain obtained 49.12% of the votes and a majority of 74 seats. Artur Mas, then the president of Catalonia, organised early elections that took place on 27 September 2015. In these elections, Convergència and Esquerra Republicana decided to run together, presenting themselves under the coalition named "Junts pel Sí" (in Catalan, "Together for Yes").
"Junts pel Sí" won 62 seats and was the most voted party, and CUP (Candidatura d'Unitat Popular, a far-left and independentist party) won another 10, so the sum of all the independentist forces/parties was 72 seats, reaching an absolute majority, but not in number of individual votes, comprising 47,74% of the total. Statute of Autonomy The Statute of Autonomy of Catalonia is the fundamental organic law, second only to the Spanish Constitution from which the Statute originates. In the Spanish Constitution of 1978 Catalonia, along with the Basque Country and Galicia, was defined as a "nationality". The same constitution gave Catalonia the automatic right to autonomy, which resulted in the Statute of Autonomy of Catalonia of 1979. Both the 1979 Statute of Autonomy and the current one, approved in 2006, state that "Catalonia, as a nationality, exercises its self-government constituted as an Autonomous Community in accordance with the Constitution and with the Statute of Autonomy of Catalonia, which is its basic institutional law, always under the law in Spain". The Preamble of the 2006 Statute of Autonomy of Catalonia states that the Parliament of Catalonia has defined Catalonia as a nation, but that "the Spanish Constitution recognizes Catalonia's national reality as a nationality". While the Statute was approved by and sanctioned by both the Catalan and Spanish parliaments, and later by referendum in Catalonia, it has been subject to a legal challenge by the surrounding autonomous communities of Aragon, Balearic Islands and Valencia, as well as by the conservative People's Party. The objections are based on various issues such as disputed cultural heritage but, especially, on the Statute's alleged breaches of the principle of "solidarity between regions" in fiscal and educational matters enshrined by the Constitution. 
Spain's Constitutional Court assessed the disputed articles and, on 28 June 2010, issued its judgment on the principal allegation of unconstitutionality presented by the People's Party in 2006. The judgment granted clear passage to 182 of the 223 articles that make up the fundamental text. The court approved 73 of the 114 articles that the People's Party had contested, while declaring 14 articles unconstitutional in whole or in part and imposing a restrictive interpretation on 27 others. The court accepted the specific provision that described Catalonia as a "nation", but ruled that it was a historical and cultural term with no legal weight, and that Spain remained the only nation recognised by the constitution. Government and law The Catalan Statute of Autonomy establishes that Catalonia, as an autonomous community, is organised politically through the Generalitat of Catalonia (Catalan: ), comprising the Parliament, the Presidency of the Generalitat, the Government or Executive Council and the other institutions established by the Parliament, among them the Ombudsman (), the Office of Auditors (), the Council for Statutory Guarantees () and the Audiovisual Council of Catalonia (). The Parliament of Catalonia (Catalan: ) is the unicameral legislative body of the Generalitat and represents the people of Catalonia. Its 135 members (diputats) are elected by universal suffrage to serve for a four-year period. According to the Statute of Autonomy, it has powers to legislate over devolved matters such as education, health, culture, and internal institutional and territorial organization, to nominate the President of the Generalitat, and to control the Government, the budget and other affairs. The last Catalan election was held on 14 February 2021, and its current speaker (president) is Laura Borràs, incumbent since 12 March 2018.
The President of the Generalitat of Catalonia (Catalan: ) is the highest representative of Catalonia, and is also responsible for leading the government's action, presiding over the Executive Council. Since the restoration of the Generalitat on the return of democracy in Spain, the Presidents of Catalonia have been Josep Tarradellas (1977–1980, president in exile since 1954), Jordi Pujol (1980–2003), Pasqual Maragall (2003–2006), José Montilla (2006–2010), Artur Mas (2010–2016), Carles Puigdemont (2016–2017) and, after the imposition of direct rule from Madrid, Quim Torra (2018–2020) and Pere Aragonès (2020–). The Executive Council (Catalan: ) or Government () is the body responsible for the government of the Generalitat; it holds executive and regulatory power and is accountable to the Catalan Parliament. It comprises the President of the Generalitat, the First Minister () or Vice President, and the ministers () appointed by the president. Its seat is the Palau de la Generalitat, Barcelona. The current government is a coalition of two parties, the Republican Left of Catalonia (ERC) and Together for Catalonia (Junts), and is made up of 14 ministers, including the Vice President, alongside the president and a secretary of government. Security forces and Justice Catalonia has its own police force, the (officially called ), whose origins date back to the 18th century. Since 1980 they have been under the command of the Generalitat, and since 1994 they have expanded in number in order to replace the national Civil Guard and National Police Corps, which report directly to the Homeland Department of Spain. The national bodies retain personnel within Catalonia to exercise functions of national scope such as overseeing ports, airports, coasts, international borders, customs offices, the identification of documents and arms control, immigration control, terrorism prevention and arms trafficking prevention, amongst others.
Most of the justice system is administered by national judicial institutions; the highest body and court of last instance in the Catalan jurisdiction, integrated into the Spanish judiciary, is the High Court of Justice of Catalonia. The criminal justice system is uniform throughout Spain, while civil law is administered separately within Catalonia. The civil laws that are subject to autonomous legislation have been codified in the Civil Code of Catalonia () since 2002. Navarre, the Basque Country and Catalonia are the Spanish communities with the highest degree of autonomy in terms of law enforcement. Administrative divisions Catalonia is organised territorially into provinces, further subdivided into comarques and municipalities. The 2006 Statute of Autonomy of Catalonia establishes the administrative organisation of three local authorities: vegueries, comarques, and municipalities. Provinces Catalonia is divided administratively into four provinces, the governing body of which is the Provincial Deputation (, ). The four provinces and their populations are: Province of Barcelona: 5,507,813 population Province of Girona: 752,026 population Province of Lleida: 439,253 population Province of Tarragona: 805,789 population Comarques Comarques (singular: "comarca") are entities composed of municipalities to manage their responsibilities and services. The current regional division has its roots in a decree of the Generalitat de Catalunya of 1936, in effect until 1939, when it was suppressed by Franco. In 1987 the Catalan Government re-established the comarcal division, and in 1988 three new comarques were added (Alta Ribagorça, Pla d'Urgell and Pla de l'Estany). In 2015 an additional comarca, the Moianès, was created. At present there are 41, excluding Aran. Every comarca is administered by a comarcal council ().
The Aran Valley (Val d'Aran), previously considered a comarca, obtained a particular status within Catalonia in 1990 due to its differences in culture and language, as Occitan is the native language of the valley; it is administered by a body known as the (General Council of Aran). Since 2015 it has been defined as a "unique territorial entity", and the powers of the Conselh Generau have been expanded. Municipalities There are at present 947 municipalities () in Catalonia. Each municipality is run by a council () elected every four years by the residents in local elections. The council consists of a number of members () depending on population, who elect the mayor ( or ). Its seat is the town hall (, or ). Vegueries The vegueria is a new type of division defined as a specific territorial area for the exercise of government and inter-local cooperation with legal personality. The current Statute of Autonomy states that vegueries are intended to supersede provinces in Catalonia and take over many of the functions of the comarques. The territorial plan of Catalonia () provided for six general functional areas, but was amended by Law 24/2001, of 31 December, recognizing the Alt Pirineu i Aran as a new functional area differentiated from Ponent. On 14 July 2010 the Catalan Parliament approved the creation of the functional area of the Penedès. Alt Pirineu i Aran: Alta Ribagorça, Alt Urgell, Cerdanya, Pallars Jussà, Pallars Sobirà and Val d'Aran. Àmbit Metropolità de Barcelona: Baix Llobregat, Barcelonès, Garraf, Maresme, Vallès Oriental and Vallès Occidental. Camp de Tarragona: Tarragonès, Alt Camp, Baix Camp, Conca de Barberà and Priorat. Comarques gironines: Alt Empordà, Baix Empordà, Garrotxa, Gironès, Pla de l'Estany, La Selva and Ripollès. Comarques centrals: Anoia (8 municipalities of 33), Bages, Berguedà, Osona and Solsonès. Penedès: Alt Penedès, Baix Penedès, Anoia (25 municipalities of 33) and Garraf. Ponent: Garrigues, Noguera, Segarra, Segrià, Pla d'Urgell and Urgell.
Terres de l'Ebre: Baix Ebre, Montsià, Ribera d'Ebre and Terra Alta. Economy A highly industrialized land, Catalonia had a nominal GDP of €228 billion in 2018 (second after the community of Madrid, €230 billion) and a per capita GDP of €30,426 ($32,888), behind Madrid (€35,041), the Basque Country (€33,223), and Navarre (€31,389). That year, GDP growth was 2.3%. In recent years, and increasingly following the unilateral declaration of independence in 2017, there has been a negative net relocation rate of companies based in Catalonia moving to other autonomous communities of Spain. From the 2017 independence referendum until the end of 2018, for example, Catalonia lost 5,454 companies to other parts of Spain (mainly Madrid), 2,359 in 2018 alone, while gaining 467 new ones from the rest of the country during 2018. Catalonia's long-term credit rating is BB (Non-Investment Grade) according to Standard & Poor's, Ba2 (Non-Investment Grade) according to Moody's, and BBB- (Low Investment Grade) according to Fitch Ratings. Catalonia's rating is tied for worst with between 1 and 5 other autonomous communities of Spain, depending on the rating agency. The city of Barcelona occupied eighth position among the best cities in the world in which to live, work, research and visit in 2021, according to the report "The World's Best Cities 2021", prepared by Resonance Consultancy and released on 3 January. The Catalan capital, despite the current moment of crisis, is also one of the European reference bases for start-ups and the fifth city in the world in which to establish one of these companies, behind London, Berlin, Paris and Amsterdam, according to the EU-Startups 2020 study. Barcelona is behind London, New York, Paris, Moscow, Tokyo, Dubai and Singapore and ahead of Los Angeles and Madrid. In the context of the financial crisis of 2007–2008, Catalonia was expected to suffer a recession amounting to almost a 2% contraction of its regional GDP in 2009.
Catalonia's debt in 2012 was the highest of all Spain's autonomous communities, reaching €13,476 million, i.e. 38% of the total debt of the 17 autonomous communities, but in recent years its economy has recovered, with GDP growing 3.3% in 2015. Catalonia is among the country subdivisions with a GDP above US$100 billion and is a member of the Four Motors for Europe organisation. The distribution of sectors is as follows: Primary sector: 3%. The amount of land devoted to agricultural use is 33%. Secondary sector: 37% (compared to Spain's 29%) Tertiary sector: 60% (compared to Spain's 67%) The main tourist destinations in Catalonia are the city of Barcelona, the beaches of the Costa Brava in Girona, the beaches of the Costa del Maresme and Costa del Garraf from Malgrat de Mar to Vilanova i la Geltrú, and the Costa Daurada in Tarragona. In the High Pyrenees there are several ski resorts, near Lleida. On 1 November 2012, Catalonia started charging a tourist tax. The revenue is used to promote tourism, and to maintain and upgrade tourism-related infrastructure. Many savings banks were based in Catalonia before the independence referendum of 2017, with 10 of the 46 Spanish savings banks having headquarters in the region at that time. This list included Europe's premier savings bank, La Caixa, which, on 7 October 2017, a week after the referendum, moved its headquarters to Palma de Mallorca, in the Balearic Islands, while CaixaBank moved to Valencia, in the Valencian Community. The first private bank in Catalonia, Banc Sabadell, ranked fourth among all Spanish private banks, also moved its headquarters to Alicante, in the Valencian Community. The stock market of Barcelona, which in 2016 had a volume of around €152 billion, is the second largest in Spain after Madrid, and Fira de Barcelona organizes international exhibitions and congresses involving different sectors of the economy.
The main economic cost for Catalan families is the purchase of a home. According to data from the Society of Appraisal on 31 December 2005, Catalonia is, after Madrid, the second most expensive region in Spain for housing: €3,397/m² on average (see Spanish property bubble). Unemployment The unemployment rate stood at 10.5% in 2019 and was lower than the national average. Transport Airports Airports in Catalonia are owned and operated by Aena (a Spanish Government entity), except two airports in Lleida which are operated by Aeroports de Catalunya (an entity belonging to the Government of Catalonia). Barcelona El Prat Airport (Aena) Girona-Costa Brava Airport (Aena) Reus Airport (Aena) Lleida-Alguaire Airport (Aeroports de Catalunya) Sabadell Airport (Aena) La Seu d'Urgell Airport (Aeroports de Catalunya) Ports Since the Middle Ages, Catalonia has been well integrated into international maritime networks. The port of Barcelona (owned and operated by , a Spanish Government entity) is an industrial, commercial and tourist port of worldwide importance. With 1,950,000 TEUs in 2015, it is the first container port in Catalonia, the third in Spain after Valencia and Algeciras in Andalusia, the 9th in the Mediterranean Sea, the 14th in Europe and the 68th in the world. It is the sixth largest cruise port in the world, and the first in Europe and the Mediterranean, with 2,364,292 passengers in 2014. The ports of Tarragona (owned and operated by Puertos del Estado) in the southwest and Palamós near Girona in the northeast are much more modest. The port of Palamós and the 26 other ports in Catalonia are operated and administered by , a Catalan Government entity. The development of these infrastructures, resulting from the topography and history of the Catalan territory, responds strongly to the administrative and political organization of this autonomous community. Roads There are of roads throughout Catalonia. The principal highways are the AP-7 () and the A-7 ().
They follow the coast from the French border to Valencia, Murcia and Andalusia. The main roads generally radiate from Barcelona. The AP-2 () and A-2 () connect inland and onward to Madrid. Publicly owned roads in Catalonia are either managed by the autonomous government of Catalonia (e.g., C- roads) or by the Spanish government (e.g., AP-, A-, N- roads). Railways Catalonia saw the first railway construction in the Iberian Peninsula, in 1848, linking Barcelona with Mataró. Given the topography, most lines radiate from Barcelona. The city has both suburban and inter-city services. The main east coast line runs through the province, connecting with the SNCF (French Railways) at Portbou on the coast. There are two publicly owned railway companies operating in Catalonia: the Catalan FGC, which operates commuter and regional services, and the Spanish national Renfe, which operates long-distance and high-speed rail services (AVE and Avant) and the main commuter and regional service , administered by the Catalan government since 2010. High-speed rail (AVE) services from Madrid currently reach Lleida, Tarragona and Barcelona. The official opening between Barcelona and Madrid took place on 20 February 2008. The journey between Barcelona and Madrid now takes about two-and-a-half hours. A connection to the French high-speed TGV network has been completed (called the Perpignan–Barcelona high-speed rail line), and the Spanish AVE service began commercial services on the line on 9 January 2013, later offering services
to destroy the Egyptian fleet with fire ships that might have been successful if the wind had not failed just after the Greek ships entered Alexandria harbour. After the end of the War and the independence of Greece, Kanaris became an officer of the new Hellenic Navy, reaching the rank of admiral, and became a prominent politician. Political career Konstantinos Kanaris was one of the few who enjoyed the personal confidence of Ioannis Kapodistrias, the first Head of State of independent Greece. After the assassination of Kapodistrias on 9 October 1831, he retired to the island of Syros. During the reign of King Otto I, Kanaris served as Minister in various governments and then as Prime Minister in the provisional government (16 February – 30 March 1844). He served a second term (15 October 1848 – 12 December 1849), and as Navy Minister in the 1854 cabinet of Alexandros Mavrokordatos. In 1862, he was among the rare War of Independence veterans who took part in the bloodless insurrection that deposed the increasingly unpopular King Otto I and led to the election of Prince William of Denmark as King George I of Greece. During his reign, Kanaris served as Prime Minister for a third term (6 March – 16 April 1864), a fourth term (26 July 1864 – 26 February 1865) and a fifth and last term (7 June – 2 September 1877). Kanaris died on 2 September 1877 while still serving in office as Prime Minister. Following his death, his government remained in power until 14 September 1877 without agreeing on a replacement at its head. He was buried in the First Cemetery of Athens and his heart was placed in a silver urn. Legacy Konstantinos Kanaris is considered a national hero in Greece and ranks amongst the most notable participants of the War of Independence. Many statues and busts have been erected in his honour. He was also featured on the one-drachma coin and the one-hundred-drachma banknote issued by the Bank of Greece.
To honour Kanaris, the following ships of the Hellenic Navy have been named after him: Kanaris, a patrol boat commissioned in 1835 Kanaris, a destroyer commissioned in 1880 , a Hunt-class destroyer commissioned in 1942 , a commissioned in 1972 , an commissioned in 2002 Te Korowhakaunu / Kanáris Sound, a section of Taiari / Chalky Inlet in New Zealand's Fiordland National Park, was named after Konstantinos Kanaris by French navigator and explorer Jules de Blosseville (1802–1833). Family In 1817, Konstantinos Kanaris married Despoina Maniatis, from a historical family of Psara. They had seven children: Nikolaos Kanaris (1818–1848), killed during a military expedition in Beirut Themistoklis Kanaris (1819–1851), killed during a military expedition in Egypt Thrasyvoulos Kanaris (1820–1898), admiral Miltiadis Kanaris (1822–1901), admiral, member of the Greek Parliament for many years, Naval
At Chios, on the moonless night of 6–7 June 1822, forces under his command destroyed the flagship of Nasuhzade Ali Pasha, Kapudan Pasha (Grand Admiral) of the Ottoman fleet, in revenge for the Chios massacre. The admiral was holding a Bayram celebration, allowing Kanaris and his men to position their fire ship without being noticed. When the flagship's powder store caught fire, all men aboard were instantly killed. The Turkish casualties comprised men, both naval officers and common sailors, as well as Nasuhzade Ali Pasha himself. Kanaris led another successful attack against the Ottoman fleet at Tenedos in November 1822. He was famously said to have encouraged himself by murmuring "Konstantí, you are going to die" every time he was approaching a Turkish warship on the fire boat he was about to detonate. The Ottoman fleet captured Psara on 21 June 1824. A part of the population, including Kanaris, managed to flee the island, but those who didn't were either sold into slavery or slaughtered. After the destruction of his home island, he continued to lead attacks against Turkish forces. In August 1824, he engaged in naval combats in the Dodecanese. The following year, Kanaris led the Greek raid on Alexandria, a daring attempt to destroy the Egyptian fleet with fire ships that might have been successful if the wind had not failed just after the Greek ships entered Alexandria harbour. After the end of the War and the independence of Greece, Kanaris became an officer of the new Hellenic Navy, reaching the rank of admiral, and became a prominent politician. Political career Konstantinos Kanaris was one of the few with the personal confidence of Ioannis Kapodistrias, the first Head of State of independent Greece. After the |
in response to any physical challenge to Iraqi control of the oil assets, Sagan, together with his "TTAPS" colleagues and Paul Crutzen, warned in January 1991 in The Baltimore Sun and Wilmington Morning Star newspapers that if the fires were left to burn over a period of several months, enough smoke from the 600 or so 1991 Kuwaiti oil fires "might get so high as to disrupt agriculture in much of South Asia ..." and that this possibility should "affect the war plans"; these claims were also the subject of a televised debate between Sagan and physicist Fred Singer on January 22, aired on the ABC News program Nightline. In the televised debate, Sagan argued that the effects of the smoke would be similar to the effects of a nuclear winter, with Singer arguing to the contrary. After the debate, the fires burnt for many months before extinguishing efforts were complete. The smoke did not produce continental-scale cooling. Sagan later conceded in The Demon-Haunted World that the prediction did not turn out to be correct: "it was pitch black at noon and temperatures dropped 4–6 °C over the Persian Gulf, but not much smoke reached stratospheric altitudes and Asia was spared". In his later years Sagan advocated the creation of an organized search for asteroids and other near-Earth objects (NEOs) that might impact the Earth, while urging that development of the technological methods needed to defend against them be forestalled or postponed. He argued that all of the numerous methods proposed to alter the orbit of an asteroid, including the employment of nuclear detonations, created a deflection dilemma: if the ability to deflect an asteroid away from the Earth exists, then one would also have the ability to divert a non-threatening object towards Earth, creating an immensely destructive weapon.
In a 1994 paper he co-authored, he ridiculed a three-day "Near-Earth Object Interception Workshop" held by Los Alamos National Laboratory (LANL) in 1993 that did not, "even in passing", state that such interception and deflection technologies could have these "ancillary dangers". Sagan remained hopeful that the natural NEO impact threat and the intrinsically double-edged essence of the methods to prevent these threats would serve as a "new and potent motivation to maturing international relations". He later acknowledged that, with sufficient international oversight, a "work our way up" approach to implementing nuclear explosive deflection methods could eventually be fielded and, once sufficient knowledge was gained, used to aid in mining asteroids. His interest in the use of nuclear detonations in space grew out of his work in 1958 for the Armour Research Foundation's Project A119, concerning the possibility of detonating a nuclear device on the lunar surface. Sagan was a critic of Plato, having said of the ancient Greek philosopher: "Science and mathematics were to be removed from the hands of the merchants and the artisans. This tendency found its most effective advocate in a follower of Pythagoras named Plato", and: "He [Plato] believed that ideas were far more real than the natural world. He advised the astronomers not to waste their time observing the stars and planets. It was better, he believed, just to think about them. Plato expressed hostility to observation and experiment. He taught contempt for the real world and disdain for the practical application of scientific knowledge. Plato's followers succeeded in extinguishing the light of science and experiment that had been kindled by Democritus and the other Ionians." In 1995 (as part of his book The Demon-Haunted World) Sagan popularized a set of tools for skeptical thinking called the "baloney detection kit", a phrase first coined by Arthur Felberbaum, a friend of his wife Ann Druyan.
Popularizing science Speaking about his activities in popularizing science, Sagan said that there were at least two reasons for scientists to share the purposes of science and its contemporary state. Simple self-interest was one: much of the funding for science came from the public, and the public therefore had the right to know how the money was being spent. If scientists increased public admiration for science, there was a good chance of having more public supporters. The other reason was the excitement of communicating one's own excitement about science to others. Following the success of Cosmos, Sagan set up his own publishing firm, Cosmos Store, in order to publish science books for the general public. It was not successful. Criticisms While Sagan was widely adored by the general public, his reputation in the scientific community was more polarized. Critics sometimes characterized his work as fanciful, non-rigorous, and self-aggrandizing, and others complained in his later years that he neglected his role as a faculty member to foster his celebrity status. One of Sagan's harshest critics, Harold Urey, felt that Sagan was getting too much publicity for a scientist and was treating some scientific theories too casually. Urey and Sagan were said to have different philosophies of science, according to Davidson. While Urey was an "old-time empiricist" who avoided theorizing about the unknown, Sagan was by contrast willing to speculate openly about such matters. Fred Whipple wanted Harvard to keep Sagan there, but learned that because Urey was a Nobel laureate, his opinion was an important factor in Harvard denying Sagan tenure. Sagan's Harvard friend Lester Grinspoon also stated: "I know Harvard well enough to know there are people there who certainly do not like people who are outspoken." Grinspoon added: Some, like Urey, later came to realize that Sagan's popular brand of scientific advocacy was beneficial to the science as a whole. 
Urey especially liked Sagan's 1977 book The Dragons of Eden and wrote Sagan with his opinion: "I like it very much and am amazed that someone like you has such an intimate knowledge of the various features of the problem... I congratulate you... You are a man of many talents." Sagan was accused of borrowing some ideas of others for his own benefit and countered these claims by explaining that the misappropriation was an unfortunate side effect of his role as a science communicator and explainer, and that he attempted to give proper credit whenever possible. Social concerns Sagan believed that the Drake equation, on substitution of reasonable estimates, suggested that a large number of extraterrestrial civilizations would form, but that the lack of evidence of such civilizations highlighted by the Fermi paradox suggests technological civilizations tend to self-destruct. This stimulated his interest in identifying and publicizing ways that humanity could destroy itself, with the hope of avoiding such a cataclysm and eventually becoming a spacefaring species. Sagan's deep concern regarding the potential destruction of human civilization in a nuclear holocaust was conveyed in a memorable cinematic sequence in the final episode of Cosmos, called "Who Speaks for Earth?" Sagan had already resigned from the Air Force Scientific Advisory Board's UFO investigating Condon Committee and voluntarily surrendered his top-secret clearance in protest over the Vietnam War. Following his marriage to his third wife (novelist Ann Druyan) in June 1981, Sagan became more politically active—particularly in opposing escalation of the nuclear arms race under President Ronald Reagan. In March 1983, Reagan announced the Strategic Defense Initiative—a multibillion-dollar project to develop a comprehensive defense against attack by nuclear missiles, which was quickly dubbed the "Star Wars" program. 
Sagan spoke out against the project, arguing that it was technically impossible to develop a system with the level of perfection required, and far more expensive to build such a system than it would be for an enemy to defeat it through decoys and other means—and that its construction would seriously destabilize the "nuclear balance" between the United States and the Soviet Union, making further progress toward nuclear disarmament impossible. When Soviet leader Mikhail Gorbachev declared a unilateral moratorium on the testing of nuclear weapons, which would begin on August 6, 1985—the 40th anniversary of the atomic bombing of Hiroshima—the Reagan administration dismissed the dramatic move as nothing more than propaganda and refused to follow suit. In response, US anti-nuclear and peace activists staged a series of protest actions at the Nevada Test Site, beginning on Easter Sunday in 1986 and continuing through 1987. Hundreds of people in the "Nevada Desert Experience" group were arrested, including Sagan, who was arrested on two separate occasions as he climbed over a chain-link fence at the test site during the underground Operation Charioteer and United States's Musketeer nuclear test series of detonations. Sagan was also a vocal advocate of the controversial notion of testosterone poisoning, arguing in 1992 that human males could become gripped by an "unusually severe [case of] testosterone poisoning" and this could compel them to become genocidal. In his review of Moondance magazine writer Daniela Gioseffi's 1990 book Women on War, he argues that females are the only half of humanity "untainted by testosterone poisoning". One chapter of his 1993 book Shadows of Forgotten Ancestors is dedicated to testosterone and its alleged poisonous effects. In 1989, when Ted Turner asked him in an interview whether he believed in socialism, Carl Sagan responded: "I'm not sure what a socialist is. But I believe the government has a responsibility to care for the people...
I'm talking about making the people self-reliant." Personal life and beliefs Sagan was married three times. In 1957, he married biologist Lynn Margulis. The couple had two children, Jeremy and Dorion Sagan. After Sagan and Margulis divorced, he married artist Linda Salzman in 1968 and they also had a child together, Nick Sagan. During these marriages, Carl Sagan focused heavily on his career, a factor which may have contributed to Sagan's first divorce. In 1981, Sagan married author Ann Druyan and they later had two children, Alexandra (known as Sasha) and Samuel Sagan. Carl Sagan and Druyan remained married until his death in 1996. While teaching at Cornell, he lived in an Egyptian revival house in Ithaca perched on the edge of a cliff that had formerly been the headquarters of a Cornell secret society. While there he drove a purple 1970 Porsche 911 with the license plate PHOBOS. He also owned an orange Porsche 914. In 1994, engineers at Apple Computer code-named the Power Macintosh 7100 "Carl Sagan" in the hope that Apple would make "billions and billions" with the sale of the PowerMac 7100. The name was only used internally, but Sagan was concerned that it would become a product endorsement and sent Apple a cease-and-desist letter. Apple complied, but engineers retaliated by changing the internal codename to "BHA" for "Butt-Head Astronomer". Sagan then sued Apple for libel in federal court. The court granted Apple's motion to dismiss Sagan's claims and opined in dicta that a reader aware of the context would understand Apple was "clearly attempting to retaliate in a humorous and satirical way", and that "It strains reason to conclude that Defendant was attempting to criticize Plaintiff's reputation or competency as an astronomer. One does not seriously attack the expertise of a scientist using the undefined phrase 'butt-head'." Sagan then sued for Apple's original use of his name and likeness, but again lost. Sagan appealed the ruling. 
In November 1995, an out-of-court settlement was reached and Apple's office of trademarks and patents released a conciliatory statement that "Apple has always had great respect for Dr. Sagan. It was never Apple's intention to cause Dr. Sagan or his family any embarrassment or concern." Apple's third and final code name for the project was "LAW", short for "Lawyers are Wimps". In 2019, Carl Sagan's daughter Sasha Sagan released For Small Creatures Such as We: Rituals for Finding Meaning in our Unlikely World, which depicts life with her parents and her father's death when she was fourteen. Building on a theme in her father's work, Sasha Sagan argues in For Small Creatures Such as We that skepticism does not imply pessimism. Sagan was acquainted with the science fiction fandom through his friendship with Isaac Asimov, and he spoke at the Nebula Awards ceremony in 1969. Asimov described Sagan as one of only two people he ever met whose intellect surpassed his own. The other, he claimed, was the computer scientist and artificial intelligence expert Marvin Minsky. Naturalism Sagan wrote frequently about religion and the relationship between religion and science, expressing his skepticism about the conventional conceptualization of God as a sapient being. For example: Some people think God is an outsized, light-skinned male with a long white beard, sitting on a throne somewhere up there in the sky, busily tallying the fall of every sparrow. Others—for example Baruch Spinoza and Albert Einstein—considered God to be essentially the sum total of the physical laws which describe the universe. I do not know of any compelling evidence for anthropomorphic patriarchs controlling human destiny from some hidden celestial vantage point, but it would be madness to deny the existence of physical laws. 
In another description of his view on the concept of God, Sagan wrote: The idea that God is an oversized white male with a flowing beard who sits in the sky and tallies the fall of every sparrow is ludicrous. But if by God one means the set of physical laws that govern the universe, then clearly there is such a God. This God is emotionally unsatisfying ... it does not make much sense to pray to the law of gravity. On atheism, Sagan commented in 1981: An atheist is someone who is certain that God does not exist, someone who has compelling evidence against the existence of God. I know of no such compelling evidence. Because God can be relegated to remote times and places and to ultimate causes, we would have to know a great deal more about the universe than we do now to be sure that no such God exists. To be certain of the existence of God and to be certain of the nonexistence of God seem to me to be the confident extremes in a subject so riddled with doubt and uncertainty as to inspire very little confidence indeed. Sagan also commented on Christianity and the Jefferson Bible, stating "My long-time view about Christianity is that it represents an amalgam of two seemingly immiscible parts, the religion of Jesus and the religion of Paul. Thomas Jefferson attempted to excise the Pauline parts of the New Testament. There wasn't much left when he was done, but it was an inspiring document." Regarding spirituality and its relationship with science, Sagan stated: 'Spirit' comes from the Latin word 'to breathe'. What we breathe is air, which is certainly matter, however thin. Despite usage to the contrary, there is no necessary implication in the word 'spiritual' that we are talking of anything other than matter (including the matter of which the brain is made), or anything outside the realm of science. On occasion, I will feel free to use the word. Science is not only compatible with spirituality; it is a profound source of spirituality. 
When we recognize our place in an immensity of light-years and in the passage of ages, when we grasp the intricacy, beauty, and subtlety of life, then that soaring feeling, that sense of elation and humility combined, is surely spiritual. An environmental appeal, "Preserving and Cherishing the Earth", signed by Sagan with other noted scientists in January 1990, stated that "The historical record makes clear that religious teaching, example, and leadership are powerfully able to influence personal conduct and commitment... Thus, there is a vital role for religion and science." In reply to a question in 1996 about his religious beliefs, Sagan answered, "I'm agnostic." Sagan maintained that the idea of a creator God of the Universe was difficult to prove or disprove and that the only conceivable scientific discovery that could challenge it would be an infinitely old universe. Sagan's views on religion have been interpreted as a form of pantheism comparable to Einstein's belief in Spinoza's God. His son, Dorion Sagan said, "My father believed in the God of Spinoza and Einstein, God not behind nature but as nature, equivalent to it." His last wife, Ann Druyan, stated: When my husband died, because he was so famous and known for not being a believer, many people would come up to me—it still sometimes happens—and ask me if Carl changed at the end and converted to a belief in an afterlife. They also frequently ask me if I think I will see him again. Carl faced his death with unflagging courage and never sought refuge in illusions. The tragedy was that we knew we would never see each other again. I don't ever expect to be reunited with Carl. In 2006, Ann Druyan edited Sagan's 1985 Glasgow Gifford Lectures in Natural Theology into a book, The Varieties of Scientific Experience: A Personal View of the Search for God, in which he elaborates on his views of divinity in the natural world. 
Sagan is also widely regarded as a freethinker or skeptic; one of his most famous quotations, in Cosmos, was, "Extraordinary claims require extraordinary evidence" (called the "Sagan standard" by some). This was based on a nearly identical statement by fellow founder of the Committee for the Scientific Investigation of Claims of the Paranormal, Marcello Truzzi, "An extraordinary claim requires extraordinary proof." This idea had been earlier aphorized in Théodore Flournoy's work From India to the Planet Mars (1899) from a longer quote by Pierre-Simon Laplace (1749–1827), a French mathematician and astronomer, as the Principle of Laplace: "The weight of the evidence should be proportioned to the strangeness of the facts." Late in his life, Sagan's books elaborated on his naturalistic view of the world. In The Demon-Haunted World, he presented tools for testing arguments and detecting fallacious or fraudulent ones, essentially advocating wide use of critical thinking and the scientific method. The compilation Billions and Billions: Thoughts on Life and Death at the Brink of the Millennium, published in 1997 after Sagan's death, contains essays written by Sagan, such as his views on abortion, as well as an account by his widow, Ann Druyan, of his death in relation to his having been an agnostic and freethinker. Sagan warned against humans' tendency towards anthropocentrism. He was the faculty adviser for the Cornell Students for the Ethical Treatment of Animals. In the Cosmos chapter "Blues For a Red Planet", Sagan wrote, "If there is life on Mars, I believe we should do nothing with Mars. Mars then belongs to the Martians, even if the Martians are only microbes." Marijuana advocacy Sagan was a user and advocate of marijuana. Under the pseudonym "Mr. X", he contributed an essay about smoking cannabis to the 1971 book Marihuana Reconsidered. 
The essay explained that marijuana use had helped to inspire some of Sagan's works and enhance sensual and intellectual experiences. After Sagan's death, his friend Lester Grinspoon disclosed this information to Sagan's biographer, Keay Davidson. The publishing of the biography, Carl Sagan: A Life, in 1999 brought media attention to this aspect of Sagan's life. Not long after his death, his widow Ann Druyan went on to preside over the board of directors of the National Organization for the Reform of Marijuana Laws (NORML), a non-profit organization dedicated to reforming cannabis laws. UFOs In 1947, the year that inaugurated the "flying saucer" craze, the young Sagan suspected the "discs" might be alien spaceships. Sagan's interest in UFO reports prompted him on August 3, 1952, to write a letter to U.S. Secretary of State Dean Acheson to ask how the United States would respond if flying saucers turned out to be extraterrestrial. He later had several conversations on the subject in 1964 with Jacques Vallée. Though quite skeptical of any extraordinary answer to the UFO question, Sagan thought scientists should study the phenomenon, at least because there was widespread public interest in UFO reports. Stuart Appelle notes that Sagan "wrote frequently on what he perceived as the logical and empirical fallacies regarding UFOs and the abduction experience. Sagan rejected an extraterrestrial explanation for the phenomenon but felt there were both empirical and pedagogical benefits for examining UFO reports and that the subject was, therefore, a legitimate topic of study." In 1966 Sagan was a member of the Ad Hoc Committee to Review Project Blue Book, the U.S. Air Force's UFO investigation project. The committee concluded Blue Book had been lacking as a scientific study, and recommended a university-based project to give the UFO phenomenon closer scientific scrutiny. 
The result was the Condon Committee (1966–68), led by physicist Edward Condon, and in their final report they formally concluded that UFOs, regardless of what any of them actually were, did not behave in a manner consistent with a threat to national security. Sociologist Ron Westrum writes that "The high point of Sagan's treatment of the UFO question was the AAAS' symposium in 1969. A wide range of educated opinions on the subject were offered by participants, including not only proponents such as James McDonald and J. Allen Hynek but also skeptics like astronomers William Hartmann and Donald Menzel. The roster of speakers was balanced, and it is to Sagan's credit that this event was presented in spite of pressure from Edward Condon." With physicist Thornton Page, Sagan edited the lectures and discussions given at the symposium; these were published in 1972 as UFO's: A Scientific Debate. Some of Sagan's many books examine UFOs (as did one episode of Cosmos) and he claimed a religious undercurrent to the phenomenon. Sagan again revealed his views on interstellar travel in his 1980 Cosmos series. In one of his last written works, Sagan argued that the chances of extraterrestrial spacecraft visiting Earth are vanishingly small. However, Sagan did think it plausible that Cold War concerns contributed to governments misleading their citizens about UFOs, and wrote that "some UFO reports and analyses, and perhaps voluminous files, have been made inaccessible to the public which pays the bills ... It's time for the files to be declassified and made generally available." He cautioned against jumping to conclusions about suppressed UFO data and stressed that there was no strong evidence that aliens were visiting the Earth either in the past or present. Sagan briefly served as an adviser on Stanley Kubrick's film 2001: A Space Odyssey. Sagan proposed that the film suggest, rather than depict, extraterrestrial superintelligence. 
"Sagan's paradox" Sagan's contribution to the 1969 AAAS symposium was an attack on the belief that UFOs are piloted by extraterrestrial beings. Applying several logical assumptions (see Drake equation), Sagan calculated the possible number of advanced civilizations capable of interstellar travel to be about one million. He projected that any civilization wishing to check on all the others on a regular basis of, say, once a year would have to launch 10,000 spacecraft annually. Not only does that seem like an unreasonable number of launchings, but it would take all the material in one percent of the universe's stars to produce all the spaceships needed for all the civilizations to seek each other out. To argue that the Earth was being chosen for regular visitations, Sagan said, one would have to assume that the planet is somehow unique, and that assumption "goes exactly against the idea that there are lots of civilizations around. Because if there are then our sort of civilization must be pretty common. And if we're not pretty common then there aren't going to be many civilizations advanced enough to send visitors". This argument, which some called Sagan's paradox, helped to establish a new school of thought, namely the belief that extraterrestrial life exists, but it has nothing to do with UFOs. The new belief had a salutary effect on UFO studies. It helped separate researchers who wanted to distinguish UFOs from those who wanted to identify their pilots and it gave scientists opportunities to search the universe for intelligent life unencumbered by the stigma associated with UFOs. Death After suffering from myelodysplasia for two years and receiving three bone marrow transplants from his sister, Sagan died from pneumonia at the age of 62 at the Fred Hutchinson Cancer Research Center in Seattle, Washington, on December 20, 1996. His burial took place at Lake View Cemetery in Ithaca, New York. 
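The launch arithmetic quoted in the "Sagan's paradox" passage above can be checked with a back-of-envelope sketch. The snippet below is illustrative only: it simply multiplies out the two figures given in the text (one million civilizations, 10,000 launches each per year) rather than rederiving Sagan's 1969 estimate.

```python
# Illustrative sketch of the launch arithmetic in "Sagan's paradox".
# Both input figures are the ones quoted in the text; this is not a
# reconstruction of Sagan's original 1969 calculation.

def total_annual_launches(civilizations: int, launches_each: int) -> int:
    """Galaxy-wide launches per year if every civilization surveys the rest."""
    return civilizations * launches_each

N_CIVILIZATIONS = 1_000_000   # Sagan's Drake-equation-based estimate
LAUNCHES_EACH = 10_000        # annual launches per civilization (quoted figure)

total = total_annual_launches(N_CIVILIZATIONS, LAUNCHES_EACH)
print(f"{total:.1e} spacecraft launches per year, galaxy-wide")
# → 1.0e+10 spacecraft launches per year, galaxy-wide
```

Ten billion launches a year, before even asking where the construction material would come from, is the implausibility Sagan's argument turns on.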
Awards and honors Annual Award for Television Excellence—1981—Ohio State University—PBS series Cosmos: A Personal Voyage Apollo Achievement Award—National Aeronautics and Space Administration NASA Distinguished Public Service Medal—National Aeronautics and Space Administration (1977) Emmy—Outstanding Individual Achievement—1981—PBS series Cosmos: A Personal Voyage Emmy—Outstanding Informational Series—1981—PBS series Cosmos: A Personal Voyage Fellow of the American Physical Society–1989 Exceptional Scientific Achievement Medal—National Aeronautics and Space Administration Helen Caldicott Leadership Award – Awarded by Women's Action for Nuclear Disarmament Hugo Award—1981—Best Dramatic Presentation—Cosmos: A Personal Voyage Hugo Award—1981—Best Related Non-Fiction Book—Cosmos Hugo Award—1998—Best Dramatic Presentation—Contact Humanist of the Year—1981—Awarded by the American Humanist Association American Philosophical Society—1995—Elected to membership. In Praise of Reason Award—1987—Committee for Skeptical Inquiry Isaac Asimov Award—1994—Committee for Skeptical Inquiry John F. 
Kennedy Astronautics Award—1982—American Astronautical Society Special non-fiction Campbell Memorial Award—1974—The Cosmic Connection: An Extraterrestrial Perspective Joseph Priestley Award—"For distinguished contributions to the welfare of mankind" Klumpke-Roberts Award of the Astronomical Society of the Pacific—1974 Golden Plate Award of the American Academy of Achievement—1975 Konstantin Tsiolkovsky Medal—Awarded by the Soviet Cosmonauts Federation Locus Award 1986—Contact Lowell Thomas Award—The Explorers Club—75th Anniversary Masursky Award—American Astronomical Society Miller Research Fellowship—Miller Institute (1960–1962) Oersted Medal—1990—American Association of Physics Teachers Peabody Award—1980—PBS series Cosmos: A Personal Voyage Le Prix Galabert d'astronautique—International Astronautical Federation (IAF) Public Welfare Medal—1994—National Academy of Sciences Pulitzer Prize for General Non-Fiction—1978—The Dragons of Eden Science Fiction Chronicle Award—1998—Dramatic Presentation—Contact UCLA Medal–1991 Inductee to International Space Hall of Fame in 2004 Named the "99th Greatest American" on June 5, 2005, Greatest American television series on the Discovery Channel Named an honorary member of the Demosthenian Literary Society on November 10, 2011 New Jersey Hall of Fame—2009—Inductee. Committee for Skeptical Inquiry (CSI) Pantheon of Skeptics inductee. Pioneer 11, also carrying another copy of the plaque, was launched the following year. He continued to refine his designs; the most elaborate message he helped to develop and assemble was the Voyager Golden Record, which was sent out with the Voyager space probes in 1977. Sagan often challenged the decisions to fund the Space Shuttle and the International Space Station at the expense of further robotic missions.
Scientific achievements Former student David Morrison described Sagan as "an 'idea person' and a master of intuitive physical arguments and 'back of the envelope' calculations", and Gerard Kuiper said that "Some persons work best in specializing on a major program in the laboratory; others are best in liaison between sciences. Dr. Sagan belongs in the latter group." Sagan's contributions were central to the discovery of the high surface temperatures of the planet Venus. In the early 1960s no one knew for certain the basic conditions of Venus' surface, and Sagan listed the possibilities in a report later depicted for popularization in a Time Life book Planets. His own view was that Venus was dry and very hot as opposed to the balmy paradise others had imagined. He had investigated radio waves from Venus and concluded that there was a surface temperature of . As a visiting scientist to NASA's Jet Propulsion Laboratory, he contributed to the first Mariner missions to Venus, working on the design and management of the project. Mariner 2 confirmed his conclusions on the surface conditions of Venus in 1962. Sagan was among the first to hypothesize that Saturn's moon Titan might possess oceans of liquid compounds on its surface and that Jupiter's moon Europa might possess subsurface oceans of water. This would make Europa potentially habitable. Europa's subsurface ocean of water was later indirectly confirmed by the spacecraft Galileo. The mystery of Titan's reddish haze was also solved with Sagan's help. The reddish haze was revealed to be due to complex organic molecules constantly raining down onto Titan's surface. Sagan further contributed insights regarding the atmospheres of Venus and Jupiter, as well as seasonal changes on Mars. He also perceived global warming as a growing, man-made danger and likened it to the natural development of Venus into a hot, life-hostile planet through a kind of runaway greenhouse effect. 
Sagan and his Cornell colleague Edwin Ernest Salpeter speculated about life in Jupiter's clouds, given the planet's dense atmospheric composition rich in organic molecules. He studied the observed color variations on Mars' surface and concluded that they were not seasonal or vegetational changes as most believed, but shifts in surface dust caused by windstorms. Sagan is also known for his research on the possibilities of extraterrestrial life, including experimental demonstration of the production of amino acids from basic chemicals by radiation. He is also the 1994 recipient of the Public Welfare Medal, the highest award of the National Academy of Sciences for "distinguished contributions in the application of science to the public welfare". He was denied membership in the Academy, reportedly because his media activities made him unpopular with many other scientists. Sagan is the most cited SETI scientist and one of the most cited planetary scientists. Cosmos: popularizing science on TV In 1980 Sagan co-wrote and narrated the award-winning 13-part PBS television series Cosmos: A Personal Voyage, which became the most widely watched series in the history of American public television until 1990. The show has been seen by at least 500 million people across 60 countries. The book, Cosmos, written by Sagan, was published to accompany the series. Because of his earlier popularity as a science writer from his best-selling books, including The Dragons of Eden, which won him a Pulitzer Prize in 1977, he was asked to write and narrate the show. It was targeted to a general audience of viewers, whom Sagan felt had lost interest in science, partly due to a stifled educational system. Each of the 13 episodes was created to focus on a particular subject or person, thereby demonstrating the synergy of the universe. They covered a wide range of scientific subjects including the origin of life and a perspective of humans' place on Earth.
The show won an Emmy, along with a Peabody Award, and transformed Sagan from an obscure astronomer into a pop-culture icon. Time magazine ran a cover story about Sagan soon after the show broadcast, referring to him as "creator, chief writer and host-narrator of the show". In 2000, "Cosmos" was released on a remastered set of DVDs. "Billions and billions" Sagan was invited to frequent appearances on The Tonight Show Starring Johnny Carson. After Cosmos aired, he became associated with the catchphrase "billions and billions," although he never actually used the phrase in the Cosmos series. He rather used the term "billions upon billions." Carson, however, would sometimes use the phrase during his parodies of Sagan. Sagan unit As a humorous tribute to Sagan and his association with the catchphrase "billions and billions", a sagan has been defined as a unit of measurement equivalent to a very large number – technically at least four billion (two billion plus two billion) – of anything. Sagan's number Sagan's number is the number of stars in the observable universe. This number is reasonably well defined, because it is known what stars are and what the observable universe is, but its value is highly uncertain. In 1980, Sagan estimated it to be 10 sextillion in short scale (1022). In 2003, it was estimated to be 70 sextillion (7 × 1022). In 2010, it was estimated to be 300 sextillion (3 × 1023). Scientific and critical thinking advocacy Sagan's ability to convey his ideas allowed many people to understand the cosmos better—simultaneously emphasizing the value and worthiness of the human race, and the relative insignificance of the Earth in comparison to the Universe. He delivered the 1977 series of Royal Institution Christmas Lectures in London. Sagan was a proponent of the search for extraterrestrial life. He urged the scientific community to listen with radio telescopes for signals from potential intelligent extraterrestrial life-forms. 
Sagan was so persuasive that by 1982 he was able to get a petition advocating SETI published in the journal Science, signed by 70 scientists, including seven Nobel Prize winners. This signaled a tremendous increase in the respectability of a then-controversial field. Sagan also helped Frank Drake write the Arecibo message, a radio message beamed into space from the Arecibo radio telescope on November 16, 1974, aimed at informing potential extraterrestrials about Earth. Sagan was editor-in-chief of the professional planetary research journal Icarus for 12 years. He co-founded The Planetary Society and was a member of the SETI Institute Board of Trustees. Sagan served as Chairman of the Division for Planetary Sciences of the American Astronomical Society, as President of the Planetology Section of the American Geophysical Union, and as Chairman of the Astronomy Section of the American Association for the Advancement of Science (AAAS). At the height of the Cold War, Sagan became involved in nuclear disarmament efforts by promoting hypotheses on the effects of nuclear war, when Paul Crutzen's "Twilight at Noon" concept suggested that a substantial nuclear exchange could trigger a nuclear twilight and upset the delicate balance of life on Earth by cooling the surface. In 1983 he was one of five authors—the "S"—in the follow-up "TTAPS" model (as the research article came to be known), which contained the first use of the term "nuclear winter", which his colleague Richard P. Turco had coined. In 1984 he co-authored the book The Cold and the Dark: The World after Nuclear War and in 1990 the book A Path Where No Man Thought: Nuclear Winter and the End of the Arms Race, which explains the nuclear-winter hypothesis and advocates nuclear disarmament. Sagan received a great deal of skepticism and disdain for the use of media to disseminate a very uncertain hypothesis. 
A personal correspondence with nuclear physicist Edward Teller around 1983 began amicably, with Teller expressing support for continued research to ascertain the credibility of the winter hypothesis. However, Sagan and Teller's correspondence would ultimately result in Teller writing: "A propagandist is one who uses incomplete information to produce maximum persuasion. I can compliment you on being, indeed, an excellent propagandist, remembering that a propagandist is the better the less he appears to be one". Biographers of Sagan would also comment that from a scientific viewpoint, nuclear winter was a low point for Sagan, although, politically speaking, it popularized his image amongst the public. The adult Sagan remained a fan of science fiction, although disliking stories that were not realistic (such as ignoring the inverse-square law) or, he said, did not include "thoughtful pursuit of alternative futures". He wrote books to popularize science, such as Cosmos, which reflected and expanded upon some of the themes of A Personal Voyage and became the best-selling science book ever published in English; The Dragons of Eden: Speculations on the Evolution of Human Intelligence, which won a Pulitzer Prize; and Broca's Brain: Reflections on the Romance of Science. Sagan also wrote the best-selling science fiction novel Contact in 1985, based on a film treatment he wrote with his wife, Ann Druyan, in 1979, but he did not live to see the book's 1997 motion-picture adaptation, which starred Jodie Foster and won the 1998 Hugo Award for Best Dramatic Presentation. Sagan wrote a sequel to Cosmos, Pale Blue Dot: A Vision of the Human Future in Space, which was selected as a notable book of 1995 by The New York Times. He appeared on PBS's Charlie Rose program in January 1995. Sagan also wrote the introduction for Stephen Hawking's bestseller A Brief History of Time. 
Sagan was also known for his popularization of science, his efforts to increase scientific understanding among the general public, and his positions in favor of scientific skepticism and against pseudoscience, such as his debunking of the Betty and Barney Hill abduction. To mark the tenth anniversary of Sagan's death, David Morrison, a former student of Sagan, recalled "Sagan's immense contributions to planetary research, the public understanding of science, and the skeptical movement" in Skeptical Inquirer. Following Saddam Hussein's threats to light Kuwait's oil wells on fire in response to any physical challenge to Iraqi control of the oil assets, Sagan, together with his "TTAPS" colleagues and Paul Crutzen, warned in January 1991 in The Baltimore Sun and Wilmington Morning Star newspapers that if the fires were left to burn over a period of several months, enough smoke from the 600 or so 1991 Kuwaiti oil fires "might get so high as to disrupt agriculture in much of South Asia ..." and that this possibility should "affect the war plans"; these claims were also the subject of a televised debate between Sagan and physicist Fred Singer on January 22, aired on the ABC News program Nightline. In the televised debate, Sagan argued that the effects of the smoke would be similar to the effects of a nuclear winter, with Singer arguing to the contrary. After the debate, the fires burnt for many months before extinguishing efforts were complete. The smoke did not produce continental-sized cooling. Sagan later conceded in The Demon-Haunted World that the prediction did not turn out to be correct: "it was pitch black at noon and temperatures dropped 4–6 °C over the Persian Gulf, but not much smoke reached stratospheric altitudes and Asia was spared". 
In his later years Sagan advocated the creation of an organized search for asteroids/near-Earth objects (NEOs) that might impact the Earth, while urging that development of the technological methods needed to defend against them be forestalled or postponed. He argued that all of the numerous methods proposed to alter the orbit of an asteroid, including the employment of nuclear detonations, created a deflection dilemma: if the ability to deflect an asteroid away from the Earth exists, then one would also have the ability to divert a non-threatening object towards Earth, creating an immensely destructive weapon. In a 1994 paper he co-authored, he ridiculed a 3-day long "Near-Earth Object Interception Workshop" held by Los Alamos National Laboratory (LANL) in 1993 that did not, "even in passing", state that such interception and deflection technologies could have these "ancillary dangers". Sagan remained hopeful that the natural NEO impact threat and the intrinsically double-edged essence of the methods to prevent these threats would serve as a "new and potent motivation to maturing international relations". He later acknowledged that, with sufficient international oversight, a "work our way up" approach to implementing nuclear explosive deflection methods could be fielded in the future and, once sufficient knowledge was gained, used to aid in mining asteroids. His interest in the use of nuclear detonations in space grew out of his work in 1958 for the Armour Research Foundation's Project A119, concerning the possibility of detonating a nuclear device on the lunar surface. Sagan was a critic of Plato, having said of the ancient Greek philosopher: "Science and mathematics were to be removed from the hands of the merchants and the artisans. This tendency found its most effective advocate in a follower of Pythagoras named Plato", and that Plato believed that ideas were far more real than the natural world. 
He advised the astronomers not to waste their time observing the stars and planets. It was better, he believed, just to think about them. Plato expressed hostility to observation and experiment. He taught contempt for the real world and disdain for the practical application of scientific knowledge. Plato's followers succeeded in extinguishing the light of science and experiment that had been kindled by Democritus and the other Ionians. In 1995 (as part of his book The Demon-Haunted World) Sagan popularized a set of tools for skeptical thinking called the "baloney detection kit", a phrase first coined by Arthur Felberbaum, a friend of his wife Ann Druyan. Popularizing science Speaking about his activities in popularizing science, Sagan said that there were at least two reasons for scientists to communicate the purposes of science and its contemporary state to the public. Simple self-interest was one: much of the funding for science came from the public, and the public therefore had the right to know how the money was being spent. If scientists increased public admiration for science, there was a good chance of having more public supporters. The other reason was the excitement of communicating one's own excitement about science to others. Following the success of Cosmos, Sagan set up his own publishing firm, Cosmos Store, in order to publish science books for the general public. It was not successful. Criticisms While Sagan was widely adored by the general public, his reputation in the scientific community was more polarized. Critics sometimes characterized his work as fanciful, non-rigorous, and self-aggrandizing, and others complained in his later years that he neglected his role as a faculty member to foster his celebrity status. One of Sagan's harshest critics, Harold Urey, felt that Sagan was getting too much publicity for a scientist and was treating some scientific theories too casually. Urey and Sagan were said to have different philosophies of science, according to Davidson. 
While Urey was an "old-time empiricist" who avoided theorizing about the unknown, Sagan was by contrast willing to speculate openly about such matters. Fred Whipple wanted Harvard to keep Sagan there, but learned that because Urey was a Nobel laureate, his opinion was an important factor in Harvard denying Sagan tenure. Sagan's Harvard friend Lester Grinspoon also stated: "I know Harvard well enough to know there are people there who certainly do not like people who are outspoken." Some, like Urey, later came to realize that Sagan's popular brand of scientific advocacy was beneficial to the science as a whole. Urey especially liked Sagan's 1977 book The Dragons of Eden and wrote Sagan with his opinion: "I like it very much and am amazed that someone like you has such an intimate knowledge of the various features of the problem... I congratulate you... You are a man of many talents." Sagan was accused of borrowing some ideas of others for his own benefit and countered these claims by explaining that the misappropriation was an unfortunate side effect of his role as a science communicator and explainer, and that he attempted to give proper credit whenever possible. Social concerns Sagan believed that the Drake equation, on substitution of reasonable estimates, suggested that a large number of extraterrestrial civilizations would form, but that the lack of evidence of such civilizations highlighted by the Fermi paradox suggests technological civilizations tend to self-destruct. This stimulated his interest in identifying and publicizing ways that humanity could destroy itself, with the hope of avoiding such a cataclysm and eventually becoming a spacefaring species. Sagan's deep concern regarding the potential destruction of human civilization in a nuclear holocaust was conveyed in a memorable cinematic sequence in the final episode of Cosmos, called "Who Speaks for Earth?" 
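The Drake equation referred to above multiplies a chain of factors to estimate the number of communicating civilizations. A hedged sketch follows; the equation's form is standard, but every parameter value below is an illustrative placeholder, not an estimate attributed to Sagan or Drake:

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L.
# All parameter values used below are illustrative placeholders only,
# not figures attributed to Sagan or Drake.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable, communicating civilizations."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Placeholders: 1 star formed per year, half with planets, 2 habitable
# planets each, life always arises, 10% develop intelligence, 10% of
# those communicate, and civilizations broadcast for 1,000 years.
N = drake(R_star=1.0, f_p=0.5, n_e=2, f_l=1.0, f_i=0.1, f_c=0.1, L=1000)
print(round(N))  # roughly 10 civilizations under these assumptions
```

The tension Sagan drew on is visible in the structure: large N follows from optimistic factors, so the Fermi paradox's empty sky pushes suspicion onto L, the lifetime of a technological civilization.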
Sagan had already resigned from the Air Force Scientific Advisory Board's UFO investigating Condon Committee and voluntarily surrendered his top-secret clearance in protest over the Vietnam War. Following his marriage to his third wife (novelist Ann Druyan) in June 1981, Sagan became more politically active—particularly in opposing escalation of the nuclear arms race under President Ronald Reagan. In March 1983, Reagan announced the Strategic Defense Initiative—a multibillion-dollar project to develop a comprehensive defense against attack by nuclear missiles, which was quickly dubbed the "Star Wars" program. Sagan spoke out against the project, arguing that it was technically impossible to develop a system with the level of perfection required, and far more expensive to build such a system than it would be for an enemy to defeat it through decoys and other means—and that its construction would seriously destabilize the "nuclear balance" between the United States and the Soviet Union, making further progress toward nuclear disarmament impossible. When Soviet leader Mikhail Gorbachev declared a unilateral moratorium on the testing of nuclear weapons, which would begin on August 6, 1985—the 40th anniversary of the atomic bombing of Hiroshima—the Reagan administration dismissed the dramatic move as nothing more than propaganda and refused to follow suit. In response, US anti-nuclear and peace activists staged a series of protest actions at the Nevada Test Site, beginning on Easter Sunday in 1986 and continuing through 1987. Hundreds of people in the "Nevada Desert Experience" group were arrested, including Sagan, who was arrested on two separate occasions as he climbed over a chain-link fence at the test site during the underground Operation Charioteer and United States's Musketeer nuclear test series of detonations. 
Sagan was also a vocal advocate of the controversial notion of testosterone poisoning, arguing in 1992 that human males could become gripped by an "unusually severe [case of] testosterone poisoning" and this could compel them to become genocidal. In his review of Moondance magazine writer Daniela Gioseffi's 1990 book Women on War, he argues that females are the only half of humanity "untainted by testosterone poisoning". One chapter of his 1993 book Shadows of Forgotten Ancestors is dedicated to testosterone and its alleged poisonous effects. In a 1989 interview, Ted Turner asked Carl Sagan whether he believed in socialism; he responded: "I'm not sure what a socialist is. But I believe the government has a responsibility to care for the people... I'm talking about making the people self-reliant." Personal life and beliefs Sagan was married three times. In 1957, he married biologist Lynn Margulis. The couple had two children, Jeremy and Dorion Sagan. After Sagan and Margulis divorced, he married artist Linda Salzman in 1968 and they also had a child together, Nick Sagan. During these marriages, Carl Sagan focused heavily on his career, a factor which may have contributed to Sagan's first divorce. In 1981, Sagan married author Ann Druyan and they later had two children, Alexandra (known as Sasha) and Samuel Sagan. Carl Sagan and Druyan remained married until his death in 1996. While teaching at Cornell, he lived in an Egyptian revival house in Ithaca perched on the edge of a cliff that had formerly been the headquarters of a Cornell secret society. While there he drove a purple 1970 Porsche 911 with the license plate PHOBOS. He also owned an orange Porsche 914. In 1994, engineers at Apple Computer code-named the Power Macintosh 7100 "Carl Sagan" in the hope that Apple would make "billions and billions" with the sale of the PowerMac 7100. 
The name was only used internally, but Sagan was concerned that it would become a product endorsement and sent Apple a cease-and-desist letter. Apple complied, but engineers retaliated by changing the |
if the missiles were removed. On October 26 at 6:00 pm EDT, the State Department started receiving a message that appeared to be written personally by Khrushchev. It was Saturday 2:00 am in Moscow. The long letter took several minutes to arrive, and it took translators additional time to translate and transcribe it. Robert F. Kennedy described the letter as "very long and emotional". Khrushchev reiterated the basic outline that had been stated to Scali earlier in the day: "I propose: we, for our part, will declare that our ships bound for Cuba are not carrying any armaments. You will declare that the United States will not invade Cuba with its troops and will not support any other forces which might intend to invade Cuba. Then the necessity of the presence of our military specialists in Cuba will disappear." At 6:45 pm EDT, news of Fomin's offer to Scali was finally heard and was interpreted as a "set up" for the arrival of Khrushchev's letter. The letter was then considered official and accurate, although it was later learned that Fomin was almost certainly operating of his own accord without official backing. Additional study of the letter was ordered and continued into the night. Crisis continues Castro, on the other hand, was convinced that an invasion of Cuba was soon at hand, and on October 26, he sent a telegram to Khrushchev that appeared to call for a pre-emptive nuclear strike on the US in case of attack. In a 2010 interview, Castro expressed regret about his 1962 stance on first use: "After I've seen what I've seen, and knowing what I know now, it wasn't worth it at all." Castro also ordered all anti-aircraft weapons in Cuba to fire on any US aircraft; previous orders had been to fire only on groups of two or more. At 6:00 am EDT on October 27, the CIA delivered a memo reporting that three of the four missile sites at San Cristobal and both sites at Sagua la Grande appeared to be fully operational. 
It also noted that the Cuban military continued to organise for action but was under order not to initiate action unless attacked. At 9:00 am EDT on October 27, Radio Moscow began broadcasting a message from Khrushchev. Contrary to the letter of the night before, the message offered a new trade: the missiles on Cuba would be removed in exchange for the removal of the Jupiter missiles from Italy and Turkey. At 10:00 am EDT, the executive committee met again to discuss the situation and came to the conclusion that the change in the message was because of internal debate between Khrushchev and other party officials in the Kremlin. Kennedy realised that he would be in an "insupportable position if this becomes Khrushchev's proposal" because the missiles in Turkey were not militarily useful and were being removed anyway and "It's gonna – to any man at the United Nations or any other rational man, it will look like a very fair trade." Bundy explained why Khrushchev's public acquiescence could not be considered: "The current threat to peace is not in Turkey, it is in Cuba." McNamara noted that another tanker, the Grozny, was still some distance out and should be intercepted. He also noted that they had not made the Soviets aware of the blockade line and suggested relaying that information to them via U Thant at the United Nations. While the meeting progressed, at 11:03 am EDT a new message began to arrive from Khrushchev. The message stated, in part: "You are disturbed over Cuba. You say that this disturbs you because it is ninety-nine miles by sea from the coast of the United States of America. But... you have placed destructive missile weapons, which you call offensive, in Italy and Turkey, literally next to us.... I therefore make this proposal: We are willing to remove from Cuba the means which you regard as offensive.... Your representatives will make a declaration to the effect that the United States... will remove its analogous means from Turkey... 
and after that, persons entrusted by the United Nations Security Council could inspect on the spot the fulfillment of the pledges made." The executive committee continued to meet through the day. Throughout the crisis, Turkey had repeatedly stated that it would be upset if the Jupiter missiles were removed. Italy's Prime Minister Amintore Fanfani, who was also Foreign Minister ad interim, offered to allow withdrawal of the missiles deployed in Apulia as a bargaining chip. He gave the message to one of his most trusted friends, Ettore Bernabei, general manager of RAI-TV, to convey to Arthur M. Schlesinger Jr. Bernabei was in New York to attend an international conference on satellite TV broadcasting. Unknown to the Soviets, the US regarded the Jupiter missiles as obsolete and already supplanted by the Polaris nuclear ballistic submarine missiles. On the morning of October 27, a U-2F (the third CIA U-2A, modified for air-to-air refuelling) piloted by USAF Major Rudolf Anderson, departed its forward operating location at McCoy AFB, Florida. At approximately 12:00 pm EDT, the aircraft was struck by an SA-2 surface-to-air missile launched from Cuba. The aircraft crashed, and Anderson was killed. Stress in negotiations between the Soviets and the US intensified; only later was it assumed that the decision to fire the missile was made locally by an undetermined Soviet commander, acting on his own authority. Later that day, at about 3:41 pm EDT, several US Navy RF-8A Crusader aircraft, on low-level photo-reconnaissance missions, were fired upon. On October 28, 1962, Khrushchev told his son Sergei that the shooting down of Anderson's U-2 was by the "Cuban military at the direction of Raul Castro". At 4:00 pm EDT, Kennedy recalled members of EXCOMM to the White House and ordered that a message should immediately be sent to U Thant asking the Soviets to suspend work on the missiles while negotiations were carried out. 
During the meeting, General Maxwell Taylor delivered the news that the U-2 had been shot down. Kennedy had earlier claimed he would order an attack on such sites if fired upon, but he decided not to act unless another attack was made. Ellsberg said that Robert Kennedy (RFK) told him in 1964 that after the U-2 was shot down and the pilot killed, he (RFK) told Soviet ambassador Dobrynin, "You have drawn first blood ... . [T]he president had decided against advice ... not to respond militarily to that attack, but he [Dobrynin] should know that if another plane was shot at, ... we would take out all the SAMs and antiaircraft ... . And that would almost surely be followed by an invasion." Drafting response Emissaries sent by both Kennedy and Khrushchev agreed to meet at the Yenching Palace Chinese restaurant in the Cleveland Park neighbourhood of Washington, DC, on Saturday evening, October 27. Kennedy suggested taking Khrushchev's offer to trade away the missiles. Unknown to most members of the EXCOMM, but with the support of his brother the president, Robert Kennedy had been meeting with the Soviet Ambassador Dobrynin in Washington to discover whether the intentions were genuine. The EXCOMM was generally against the proposal because it would undermine NATO's authority, and the Turkish government had repeatedly stated it was against any such trade. As the meeting progressed, a new plan emerged, and Kennedy was slowly persuaded. The new plan called for him to ignore the latest message and instead to return to Khrushchev's earlier one. Kennedy was initially hesitant, feeling that Khrushchev would no longer accept the deal because a new one had been offered, but Llewellyn Thompson argued that it was still possible. White House Special Counsel and Adviser Ted Sorensen and Robert Kennedy left the meeting and returned 45 minutes later, with a draft letter to that effect. The President made several changes, had it typed, and sent it. 
After the EXCOMM meeting, a smaller meeting continued in the Oval Office. The group argued that the letter should be underscored with an oral message to Dobrynin that stated that if the missiles were not withdrawn, military action would be used to remove them. Rusk added one proviso that no part of the language of the deal would mention Turkey, but there would be an understanding that the missiles would be removed "voluntarily" in the immediate aftermath. The president agreed, and the message was sent. At Rusk's request, Fomin and Scali met again. Scali asked why the two letters from Khrushchev were so different, and Fomin claimed it was because of "poor communications". Scali replied that the claim was not credible and shouted that he thought it was a "stinking double cross". He went on to claim that an invasion was only hours away, and Fomin stated that a response to the US message was expected from Khrushchev shortly and urged Scali to tell the State Department that no treachery was intended. Scali said that he did not think anyone would believe him, but he agreed to deliver the message. The two went their separate ways, and Scali immediately typed out a memo for the EXCOMM. Within the US establishment, it was well understood that ignoring the second offer and returning to the first put Khrushchev in a terrible position. Military preparations continued, and all active duty Air Force personnel were recalled to their bases for possible action. Robert Kennedy later recalled the mood: "We had not abandoned all hope, but what hope there was now rested with Khrushchev's revising his course within the next few hours. It was a hope, not an expectation. The expectation was military confrontation by Tuesday (October 30), and possibly tomorrow (October 29) ...." At 8:05 pm EDT, the letter drafted earlier in the day was delivered. 
The message read, "As I read your letter, the key elements of your proposals—which seem generally acceptable as I understand them—are as follows: 1) You would agree to remove these weapons systems from Cuba under appropriate United Nations observation and supervision; and undertake, with suitable safe-guards, to halt the further introduction of such weapon systems into Cuba. 2) We, on our part, would agree—upon the establishment of adequate arrangements through the United Nations, to ensure the carrying out and continuation of these commitments (a) to remove promptly the quarantine measures now in effect and (b) to give assurances against the invasion of Cuba." The letter was also released directly to the press to ensure it could not be "delayed". With the letter delivered, a deal was on the table. As Robert Kennedy noted, there was little expectation it would be accepted. At 9:00 pm EDT, the EXCOMM met again to review the actions for the following day. Plans were drawn up for air strikes on the missile sites as well as other economic targets, notably petroleum storage. McNamara stated that they had to "have two things ready: a government for Cuba, because we're going to need one; and secondly, plans for how to respond to the Soviet Union in Europe, because sure as hell they're going to do something there". At 12:12 am EDT, on October 27, the US informed its NATO allies that "the situation is growing shorter.... the United States may find it necessary within a very short time in its interest and that of its fellow nations in the Western Hemisphere to take whatever military action may be necessary." To add to the concern, at 6:00 am, the CIA reported that all missiles in Cuba were ready for action. 
On October 27, Khrushchev also received a letter from Castro, now known as the Armageddon Letter (dated the day before), which was interpreted as urging the use of nuclear force in the event of an attack on Cuba: "I believe the imperialists' aggressiveness is extremely dangerous and if they actually carry out the brutal act of invading Cuba in violation of international law and morality, that would be the moment to eliminate such danger forever through an act of clear legitimate defense, however harsh and terrible the solution would be," Castro wrote. Averted nuclear launch Later that same day, what the White House later called "Black Saturday", the US Navy dropped a series of "signalling" depth charges (practice depth charges the size of hand grenades) on a Soviet submarine (B-59) at the blockade line, unaware that it was armed with a nuclear-tipped torpedo with orders that allowed it to be used if the submarine was damaged by depth charges or surface fire. As the submarine was too deep to monitor any radio traffic, the captain of the B-59, Valentin Grigorievitch Savitsky, decided that a war might already have started and wanted to launch a nuclear torpedo. The decision to launch these normally only required agreement from the two commanding officers on board, the Captain and the Political Officer. However, the commander of the submarine Flotilla, Vasily Arkhipov, was aboard B-59 and so he also had to agree. Arkhipov objected and so the nuclear launch was narrowly averted. On the same day a U-2 spy plane made an accidental, unauthorised ninety-minute overflight of the Soviet Union's far eastern coast. The Soviets responded by scrambling MiG fighters from Wrangel Island; in turn, the Americans launched F-102 fighters armed with nuclear air-to-air missiles over the Bering Sea. 
Crisis ends On Saturday, October 27, after much deliberation between the Soviet Union and Kennedy's cabinet, Kennedy secretly agreed to remove all missiles set in Turkey and possibly southern Italy, the former on the border of the Soviet Union, in exchange for Khrushchev removing all missiles in Cuba. There is some dispute as to whether removing the missiles from Italy was part of the secret agreement. Khrushchev wrote in his memoirs that it was, and when the crisis had ended McNamara gave the order to dismantle the missiles in both Italy and Turkey. At this point, Khrushchev knew things the US did not. First, that the shooting down of the U-2 by a Soviet missile violated direct orders from Moscow, and Cuban anti-aircraft fire against other US reconnaissance aircraft also violated direct orders from Khrushchev to Castro. Second, the Soviets already had 162 nuclear warheads on Cuba that the US did not then believe were there. Third, the Soviets and Cubans on the island would almost certainly have responded to an invasion by using those nuclear weapons, even though Castro believed that every human in Cuba would likely die as a result. Khrushchev also knew but may not have considered the fact that he had submarines armed with nuclear weapons that the US Navy may not have known about. Khrushchev knew he was losing control. President Kennedy had been told in early 1961 that a nuclear war would likely kill a third of humanity, with most or all of those deaths concentrated in the US, the USSR, Europe and China; Khrushchev may well have received similar reports from his military. With this background, when Khrushchev heard Kennedy's threats relayed by Robert Kennedy to Soviet Ambassador Dobrynin, he immediately drafted his acceptance of Kennedy's latest terms from his dacha without involving the Politburo, as he had previously, and had them immediately broadcast over Radio Moscow, which he believed the US would hear. 
In that broadcast at 9:00 am EST, on October 28, Khrushchev stated that "the Soviet government, in addition to previously issued instructions on the cessation of further work at the building sites for the weapons, has issued a new order on the dismantling of the weapons which you describe as 'offensive' and their crating and return to the Soviet Union." At 10:00 am, October 28, Kennedy first learned of Khrushchev's solution to the crisis: the US would remove the 15 Jupiters in Turkey and the Soviets would remove the rockets from Cuba. Khrushchev had made the offer in a public statement for the world to hear. Despite almost solid opposition from his senior advisers, Kennedy quickly embraced the Soviet offer. "This is a pretty good play of his," Kennedy said, according to a tape recording that he made secretly of the Cabinet Room meeting. Kennedy had deployed the Jupiters in March of that year, causing a stream of angry outbursts from Khrushchev. "Most people will think this is a rather even trade and we ought to take advantage of it," Kennedy said. Vice President Lyndon Johnson was the first to endorse the missile swap but others continued to oppose the offer. Finally, Kennedy ended the debate. "We can't very well invade Cuba with all its toil and blood," Kennedy said, "when we could have gotten them out by making a deal on the same missiles on Turkey. If that's part of the record, then you don't have a very good war." Kennedy immediately responded to Khrushchev's letter, issuing a statement calling it "an important and constructive contribution to peace". He continued this with a formal letter. Kennedy's planned statement would also contain suggestions he had received from his adviser Schlesinger Jr. in a "Memorandum for the President" describing the "Post Mortem on Cuba". 
Kennedy's October 28 telephone conversation with Eisenhower revealed that the President thought the crisis would result in the two superpowers being "toe to toe" in Berlin by the end of the following month. He also claimed that the Soviet leader had subsequently offered to withdraw from Cuba in exchange for the withdrawal of missiles from Turkey, but they "couldn't get into that deal." When former US President Harry Truman called President Kennedy on the day of Khrushchev's offer, the President informed him that his Administration had rejected the Soviet leader's offer to withdraw missiles from Turkey and was planning on using the Soviet setback in Cuba to escalate tensions in Berlin. The US continued the blockade; in the following days, aerial reconnaissance proved that the Soviets were making progress in removing the missile systems. The 42 missiles and their support equipment were loaded onto eight Soviet ships. On November 2, 1962, Kennedy addressed the US via radio and television broadcasts regarding the dismantlement process of the Soviet R-12 missile bases located in the Caribbean region. The ships left Cuba between November 5 and 9. The US made a final visual check as each of the ships passed the blockade line. Further diplomatic efforts were required to remove the Soviet Il-28 bombers, and they were loaded on three Soviet ships on December 5 and 6. Concurrent with the Soviet commitment on the Il-28s, the US government announced the end of the blockade from 6:45 pm EST on November 20, 1962. At the time when the Kennedy administration thought that the Cuban Missile Crisis was resolved, nuclear tactical rockets stayed in Cuba, since they were not part of the Kennedy–Khrushchev understandings and the Americans did not know about them. The Soviets changed their minds, fearing possible future Cuban militant steps, and on November 22, 1962, Deputy Premier of the Soviet Union Anastas Mikoyan told Castro that the rockets with the nuclear warheads were being removed as well.
In his negotiations with the Soviet Ambassador Anatoly Dobrynin, Robert Kennedy informally proposed that the Jupiter missiles in Turkey would be removed "within a short time after this crisis was over". Under an operation code-named Operation Pot Pie, the removal of the Jupiters from Italy and Turkey began on April 1 and was completed by April 24, 1963. The initial plans were to recycle the missiles for use in other programs, but NASA and the USAF were not interested in retaining the missile hardware. The missile bodies were destroyed on site; warheads, guidance packages, and launching equipment worth $14 million were returned to the United States. The practical effect of the Kennedy–Khrushchev Pact was that the US would remove its rockets from Italy and Turkey and that the Soviets had no intention of resorting to nuclear war if they were out-gunned by the US. Because the withdrawal of the Jupiter missiles from NATO bases in Italy and Turkey was not made public at the time, Khrushchev appeared to have lost the conflict and become weakened. The perception was that Kennedy had won the contest between the superpowers and that Khrushchev had been humiliated. Both Kennedy and Khrushchev took every step to avoid full conflict despite pressures from their respective governments. Khrushchev held power for another two years. Nuclear forces By the time of the crisis in October 1962, the total number of nuclear weapons in the stockpiles of each country numbered approximately 26,400 for the United States and 3,300 for the Soviet Union. For the U.S., around 3,500 (with a combined yield of approximately 6,300 megatons) would have been used in attacking the Soviet Union. The Soviets had considerably less strategic firepower at their disposal: some 300–320 bombs and warheads, without submarine-based weapons in a position to threaten the U.S.
mainland and most of their intercontinental delivery systems based on bombers that would have difficulty penetrating North American air defence systems. However, they had already moved 158 warheads to Cuba; between 95 and 100 would have been ready for use if the U.S. had invaded Cuba, most of which were short-ranged. The U.S. had approximately 4,375 nuclear weapons deployed in Europe, most of which were tactical weapons such as nuclear artillery, with around 450 of them for ballistic missiles, cruise missiles, and aircraft; the Soviets had more than 550 similar weapons in Europe.

United States
SAC
ICBM: 182 (at peak alert); 121 Atlas D/E/F, 53 Titan 1, 8 Minuteman 1A
Bombers: 1,595; 880 B-47, 639 B-52, 76 B-58 (1,479 bombers and 1,003 refuelling tankers available at peak alert)
Atlantic Command
112 UGM-27 Polaris in seven SSBNs (16 each); five submarines with Polaris A1 and two with A2
Pacific Command
4–8 Regulus cruise missiles
16 Mace cruise missiles
3 aircraft carriers with some 40 bombs each
Land-based aircraft with some 50 bombs
European Command
IRBM: 105; 60 Thor (UK), 45 Jupiter (30 Italy, 15 Turkey)
48–90 Mace cruise missiles
2 U.S. Sixth Fleet aircraft carriers with some 40 bombs each
Land-based aircraft with some 50 bombs

Soviet Union
Strategic (for use against North America):
ICBM: 42; four SS-6/R-7A at Plesetsk with two in reserve at Baikonur, 36 SS-7/R-16 with 26 in silos and ten on open launch pads
Bombers: 160 (readiness unknown); 100 Tu-95 Bear, 60 3M Bison B
Regional (mostly targeting Europe, and others targeting U.S. bases in east Asia):
MRBM: 528 SS-4/R-12, 492 at soft launch sites and 36 at hard launch sites (approximately six to eight R-12s were operational in Cuba, capable of striking the U.S. mainland at any moment until the crisis was resolved)
IRBM: 28 SS-5/R-14
Unknown number of Tu-16 Badger, Tu-22 Blinder, and MiG-21 aircraft tasked with nuclear strike missions

Aftermath Soviet leadership The enormity of how close the world came to thermonuclear war impelled Khrushchev to propose a far-reaching easing of tensions with the US. In a letter to President Kennedy dated October 30, 1962, Khrushchev outlined a range of bold initiatives to forestall the possibility of a further nuclear crisis, including proposing a non-aggression treaty between the North Atlantic Treaty Organization (NATO) and the Warsaw Pact or even disbanding these military blocs, a treaty to cease all nuclear weapons testing and even the elimination of all nuclear weapons, resolution of the hot-button issue of Germany by both East and West formally accepting the existence of West Germany and East Germany, and US recognition of the government of mainland China. The letter invited counter-proposals and further exploration of these and other issues through peaceful negotiations. Khrushchev invited Norman Cousins, the editor of a major US periodical and an anti-nuclear weapons activist, to serve as liaison with President Kennedy, and Cousins met with Khrushchev for four hours in December 1962. Kennedy's response to Khrushchev's proposals was lukewarm, but Kennedy expressed to Cousins that he felt constrained in exploring these issues because of pressure from hardliners in the US national security apparatus. The US and the USSR did shortly thereafter agree on a treaty banning atmospheric testing of nuclear weapons, known as the "Partial Nuclear Test Ban Treaty". Shortly after the crisis, the US and the Soviet Union also created the Moscow–Washington hotline, a direct communications link between Moscow and Washington, so that the leaders of the two Cold War powers could communicate directly to resolve such crises.
The compromise embarrassed Khrushchev and the Soviet Union because the withdrawal of US missiles from Italy and Turkey was a secret deal between Kennedy and Khrushchev. Khrushchev had gone to Kennedy because he thought that the crisis was getting out of hand, but the Soviets were seen as retreating from circumstances that they had started. Khrushchev's fall from power two years later was in part because of the Soviet Politburo's embarrassment at both Khrushchev's eventual concessions to the US and his ineptitude in precipitating the crisis in the first place. According to Dobrynin, the top Soviet leadership took the Cuban outcome as "a blow to its prestige bordering on humiliation". Cuban leadership Cuba perceived the outcome as a betrayal by the Soviets, as decisions on how to resolve the crisis had been made exclusively by Kennedy and Khrushchev. Castro was especially upset that certain issues of interest to Cuba, such as the status of the US Naval Base in Guantánamo, were not addressed. That caused Cuban–Soviet relations to deteriorate for years to come. Romanian leadership During the crisis, Gheorghe Gheorghiu-Dej, general secretary of Romania's communist party, sent a letter to President Kennedy dissociating Romania from Soviet actions. This convinced the American administration of Bucharest's intention to detach itself from Moscow. US leadership The worldwide US Forces DEFCON 3 status was returned to DEFCON 4 on November 20, 1962. General Curtis LeMay told the President that the resolution of the crisis was the "greatest defeat in our history"; his was a minority position. He had pressed for an immediate invasion of Cuba as soon as the crisis began and still favoured invading Cuba even after the Soviets had withdrawn their missiles. Twenty-five years later, LeMay still believed that "We could have gotten not only the missiles out of Cuba, we could have gotten the Communists out of Cuba at that time."
At least four contingency strikes were armed and launched from Florida against Cuban airfields and suspected missile sites in 1963 and 1964, although all were diverted to the Pinecastle Range Complex after the planes passed Andros Island. Critics, including Seymour Melman and Seymour Hersh, suggested that the Cuban Missile Crisis encouraged the United States' use of military means, as in the later Vietnam War. Human casualties U-2 pilot Anderson's body was returned to the US and was buried with full military honours in South Carolina. He was the first recipient of the newly created Air Force Cross, which was awarded posthumously. Although Anderson was the only combatant fatality during the crisis, 11 crew members of three reconnaissance Boeing RB-47 Stratojets of the 55th Strategic Reconnaissance Wing were also killed in crashes during the period between September 27 and November 11, 1962. Seven crew died when a Military Air Transport Service Boeing C-135B Stratolifter delivering ammunition to Guantanamo Bay Naval Base stalled and crashed on approach on October 23. Later revelations Schlesinger, a historian and adviser to Kennedy, told National Public Radio in an interview on October 16, 2002, that Castro did not want the missiles, but Khrushchev pressured Castro to accept them. Castro was not completely happy with the idea, but the Cuban National Directorate of the Revolution accepted them, both to protect Cuba against US attack and to aid the Soviet Union. Schlesinger believed that when the missiles were withdrawn, Castro was more angry with Khrushchev than with Kennedy because Khrushchev had not consulted Castro before deciding to remove them. Although Castro was infuriated by Khrushchev, he planned on striking the US with the remaining missiles if an invasion of the island occurred.
In early 1992, it was confirmed that Soviet forces in Cuba had already received tactical nuclear warheads for their artillery rockets and Il-28 bombers when the crisis broke. Castro stated that he would have recommended their use if the US invaded, despite Cuba being destroyed. Arguably, the most dangerous moment in the crisis was not recognised until the Cuban Missile Crisis Havana conference in October 2002. There, the many veterans of the crisis in attendance all learned that on October 27, 1962, US Navy destroyers had tracked and dropped signalling depth charges (the size of hand grenades) on B-59, a Soviet Project 641 (NATO designation Foxtrot) submarine. Unknown to the US, it was armed with a 15-kiloton nuclear torpedo. Running out of air, the Soviet submarine was surrounded by American warships and desperately needed to surface. An argument broke out among three officers aboard B-59: submarine captain Valentin Savitsky, political officer Ivan Semonovich Maslennikov, and deputy brigade commander Captain 2nd rank (equivalent to a US Navy commander) Vasily Arkhipov. An exhausted Savitsky became furious and ordered that the nuclear torpedo on board be made combat-ready. Accounts differ about whether Arkhipov convinced Savitsky not to make the attack or whether Savitsky himself finally concluded that the only reasonable choice left open to him was to come to the surface. During the conference, McNamara stated that nuclear war had come much closer than people had thought. Thomas Blanton, director of the National Security Archive, said, "A guy called Vasili Arkhipov saved the world." Fifty years after the crisis, Graham T. Allison revisited its lessons. On October 13, 2012, BBC journalist Joe Matthews published the story behind the 100 tactical nuclear warheads mentioned by Allison.
Khrushchev feared that Castro's hurt pride and widespread Cuban indignation over the concessions he had made to Kennedy might lead to a breakdown of the agreement between the Soviet Union and the US. To prevent that, Khrushchev decided to offer to give Cuba more than 100 tactical nuclear weapons that had been shipped to Cuba along with the long-range missiles but, crucially, had escaped the notice of US intelligence. Khrushchev determined that because the Americans had not listed the missiles on their list of demands, keeping them in Cuba would be in the Soviet Union's interests. Anastas Mikoyan was tasked with the negotiations with Castro over the missile transfer deal that was designed to prevent a breakdown in the relations between Cuba and the Soviet Union. While in Havana, Mikoyan witnessed the mood swings and paranoia of Castro, who was convinced that Moscow had made the agreement with the US at the expense of Cuba's defence. Mikoyan, on his own initiative, decided that Castro and his military should not be given control of weapons with an explosive force equal to 100 Hiroshima-sized bombs under any circumstances. He defused the seemingly intractable situation, which risked re-escalating the crisis, on November 22, 1962. During a tense, four-hour meeting, Mikoyan convinced Castro that despite Moscow's desire to help, it would be in breach of an unpublished Soviet law, which did not actually exist, to transfer the missiles permanently into Cuban hands and provide them with an independent nuclear deterrent. Castro was forced to give way and, much to the relief of Khrushchev and the rest of the Soviet government, the tactical nuclear weapons were crated and returned by sea to the Soviet Union during December 1962. In popular culture The American popular media, especially television, made frequent use of the events of the missile crisis in both fictional and documentary forms. Jim Willis includes the Crisis as one of the 100 "media moments that changed America". 
Sheldon Stern finds that a half century later there are still many "misconceptions, half-truths, and outright lies" that have shaped media versions of what happened in the White House during those harrowing two weeks. Historian William Cohn argued in a 1976 article that television programs are typically the main source used by the American public to know about and interpret the past. According to Cold War historian Andrei Kozovoi, the Soviet media proved somewhat disorganised, as it was unable to generate a coherent popular history. Khrushchev lost power and was airbrushed out of the story. Cuba was no longer portrayed as a heroic David against the American Goliath. One contradiction that pervaded the Soviet media campaign was between the pacifistic rhetoric of the peace movement, which emphasised the horrors of nuclear war, and the militant insistence on preparing Soviet citizens for war against American aggression. Media representations

Non-fiction
Thirteen Days, Robert F. Kennedy's memoir of the crisis, posthumously released in 1969; it became the basis for numerous films and documentaries.
The Missiles of October, a 1974 TV docudrama about the crisis.
The Fog of War, a 2003 American documentary film about the life and times of former US Secretary of Defense Robert S. McNamara, directed by Errol Morris, which won that year's Academy Award for Best Documentary Feature.

Fiction
Topaz, a 1969 film by Alfred Hitchcock based on the 1967 novel by Leon Uris, set during the run-up to the crisis.
Matinee, a 1993 film starring John Goodman set during the Cuban Missile Crisis, in which an independent filmmaker decides to seize the opportunity to debut an atomic-themed film.
Thirteen Days, a 2000 docudrama directed by Roger Donaldson about the crisis, based on The Kennedy Tapes: Inside the White House During the Cuban Missile Crisis.
Command & Conquer: Red Alert 3, a real-time strategy video game.

Kennedy ordered a naval "quarantine" on October 22 to prevent further missiles from reaching Cuba.
By using the term "quarantine" rather than "blockade" (an act of war by legal definition), the United States was able to avoid the implications of a state of war. The US announced it would not permit offensive weapons to be delivered to Cuba and demanded that the weapons already in Cuba be dismantled and returned to the Soviet Union. After several days of tense negotiations, an agreement was reached between Kennedy and Khrushchev. Publicly, the Soviets would dismantle their offensive weapons in Cuba and return them to the Soviet Union, subject to United Nations verification, in exchange for a US public declaration and agreement not to invade Cuba again. Secretly, the United States agreed to dismantle all of the Jupiter MRBMs that had been deployed in Turkey against the Soviet Union. There has been debate over whether Italy was included in the agreement as well. While the Soviets dismantled their missiles, some Soviet bombers remained in Cuba, and the United States kept the naval quarantine in place until November 20 of that year. When all offensive missiles and the Ilyushin Il-28 light bombers had been withdrawn from Cuba, the blockade was formally ended on November 20, 1962. The negotiations between the United States and the Soviet Union pointed out the necessity of a quick, clear, and direct communication line between the two superpowers. As a result, the Moscow–Washington hotline was established. A series of agreements later reduced US–Soviet tensions for several years until both parties eventually resumed expanding their nuclear arsenals. Background Cuba and Berlin Wall With the end of World War II and the start of the Cold War, the United States had grown concerned about the expansion of communism. A Latin American country openly allying with the Soviet Union was regarded by the US as unacceptable.
It would, for example, defy the Monroe Doctrine, a US policy limiting US involvement in European colonies and European affairs but holding that the Western Hemisphere was in the US sphere of influence. The Kennedy administration had been publicly embarrassed by the failed Bay of Pigs Invasion in April 1961, which had been launched under President John F. Kennedy by CIA-trained forces of Cuban exiles. Afterward, former President Dwight Eisenhower told Kennedy that "the failure of the Bay of Pigs will embolden the Soviets to do something that they would otherwise not do." The half-hearted invasion left Soviet first secretary Nikita Khrushchev and his advisers with the impression that Kennedy was indecisive and, as one Soviet adviser wrote, "too young, intellectual, not prepared well for decision making in crisis situations... too intelligent and too weak". US covert operations against Cuba continued in 1961 with the unsuccessful Operation Mongoose. In addition, Khrushchev's impression of Kennedy's weaknesses was confirmed by the President's response during the Berlin Crisis of 1961, particularly to the building of the Berlin Wall. Speaking to Soviet officials in the aftermath of the crisis, Khrushchev asserted, "I know for certain that Kennedy doesn't have a strong background, nor, generally speaking, does he have the courage to stand up to a serious challenge." He also told his son Sergei that on Cuba, Kennedy "would make a fuss, make more of a fuss, and then agree". In January 1962, US Army General Edward Lansdale described plans to overthrow the Cuban government in a top-secret report (partially declassified 1989), addressed to Kennedy and officials involved with Operation Mongoose. CIA agents or "pathfinders" from the Special Activities Division were to be infiltrated into Cuba to carry out sabotage and organization, including radio broadcasts. 
In February 1962, the US launched an embargo against Cuba, and Lansdale presented a 26-page, top-secret timetable for implementation of the overthrow of the Cuban government, mandating guerrilla operations to begin in August and September. "Open revolt and overthrow of the Communist regime" would occur in the first two weeks of October. Missile gap When Kennedy ran for president in 1960, one of his key election issues was an alleged "missile gap" with the Soviets leading. Actually, the US at that time led the Soviets by a wide margin that would only increase. In 1961, the Soviets had only four intercontinental ballistic missiles (R-7 Semyorka). By October 1962, they may have had a few dozen, with some intelligence estimates as high as 75. The US, on the other hand, had 170 ICBMs and was quickly building more. It also had eight ballistic missile submarines, each with the capability to launch 16 Polaris missiles. Khrushchev increased the perception of a missile gap when he loudly boasted to the world that the Soviets were building missiles "like sausages", but the Soviet missiles' numbers and capabilities were nowhere close to his assertions. The Soviet Union had medium-range ballistic missiles in quantity, about 700 of them, but they were very unreliable and inaccurate. The US had a considerable advantage in total number of nuclear warheads (27,000 against 3,600) and in the technology required for their accurate delivery. The US also led in missile defensive capabilities, naval and air power; but the Soviets had a 2–1 advantage in conventional ground forces, more pronounced in field guns and tanks, particularly in the European theatre.
Soviet deployment of missiles in Cuba Justification In May 1962, Soviet First Secretary Nikita Khrushchev was persuaded by the idea of countering the US's growing lead in developing and deploying strategic missiles by placing Soviet intermediate-range nuclear missiles in Cuba, despite the misgivings of the Soviet Ambassador in Havana, Alexandr Ivanovich Alexeyev, who argued that Castro would not accept the deployment of the missiles. Khrushchev faced a strategic situation in which the US was perceived to have a "splendid first strike" capability that put the Soviet Union at a huge disadvantage. In 1962, the Soviets had only 20 ICBMs capable of delivering nuclear warheads to the US from inside the Soviet Union. The poor accuracy and reliability of the missiles raised serious doubts about their effectiveness. A newer, more reliable generation of ICBMs would become operational only after 1965. Therefore, Soviet nuclear capability in 1962 placed less emphasis on ICBMs than on medium- and intermediate-range ballistic missiles (MRBMs and IRBMs). Those missiles could hit American allies and most of Alaska from Soviet territory but not the contiguous United States. Graham Allison, the director of Harvard University's Belfer Center for Science and International Affairs, points out: "The Soviet Union could not right the nuclear imbalance by deploying new ICBMs on its own soil. In order to meet the threat it faced in 1962, 1963, and 1964, it had very few options. Moving existing nuclear weapons to locations from which they could reach American targets was one." A second reason that Soviet missiles were deployed to Cuba was that Khrushchev wanted to bring West Berlin, controlled by the Americans, British, and French but lying within Communist East Germany, into the Soviet orbit. The East Germans and Soviets considered Western control over a portion of Berlin a grave threat to East Germany. Khrushchev made West Berlin the central battlefield of the Cold War.
Khrushchev believed that if the US did nothing over the missile deployments in Cuba, he could muscle the West out of Berlin by using the missiles as a deterrent to Western countermeasures there. If the US tried to bargain with the Soviets after it became aware of the missiles, Khrushchev could demand trading the missiles for West Berlin. Since Berlin was strategically more important than Cuba, the trade would be a win for Khrushchev, as Kennedy recognized: "The advantage is, from Khrushchev's point of view, he takes a great chance but there are quite some rewards to it." Thirdly, from the perspective of the Soviet Union and of Cuba, it seemed that the United States wanted to increase its presence in Cuba. Through actions including the attempt to expel Cuba from the Organization of American States, the imposition of economic sanctions, the direct invasion of the island, and covert operations aimed at containing communism, the US appeared to be trying to overrun Cuba. To prevent this, the USSR would place missiles in Cuba and neutralise the threat, which would ultimately serve to secure Cuba against attack and keep the country in the Socialist Bloc. Another major reason Khrushchev planned to place missiles in Cuba undetected was to "level the playing field" with the evident American nuclear threat: the US had the upper hand, as it could launch missiles from Turkey and destroy the USSR before the latter would have a chance to react. After the deployment of the nuclear missiles, Khrushchev would finally have established mutually assured destruction, meaning that if the US decided to launch a nuclear strike against the USSR, the latter would react by launching a retaliatory nuclear strike against the US.
Finally, placing nuclear missiles in Cuba was a way for the USSR to show its support for Cuba and for the Cuban people, who viewed the United States as a threatening force; the Soviet Union had become Cuba's ally after the Cuban Revolution of 1959. According to Khrushchev, the Soviet Union's motives were "aimed at allowing Cuba to live peacefully and develop as its people desire". Deployment In early 1962, a group of Soviet military and missile construction specialists accompanied an agricultural delegation to Havana. They obtained a meeting with Cuban prime minister Fidel Castro. The Cuban leadership had a strong expectation that the US would invade Cuba again and enthusiastically approved the idea of installing nuclear missiles in Cuba. According to another source, Castro objected to the missiles' deployment as making him look like a Soviet puppet, but he was persuaded that missiles in Cuba would be an irritant to the US and help the interests of the entire socialist camp. Also, the deployment would include short-range tactical weapons (with a range of 40 km, usable only against naval vessels) that would provide a "nuclear umbrella" for attacks upon the island. By May, Khrushchev and Castro had agreed to place strategic nuclear missiles secretly in Cuba. Like Castro, Khrushchev felt that a US invasion of Cuba was imminent and that losing Cuba would do great harm to the communists, especially in Latin America. He said he wanted to confront the Americans "with more than words.... the logical answer was missiles". The Soviets maintained their tight secrecy, writing their plans longhand, which were approved by Marshal of the Soviet Union Rodion Malinovsky on July 4 and Khrushchev on July 7. From the very beginning, the Soviets' operation entailed elaborate denial and deception, known as "maskirovka".
All the planning and preparation for transporting and deploying the missiles were carried out in the utmost secrecy, with only a very few told the exact nature of the mission. Even the troops detailed for the mission were misdirected by being told that they were headed for a cold region and being outfitted with ski boots, fleece-lined parkas, and other winter equipment. The Soviet code-name was Operation Anadyr. The Anadyr River flows into the Bering Sea, and Anadyr is also the capital of Chukotsky District and a bomber base in the far eastern region. All the measures were meant to conceal the program from both internal and external audiences. Specialists in missile construction under the guise of "machine operators", "irrigation specialists", and "agricultural specialists" arrived in July. A total of 43,000 foreign troops would ultimately be brought in. Chief Marshal of Artillery Sergei Biryuzov, head of the Soviet Rocket Forces, led a survey team that visited Cuba. He told Khrushchev that the missiles would be concealed and camouflaged by palm trees. The Cuban leadership was further upset when, on September 20, the US Senate approved Joint Resolution 230, which expressed that the US was determined "to prevent in Cuba the creation or use of an externally-supported military capability endangering the security of the United States". On the same day, the US announced a major military exercise in the Caribbean, PHIBRIGLEX-62, which Cuba denounced as a deliberate provocation and proof that the US planned to invade Cuba. The Soviet leadership believed, based on its perception of Kennedy's lack of confidence during the Bay of Pigs Invasion, that he would avoid confrontation and accept the missiles as a fait accompli. On September 11, the Soviet Union publicly warned that a US attack on Cuba or on Soviet ships that were carrying supplies to the island would mean war. The Soviets continued the Maskirovka program to conceal their actions in Cuba.
They repeatedly denied that the weapons being brought into Cuba were offensive in nature. On September 7, Soviet Ambassador to the United States Anatoly Dobrynin assured United States Ambassador to the United Nations Adlai Stevenson that the Soviet Union was supplying only defensive weapons to Cuba. On September 11, the Telegraph Agency of the Soviet Union (TASS: Telegrafnoe Agentstvo Sovetskogo Soyuza) announced that the Soviet Union had no need or intention to introduce offensive nuclear missiles into Cuba. On October 13, Dobrynin was questioned by former Undersecretary of State Chester Bowles about whether the Soviets planned to put offensive weapons in Cuba. He denied any such plans. On October 17, Soviet embassy official Georgy Bolshakov brought President Kennedy a personal message from Khrushchev reassuring him that "under no circumstances would surface-to-surface missiles be sent to Cuba." As early as August 1962, the US suspected the Soviets of building missile facilities in Cuba. During that month, its intelligence services gathered information about sightings by ground observers of Soviet-built MiG-21 fighters and Il-28 light bombers. U-2 spy planes found S-75 Dvina (NATO designation SA-2) surface-to-air missile sites at eight different locations. CIA director John A. McCone was suspicious. Sending antiaircraft missiles into Cuba, he reasoned, "made sense only if Moscow intended to use them to shield a base for ballistic missiles aimed at the United States". On August 10, he wrote a memo to Kennedy in which he guessed that the Soviets were preparing to introduce ballistic missiles into Cuba. With important Congressional elections scheduled for November, the crisis became enmeshed in American politics. On August 31, Senator Kenneth Keating (R-New York) warned on the Senate floor that the Soviet Union was "in all probability" constructing a missile base in Cuba. 
He charged the Kennedy administration with covering up a major threat to the US, thereby starting the crisis. He may have received this initial "remarkably accurate" information from his friend, former congresswoman and ambassador Clare Boothe Luce, who in turn received it from Cuban exiles. A later confirming source for Keating's information was possibly the West German ambassador to Cuba, who had received information from dissidents inside Cuba that Soviet troops had arrived in Cuba in early August and were seen working "in all probability on or near a missile base", and who passed this information to Keating on a trip to Washington in early October. Air Force General Curtis LeMay presented a pre-invasion bombing plan to Kennedy in September, and spy flights and minor military harassment from US forces at Guantanamo Bay Naval Base were the subject of continual Cuban diplomatic complaints to the US government. The first consignment of Soviet R-12 missiles arrived on the night of September 8, followed by a second on September 16. The R-12 was a medium-range ballistic missile capable of carrying a thermonuclear warhead. It was a single-stage, road-transportable, surface-launched missile using storable liquid propellant, able to deliver a megaton-class nuclear weapon. The Soviets were building nine sites—six for R-12 medium-range missiles (NATO designation SS-4 Sandal) and three for R-14 intermediate-range ballistic missiles (NATO designation SS-5 Skean). On October 7, Cuban President Osvaldo Dorticós Torrado spoke at the UN General Assembly: "If... we are attacked, we will defend ourselves. I repeat, we have sufficient means with which to defend ourselves; we have indeed our inevitable weapons, the weapons, which we would have preferred not to acquire, and which we do not wish to employ." On October 10, in another Senate speech, Sen.
Keating reaffirmed his earlier warning of August 31 and stated that "Construction has begun on at least a half dozen launching sites for intermediate range tactical missiles." Missiles reported The missiles in Cuba allowed the Soviets to effectively target most of the continental US. The planned arsenal was forty launchers. The Cuban populace readily noticed the arrival and deployment of the missiles, and hundreds of reports reached Miami. US intelligence received countless reports, many of dubious quality or even laughable, most of which could be dismissed as describing defensive missiles. Only five reports bothered the analysts. They described large trucks passing through towns at night carrying very long canvas-covered cylindrical objects that could not make turns through towns without backing up and maneuvering. Defensive missile transporters, it was believed, could make such turns without undue difficulty. The reports could not be satisfactorily dismissed. Aerial confirmation The United States had been sending U-2 surveillance flights over Cuba since the failed Bay of Pigs Invasion. The first incident that led to a pause in reconnaissance flights took place on August 30, when a U-2 operated by the US Air Force's Strategic Air Command flew over Sakhalin Island in the Soviet Far East by mistake. The Soviets lodged a protest and the US apologized. Nine days later, a Taiwanese-operated U-2 was lost over western China to an SA-2 surface-to-air missile. US officials were worried that one of the Cuban or Soviet SAMs in Cuba might shoot down a CIA U-2, initiating another international incident. In a meeting with members of the Committee on Overhead Reconnaissance (COMOR) on September 10, Secretary of State Dean Rusk and National Security Advisor McGeorge Bundy heavily restricted further U-2 flights over Cuban airspace. The resulting lack of coverage over the island for the next five weeks became known to historians as the "Photo Gap".
No significant U-2 coverage was achieved over the interior of the island. US officials attempted to use a Corona photo-reconnaissance satellite to obtain coverage over reported Soviet military deployments, but imagery acquired over western Cuba by a Corona KH-4 mission on October 1 was heavily covered by clouds and haze and failed to provide any usable intelligence. At the end of September, Navy reconnaissance aircraft photographed the Soviet ship Kasimov, with large crates on its deck the size and shape of Il-28 jet bomber fuselages. In September 1962, analysts from the Defense Intelligence Agency (DIA) noticed that Cuban surface-to-air missile sites were arranged in a pattern similar to those used by the Soviet Union to protect its ICBM bases, leading DIA to lobby for the resumption of U-2 flights over the island. Although in the past the flights had been conducted by the CIA, pressure from the Defense Department led to that authority being transferred to the Air Force. Following the loss of a CIA U-2 over the Soviet Union in May 1960, it was thought that if another U-2 were shot down, an Air Force aircraft arguably being used for a legitimate military purpose would be easier to explain than a CIA flight. When the reconnaissance missions were reauthorized on October 9, poor weather kept the planes from flying. The US first obtained U-2 photographic evidence of the missiles on October 14, when a U-2 flight piloted by Major Richard Heyser took 928 pictures on a path selected by DIA analysts, capturing images of what turned out to be an SS-4 construction site at San Cristóbal, Pinar del Río Province (now in Artemisa Province), in western Cuba. President notified On October 15, the CIA's National Photographic Interpretation Center (NPIC) reviewed the U-2 photographs and identified objects that they interpreted as medium range ballistic missiles. 
This identification was made, in part, on the strength of reporting provided by Oleg Penkovsky, a double agent in the GRU working for the CIA and MI6. Although he provided no direct reports of the Soviet missile deployments to Cuba, technical and doctrinal details of Soviet missile regiments that Penkovsky had provided in the months and years prior to the crisis helped NPIC analysts correctly identify the missiles in U-2 imagery. That evening the CIA notified the Department of State, and at 8:30 pm EDT Bundy chose to wait until the next morning to tell the President. McNamara was briefed at midnight. The next morning, Bundy met with Kennedy, showed him the U-2 photographs, and briefed him on the CIA's analysis of the images. At 6:30 pm EDT, Kennedy convened a meeting of the nine members of the National Security Council and five other key advisers, in a group he formally named the Executive Committee of the National Security Council (EXCOMM) after the fact on October 22 by National Security Action Memorandum 196. Without informing the members of EXCOMM, President Kennedy tape-recorded all of their proceedings, and Sheldon M. Stern, head of the Kennedy Library, transcribed some of them. On October 16, President Kennedy notified Attorney General Robert Kennedy that he was convinced the Soviets were placing missiles in Cuba and that it was a legitimate threat, making the threat of nuclear destruction by the two world superpowers a reality. Robert Kennedy responded by contacting the Soviet Ambassador, Anatoly Dobrynin. Robert Kennedy expressed his "concern about what was happening" and Dobrynin "was instructed by Soviet Chairman Nikita S. Khrushchev to assure President Kennedy that there would be no ground-to-ground missiles or offensive weapons placed in Cuba". Khrushchev further assured Kennedy that the Soviet Union had no intention of "disrupting the relationship of our two countries" despite the photo evidence presented before President Kennedy.
Responses considered The US had no plan in place because until recently its intelligence had been convinced that the Soviets would never install nuclear missiles in Cuba. EXCOMM, of which Vice President Lyndon B. Johnson was a member, quickly discussed several possible courses of action:
Do nothing: American vulnerability to Soviet missiles was not new.
Diplomacy: Use diplomatic pressure to get the Soviet Union to remove the missiles.
Secret approach: Offer Castro the choice of splitting with the Soviets or being invaded.
Invasion: Full-force invasion of Cuba and overthrow of Castro.
Air strike: Use the US Air Force to attack all known missile sites.
Blockade: Use the US Navy to block any missiles from arriving in Cuba.
The Joint Chiefs of Staff unanimously agreed that a full-scale attack and invasion was the only solution. They believed that the Soviets would not attempt to stop the US from conquering Cuba. Kennedy was skeptical, concluding that attacking Cuba by air would signal the Soviets to presume "a clear line" to conquer Berlin. Kennedy also believed that US allies would think of the country as "trigger-happy cowboys" who lost Berlin because they could not peacefully resolve the Cuban situation. The EXCOMM then discussed the effect on the strategic balance of power, both political and military. The Joint Chiefs of Staff believed that the missiles would seriously alter the military balance, but McNamara disagreed. The US already had approximately 5,000 strategic warheads, while the Soviet Union had only 300; an extra 40, McNamara reasoned, would make little difference to the overall strategic balance, and the Soviets having 340 would not therefore substantially alter it. In 1990, he reiterated that "it made no difference.... The military balance wasn't changed. I didn't believe it then, and I don't believe it now." The EXCOMM agreed that the missiles would affect the political balance.
Kennedy had explicitly promised the American people less than a month before the crisis that "if Cuba should possess a capacity to carry out offensive actions against the United States... the United States would act." Also, credibility among US allies and people would be damaged if the Soviet Union appeared to redress the strategic balance by placing missiles in Cuba. Kennedy explained after the crisis that "it would have politically changed the balance of power. It would have appeared to, and appearances contribute to reality." On October 18, Kennedy met with Soviet Minister of Foreign Affairs Andrei Gromyko, who claimed the weapons were for defensive purposes only. Not wanting to expose what he already knew and to avoid panicking the American public, Kennedy did not reveal that he was already aware of the missile buildup. By October 19, frequent U-2 spy flights showed four operational sites. Operational plans Two Operational Plans (OPLAN) were considered. OPLAN 316 envisioned a full invasion of Cuba by Army and Marine units, supported by the Navy, following Air Force and naval airstrikes. Army units in the US would have had trouble fielding mechanised and logistical assets, and the US Navy could not supply enough amphibious shipping to transport even a modest armoured contingent from the Army. OPLAN 312, primarily an Air Force and Navy carrier operation, was designed with enough flexibility to do anything from engaging individual missile sites to providing air support for OPLAN 316's ground forces. Blockade Kennedy met with members of EXCOMM and other top advisers throughout October 21, considering two remaining options: an air strike primarily against the Cuban missile bases or a naval blockade of Cuba. A full-scale invasion was not the administration's first option. McNamara supported the naval blockade as a strong but limited military action that left the US in control. The term "blockade" was problematic. 
According to international law, a blockade is an act of war, but the Kennedy administration did not think that the Soviets would be provoked to attack by a mere blockade. Additionally, legal experts at the State Department and Justice Department concluded that a declaration of war could be avoided if another legal justification, based on the Rio Treaty for defense of the Western Hemisphere, was obtained from a resolution by a two-thirds vote of the members of the Organization of American States (OAS). Admiral Anderson, Chief of Naval Operations, wrote a position paper that helped Kennedy differentiate between what they termed a "quarantine" of offensive weapons and a blockade of all materials, claiming that a classic blockade was not the original intention. Since it would take place in international waters, Kennedy obtained the approval of the OAS for military action under the hemispheric defense provisions of the Rio Treaty. On October 19, the EXCOMM formed separate working groups to examine the air strike and blockade options, and by the afternoon most support in the EXCOMM had shifted to the blockade option. Reservations about the plan continued to be voiced as late as October 21, the paramount concern being that once the blockade was put into effect, the Soviets would rush to complete some of the missiles. Consequently, the US could find itself bombing operational missiles if the blockade did not force Khrushchev to remove those already on the island. Speech to the nation At 3:00 pm EDT on October 22, President Kennedy formally established the executive committee (EXCOMM) with National Security Action Memorandum (NSAM) 196. At 5:00 pm, he met with Congressional leaders, who contentiously opposed a blockade and demanded a stronger response. In Moscow, US Ambassador Foy D. Kohler briefed Khrushchev on the pending blockade and Kennedy's speech to the nation. Ambassadors around the world gave notice to non-Eastern Bloc leaders.
Before the speech, US delegations met with Canadian Prime Minister John Diefenbaker, British Prime Minister Harold Macmillan, West German Chancellor Konrad Adenauer, French President Charles de Gaulle, and Secretary-General of the Organization of American States José Antonio Mora to brief them on the US intelligence and the proposed response. All were supportive of the US position. Over the course of the crisis, Kennedy had daily telephone conversations with Macmillan, who was publicly supportive of US actions. Shortly before his speech, Kennedy called former President Dwight Eisenhower; the conversation revealed that the two had been consulting during the crisis. Both anticipated that Khrushchev would respond to the Western world in a manner similar to his response during the Suez Crisis and might wind up trading off Berlin. On October 22 at 7:00 pm EDT, Kennedy delivered a nationwide televised address on all of the major networks announcing the discovery of the missiles and describing the administration's plan. During the speech, a directive went out to all US forces worldwide, placing them on DEFCON 3. The heavy cruiser Newport News was designated flagship for the blockade, accompanied by a destroyer escort. Kennedy's speechwriter Ted Sorensen stated in 2007 that the address to the nation was "Kennedy's most important speech historically, in terms of its impact on our planet." Crisis deepens On October 24, at 11:24 am EDT, a cable drafted by George Wildman Ball to the US Ambassadors in Turkey and NATO notified them that the US was considering making an offer to withdraw what it knew to be nearly obsolete missiles from Italy and Turkey, in exchange for the Soviet withdrawal from Cuba. Turkish officials replied that they would "deeply resent" any trade involving the US missile presence in their country.
One day later, on the morning of October 25, American journalist Walter Lippmann proposed the same thing in his syndicated column. Castro reaffirmed Cuba's right to self-defense and said that all of its weapons were defensive and Cuba would not allow an inspection. International response Three days after Kennedy's speech, the Chinese People's Daily announced that "650,000,000 Chinese men and women were standing by the Cuban people." In West Germany, newspapers supported the US response by contrasting it with the weak American actions in the region during the preceding months. They also expressed some fear that the Soviets might retaliate in Berlin. In France on October 23, the crisis made the front page of all the daily newspapers. The next day, an editorial in Le Monde expressed doubt about the authenticity of the CIA's photographic evidence. Two days later, after a visit by a high-ranking CIA agent, the newspaper accepted the validity of the photographs. Also in France, in the October 29 issue of Le Figaro, Raymond Aron wrote in support of the American response. On October 24, Pope John XXIII sent a message to the Soviet embassy in Rome to be transmitted to the Kremlin in which he voiced his concern for peace. In this message he stated, "We beg all governments not to remain deaf to this cry of humanity. That they do all that is in their power to save peace." Soviet broadcast and communications The crisis was continuing unabated, and in the evening of October 24, the Soviet TASS news agency broadcast a telegram from Khrushchev to Kennedy in which Khrushchev warned that the United States' "outright piracy" would lead to war. That was followed at 9:24 pm by a telegram from Khrushchev to Kennedy, which was received at 10:52 pm EDT. 
Khrushchev stated, "if you weigh the present situation with a cool head without giving way to passion, you will understand that the Soviet Union cannot afford not to decline the despotic demands of the USA", adding that the Soviet Union viewed the blockade as "an act of aggression" and that its ships would be instructed to ignore it. After October 23, Soviet communications with the US increasingly showed signs of having been rushed; undoubtedly a product of pressure, Khrushchev's messages often repeated themselves and lacked simple editing. Once Kennedy made known his willingness to consider an air strike followed by an invasion of Cuba, Khrushchev rapidly sought a diplomatic compromise. Communications between the two superpowers had entered a unique and revolutionary period: with the newly developed threat of mutual destruction through the deployment of nuclear weapons, diplomacy now demonstrated how power and coercion could dominate negotiations. US alert level raised The US requested an emergency meeting of the United Nations Security Council on October 25. US Ambassador to the United Nations Adlai Stevenson confronted Soviet Ambassador Valerian Zorin in an emergency meeting of the Security Council, challenging him to admit the existence of the missiles. Ambassador Zorin refused to answer. The next day at 10:00 pm EDT, the US raised the readiness level of SAC forces to DEFCON 2. For the only confirmed time in US history, B-52 bombers went on continuous airborne alert, and B-47 medium bombers were dispersed to various military and civilian airfields and made ready to take off, fully equipped, on 15 minutes' notice. One eighth of SAC's 1,436 bombers were on airborne alert, and some 145 intercontinental ballistic missiles stood on ready alert, some of which targeted Cuba.
Air Defense Command (ADC) redeployed 161 nuclear-armed interceptors to 16 dispersal fields within nine hours, with one third maintaining 15-minute alert status. Twenty-three nuclear-armed B-52s were sent to orbit points within striking distance of the Soviet Union so that the Soviets would believe the US was serious. Jack J. Catton later estimated that about 80 percent of SAC's planes were ready for launch during the crisis; David A. Burchinal recalled a contrasting picture. By October 22, Tactical Air Command (TAC) had 511 fighters plus supporting tankers and reconnaissance aircraft deployed to face Cuba on one-hour alert status. TAC and the Military Air Transport Service had problems. The concentration of aircraft in Florida strained command and support echelons, which faced critical undermanning in security, armaments, and communications; the absence of initial authorization for war-reserve stocks
blades 3-lobed, and lobes lobulate and obtuse. The cauline leaves are similar to the basal ones, while the upper ones are bract-like. The hermaphrodite (bisexual) flowers are terminal to stem and branches. They are usually pentamerous (with five spreading petaloid perianth sepal segments). Five tubular honey-leaves are semi-erect with a flat limb and spurred or saccate at the base. The spur is directed backwards and secretes nectar. Stamens are numerous (often more than 50) in whorls of 5, the innermost being scarious staminodes. There are ten membranaceous intrastaminal scales. There are five pistils, and the carpels are free. The fruit has several (five to 15) follicles, which are semi-erect and slightly connate downwards. These hold many seeds and are formed at the end of the pistils. The nectar is mainly consumed by long-beaked birds such as hummingbirds. Almost all Aquilegia species have a ring of staminodia around the base of the stigma, which may help protect against insects. The chromosome number is x=7. Relatives Columbines are closely related to plants in the genera Actaea (baneberries) and Aconitum (wolfsbanes/monkshoods), which like Aquilegia produce cardiogenic toxins. Insects They are used as food plants by some Lepidoptera (butterfly and moth) caterpillars. These are mainly noctuid moths – noted for feeding on many poisonous plants without harm – such as the cabbage moth (Mamestra brassicae), dot moth (Melanchra persicariae) and mouse moth (Amphipyra tragopoginis). The engrailed (Ectropis crepuscularia), a geometer moth, also uses columbine as a larval food plant. The larvae of Papaipema leucostigma also feed on columbine. Plants in the genus Aquilegia are a major food source for Bombus hortorum, a species of bumblebee. Specifically, they have been found to forage on Aquilegia vulgaris in Belgium and Aquilegia chrysantha in North America and Belgium. The bees do not show any preference in the color of the flowers. Cultivation Columbine is a hardy perennial which propagates by seed. It will grow in full sun; however, it prefers growing in partial shade and well-drained soil, and is able to tolerate average soils and dry soil conditions. Columbine is rated at hardiness zone 3 in the United States, so it does not require mulching or protection in the winter. Large numbers of hybrids are available for the garden, since the European A. vulgaris was hybridized with other European and North American varieties. Aquilegia species are very interfertile and will self-sow. Some varieties are short-lived, so they are better treated as biennials. The British National Collection of Aquilegias was held by Mrs Carrie Thomas at Killay near Swansea.
Some time during or before 2014, the collection started to succumb to Aquilegia downy mildew (Peronospora aquilegiicola), which was at the time an emerging disease to which the plants had no resistance. By 2018 the entire collection had been lost. Aquilegia can be grown from seeds or rhizomes. Uses The flowers of various species of columbine were consumed in moderation by Native Americans as a condiment with other fresh greens, and are reported to be very sweet and safe if consumed in small quantities. The plant's seeds and roots, however, are highly poisonous and contain cardiogenic toxins which cause both severe gastroenteritis and heart palpitations if consumed as food. Native Americans used very small amounts of Aquilegia root as a treatment for ulcers. However, the medical use of this plant is better avoided due to its high toxicity; columbine poisonings may be fatal. An acute toxicity test in mice has demonstrated that ethanol extract mixed with isocytisoside, the main flavonoid compound from the leaves and stems of Aquilegia vulgaris, can be classified
last-level cache. These caches have grown to handle synchronisation primitives between threads and atomic operations, and interface with a CPU-style MMU. DSPs Digital signal processors have similarly generalised over the years. Earlier designs used scratchpad memory fed by DMA, but modern DSPs such as Qualcomm Hexagon often include a very similar set of caches to a CPU (e.g. Modified Harvard architecture with shared L2, split L1 I-cache and D-cache). Translation lookaside buffer A memory management unit (MMU) that fetches page table entries from main memory has a specialized cache, used for recording the results of virtual address to physical address translations. This specialized cache is called a translation lookaside buffer (TLB). In-network cache Information-centric networking Information-centric networking (ICN) is an approach to evolve the Internet infrastructure away from a host-centric paradigm, based on perpetual connectivity and the end-to-end principle, to a network architecture in which the focal point is identified information (or content or data). Due to the inherent caching capability of the nodes in an ICN, it can be viewed as a loosely connected network of caches, which has unique requirements of caching policies. However, ubiquitous content caching introduces the challenge to content protection against unauthorized access, which requires extra care and solutions. Unlike proxy servers, in ICN the cache is a network-level solution. Therefore, it has rapidly changing cache states and higher request arrival rates; moreover, smaller cache sizes further impose a different kind of requirements on the content eviction policies. In particular, eviction policies for ICN should be fast and lightweight. Various cache replication and eviction schemes for different ICN architectures and applications have been proposed. 
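As a rough illustration of the translation lookaside buffer described above, its behaviour can be modeled in software. This is a hypothetical sketch only (the class and variable names are invented); real TLBs are small associative hardware structures, not Python objects:

```python
from collections import OrderedDict

class TinyTLB:
    """Software model of a TLB: a small, fixed-capacity cache of
    virtual-page-number -> physical-frame-number translations."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()     # recency-ordered mapping

    def translate(self, vpn, page_table):
        if vpn in self.entries:          # TLB hit: no page-table walk needed
            self.entries.move_to_end(vpn)
            return self.entries[vpn]
        pfn = page_table[vpn]            # TLB miss: fetch the entry from memory
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict the least recently used
        self.entries[vpn] = pfn
        return pfn

page_table = {0: 7, 1: 3, 2: 9}          # toy page table in "main memory"
tlb = TinyTLB(capacity=2)
print(tlb.translate(0, page_table))      # miss: prints 7
print(tlb.translate(0, page_table))      # hit: prints 7
```

On a hit the translation is answered from the small fast structure; only on a miss is the (slow) page table consulted, mirroring the hardware behaviour described above.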
Policies Time-aware least recently used (TLRU) Time-aware Least Recently Used (TLRU) is a variant of LRU designed for situations where the stored contents in a cache have a valid lifetime. The algorithm is suitable for network cache applications, such as information-centric networking (ICN), content delivery networks (CDNs) and distributed networks in general. TLRU introduces a new term: TTU (time to use). TTU is a time stamp on a content object that stipulates the usability time for the content, based on the locality of the content and the content publisher's announcement. Owing to this locality-based time stamp, TTU provides more control to the local administrator in regulating in-network storage. In the TLRU algorithm, when a piece of content arrives, a cache node calculates the local TTU value based on the TTU value assigned by the content publisher. The local TTU value is calculated by using a locally defined function. Once the local TTU value is calculated, the replacement of content is performed on a subset of the total content stored in the cache node. TLRU ensures that less popular content with a short lifetime is replaced by the incoming content. Least frequent recently used (LFRU) The Least Frequent Recently Used (LFRU) cache replacement scheme combines the benefits of the LFU and LRU schemes. LFRU is suitable for "in network" cache applications, such as information-centric networking (ICN), content delivery networks (CDNs) and distributed networks in general. In LFRU, the cache is divided into two partitions called the privileged and unprivileged partitions. The privileged partition can be seen as a protected partition: if content is highly popular, it is pushed into the privileged partition. Replacement in the privileged partition is done as follows: LFRU evicts content from the unprivileged partition, pushes content from the privileged partition to the unprivileged partition, and finally inserts the new content into the privileged partition.
In the above procedure, LRU is used for the privileged partition and an approximated LFU (ALFU) scheme for the unprivileged partition, hence the abbreviation LFRU. The basic idea is to filter out the locally popular contents with the ALFU scheme and push them into the privileged partition. Weather forecast In 2010, The New York Times suggested "Type 'weather' followed by your zip code." By 2011, the use of smartphones with weather forecasting options was overly taxing AccuWeather servers; two requests from within the same park would generate separate requests. An optimization by edge servers to truncate the GPS coordinates to fewer decimal places meant that the cached results from an earlier, nearby query would be used. The number of lookups to the server per day dropped by half. Software caches Disk cache While CPU caches are generally managed entirely by hardware, a variety of software manages other caches. The page cache in main memory, which is an example of disk cache, is managed by the operating system kernel. While the disk buffer, which is an integrated part of the hard disk drive, is sometimes misleadingly referred to as "disk cache", its main functions are write sequencing and read prefetching. Repeated cache hits are relatively rare, due to the small size of the buffer in comparison to the drive's capacity. However, high-end disk controllers often have their own on-board cache of the hard disk drive's data blocks. Finally, a fast local hard disk drive can also cache information held on even slower data storage devices, such as remote servers (web cache) or local tape drives or optical jukeboxes; such a scheme is the main concept of hierarchical storage management. Also, fast flash-based solid-state drives (SSDs) can be used as caches for slower rotational-media hard disk drives, working together as hybrid drives or solid-state hybrid drives (SSHDs).
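The TLRU scheme described earlier can be sketched in software. This is a simplified, hypothetical model: the `local_ttu` function, the capacity handling, and the eviction order below are illustrative assumptions, not the published algorithm's exact procedure:

```python
import time

class TLRUCache:
    """Simplified model of Time-aware LRU (TLRU): each cached object carries a
    locally computed TTU (time to use); expired entries are evicted first,
    then the least recently used one."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # key -> [value, expiry_time, last_access]

    def local_ttu(self, publisher_ttu):
        # Locally defined function (assumption): cap the publisher's TTU at 60 s,
        # giving the local administrator control over in-network storage.
        return min(publisher_ttu, 60.0)

    def put(self, key, value, publisher_ttu, now=None):
        now = time.monotonic() if now is None else now
        if key not in self.store and len(self.store) >= self.capacity:
            expired = [k for k, (_, exp, _) in self.store.items() if exp <= now]
            victim = expired[0] if expired else min(
                self.store, key=lambda k: self.store[k][2])  # LRU fallback
            del self.store[victim]
        self.store[key] = [value, now + self.local_ttu(publisher_ttu), now]

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self.store.get(key)
        if entry is None or entry[1] <= now:
            return None          # missing, or past its usability time
        entry[2] = now           # refresh recency on a hit
        return entry[0]
```

Passing `now` explicitly makes the expiry behaviour easy to demonstrate and test; in real use the clock would be read internally.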
Web cache Web browsers and web proxy servers employ web caches to store previous responses from web servers, such as web pages and images. Web caches reduce the amount of information that needs to be transmitted across the network, as information previously stored in the cache can often be re-used. This reduces bandwidth and processing requirements of the web server, and helps to improve responsiveness for users of the web. Web browsers employ a built-in web cache, but some Internet service providers (ISPs) or organizations also use a caching proxy server, which is a web cache that is shared among all users of that network. Another form of cache is P2P caching, where the files most sought for by peer-to-peer applications are stored in an ISP cache to accelerate P2P transfers. Similarly, decentralised equivalents exist, which allow communities to perform the same task for P2P traffic, for example, Corelli. Memoization A cache can store data that is computed on demand rather than retrieved from a backing store. Memoization is an optimization technique that stores the results of resource-consuming function calls within a lookup table, allowing subsequent calls to reuse the stored results and avoid repeated computation. It is related to the dynamic programming algorithm design methodology, which can also be thought of as a means of caching. Other caches The BIND DNS daemon caches a mapping of domain names to IP addresses, as does a resolver library. Write-through operation is common when operating over unreliable networks (like an Ethernet LAN), because of the enormous complexity of the coherency protocol required between multiple write-back caches when communication is unreliable. For instance, web page caches and client-side network file system caches (like those in NFS or SMB) are typically read-only or write-through specifically to keep the network protocol simple and reliable. 
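The memoization technique described above is directly supported by Python's standard library, where `functools.lru_cache` wraps a function with a lookup table of previously computed results:

```python
from functools import lru_cache

@lru_cache(maxsize=None)    # unbounded memo table of computed results
def fib(n):
    """Naive recursive Fibonacci; exponential without the cache."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(90)                     # completes instantly: each fib(k) is computed once
print(fib.cache_info())     # reports the cache's hits and misses
```

Without the decorator, `fib(90)` would take astronomically long; with it, each subproblem is computed once and subsequent calls are cache hits, which is exactly the dynamic-programming connection noted above.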
Search engines also frequently make web pages they have indexed available from their cache. For example, Google provides a "Cached" link next to each search result. This can prove useful when web pages from a web server are temporarily or permanently inaccessible. Another type of caching is storing computed results that will likely be needed again, or memoization. For example, ccache is a program that caches the output of the compilation, in order to speed up later compilation runs. Database caching can substantially improve the throughput of database applications, for example in the processing of indexes, data dictionaries, and frequently used subsets of data. A distributed cache uses networked hosts to provide scalability, reliability and performance to the application. The hosts can be co-located or spread over different geographical regions. Buffer vs. cache The semantics of a "buffer" and a "cache" are not totally different; even so, there are fundamental differences in intent between the process of caching and the process of buffering. Fundamentally, caching realizes a performance increase for transfers of data that is being repeatedly transferred. While a caching system may realize a performance increase upon the initial (typically write) transfer of a data item, this performance increase is due to buffering occurring within the caching system. With read caches, a data item must have been fetched from its residing location at least once in order for subsequent reads of the data item to realize a performance increase by virtue of being able to be fetched from the cache's (faster) intermediate storage rather than the data's residing location. 
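The point that a read cache only pays off from the second access onward can be made concrete with a minimal read-through cache (a hypothetical sketch; the class and names are invented for illustration):

```python
class ReadThroughCache:
    """Minimal read-through cache: the first read of a key must go to the
    backing store; only subsequent reads are served from the faster copy."""

    def __init__(self, backing_store):
        self.backing = backing_store
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1               # served from intermediate storage
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]        # fetch from the data's residing location
        self.cache[key] = value
        return value

backing = {"x": 42}
c = ReadThroughCache(backing)
c.read("x")                              # first access: a miss
c.read("x")                              # second access: a hit
print(c.hits, c.misses)                  # prints: 1 1
```

The hit/miss counters show the asymmetry described above: the initial fetch pays full cost, and only repeated transfers of the same item gain from caching.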
With write caches, a performance increase may be realized upon the first write of a data item, because the item is stored immediately in the cache's intermediate storage, with the transfer to its residing storage deferred to a later stage or performed as a background process. Contrary to strict buffering, a caching process must adhere to a (potentially distributed) cache coherency protocol in order to maintain consistency between the cache's intermediate storage and the location where the data resides. Buffering, on the other hand, reduces the number of transfers of otherwise novel data amongst communicating processes, amortizing the overhead of several small transfers over fewer, larger transfers, and provides an intermediary for communicating processes. On a cache miss, an existing entry may have to be evicted to make room for the newly retrieved data. The heuristic used to select the entry to replace is known as the replacement policy. One popular replacement policy, "least recently used" (LRU), replaces the oldest entry, the entry that was accessed less recently than any other entry (see cache algorithm). More efficient caching algorithms compute the use-hit frequency against the size of the stored contents, as well as the latencies and throughputs of both the cache and the backing store. This works well for larger amounts of data, longer latencies, and slower throughputs, such as those experienced with hard drives and networks, but it is not efficient for use within a CPU cache. Writing policies When a system writes data to cache, it must at some point write that data to the backing store as well. The timing of this write is controlled by what is known as the write policy. There are two basic writing approaches: Write-through: write is done synchronously both to the cache and to the backing store. Write-back (also called write-behind): initially, writing is done only to the cache.
The write to the backing store is postponed until the modified content is about to be replaced by another cache block. A write-back cache is more complex to implement, since it needs to track which of its locations have been written over, and mark them as dirty for later writing to the backing store. The data in these locations are written back to the backing store only when they are evicted from the cache, an effect referred to as a lazy write. For this reason, a read miss in a write-back cache (which requires a block to be replaced by another) will often require two memory accesses to service: one to write the replaced data from the cache back to the store, and then one to retrieve the needed data. Other policies may also trigger data write-back. The client may make many changes to data in the cache, and then explicitly notify the cache to write back the data. Since no data is returned to the requester on write operations, a decision needs to be made on write misses: whether or not data should be loaded into the cache. This is decided by one of two approaches: Write allocate (also called fetch on write): data at the missed-write location is loaded to cache, followed by a write-hit operation. In this approach, write misses are similar to read misses. No-write allocate (also called write-no-allocate or write around): data at the missed-write location is not loaded to cache, and is written directly to the backing store. In this approach, data is loaded into the cache on read misses only. Both write-through and write-back policies can use either of these write-miss policies, but usually they are paired in this way: A write-back cache uses write allocate, hoping for subsequent writes (or even reads) to the same location, which is now cached. A write-through cache uses no-write allocate. Here, subsequent writes have no advantage, since they still need to be written directly to the backing store.
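The machinery above can be sketched as a small model: a write-back, write-allocate cache with LRU eviction. This is an illustrative sketch, not a hardware design; the class and field names are invented for the example. Dirty entries reach the backing store only when they are evicted, which is the lazy write described above.

```python
from collections import OrderedDict

class WriteBackCache:
    """Toy write-back, write-allocate cache with LRU eviction.

    Writes go to the cache only and mark the entry dirty; the backing
    store is updated lazily, when a dirty entry is evicted (lazy write).
    """

    def __init__(self, backing, capacity):
        self.backing = backing            # dict standing in for the backing store
        self.capacity = capacity
        self.entries = OrderedDict()      # key -> (value, dirty flag)

    def _evict_if_full(self):
        if len(self.entries) > self.capacity:
            key, (value, dirty) = self.entries.popitem(last=False)  # LRU entry
            if dirty:
                self.backing[key] = value # write back only dirty data

    def read(self, key):
        if key in self.entries:
            self.entries.move_to_end(key) # refresh LRU position on a hit
            return self.entries[key][0]
        value = self.backing[key]         # read miss: fetch from backing store
        self.entries[key] = (value, False)
        self._evict_if_full()
        return value

    def write(self, key, value):
        self.entries[key] = (value, True) # write allocate + mark dirty
        self.entries.move_to_end(key)
        self._evict_if_full()

backing = {"x": 0}
cache = WriteBackCache(backing, capacity=2)
cache.write("x", 42)          # backing store still holds the stale 0
cache.write("y", 1)
cache.write("z", 2)           # evicts "x" (LRU) and writes it back
```

A write-through variant would instead update `self.backing[key]` inside `write` and never need the dirty flag, at the cost of one backing-store access per write.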
Entities other than the cache may change the data in the backing store, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the client updates the data in the cache, copies of those data in other caches will become stale. Communication protocols between the cache managers which keep the data consistent are known as coherency protocols. Prefetch On a cache read miss, caches with a demand-paging policy read the minimum amount from the backing store. For example, demand-paging virtual memory reads one page of virtual memory (often 4 KB) from disk into the disk cache in RAM. Similarly, a typical CPU reads a single L2 cache line of 128 bytes from DRAM into the L2 cache, and a single L1 cache line of 64 bytes from the L2 cache into the L1 cache. Caches with a prefetch input queue or more general anticipatory paging policy go further: they not only read the chunk requested, but guess that the next chunk or two will soon be required, and so prefetch that data into the cache ahead of time. Anticipatory paging is especially helpful when the backing store has a long latency to read the first chunk and much shorter times to sequentially read the next few chunks, such as disk storage and DRAM. A few operating systems go further with a loader that always pre-loads the entire executable into RAM. A few caches go even further, not only pre-loading an entire file, but also starting to load other related files that may soon be requested, such as the page cache associated with a prefetcher or the web cache associated with link prefetching. Examples of hardware caches CPU cache Small memories on or close to the CPU can operate faster than the much larger main memory. Most CPUs since the 1980s have used one or more caches, sometimes in cascaded levels; modern high-end embedded, desktop and server microprocessors may have as many as six types of cache (between levels and functions).
Examples of caches with a specific function are the D-cache, the I-cache, and the translation lookaside buffer for the MMU. GPU cache Earlier graphics processing units (GPUs) often had limited read-only texture caches, and introduced Morton order swizzled textures to improve 2D cache coherency. Cache misses would drastically affect performance, e.g. if mipmapping was not used. Caching was important to leverage 32-bit (and wider) transfers for texture data that was often as little as 4 bits per pixel, indexed in complex patterns by arbitrary UV coordinates and perspective transformations in inverse texture mapping. As GPUs advanced (especially with GPGPU compute shaders) they have developed progressively larger and increasingly general caches, including instruction caches for shaders, with functionality increasingly overlapping that of CPU caches. For example, GT200 architecture GPUs did not feature an L2 cache, while the Fermi GPU has 768 KB of last-level cache, the Kepler GPU has 1536 KB of last-level cache, and the Maxwell GPU has 2048 KB of last-level cache. These caches have grown to handle synchronisation primitives between threads and atomic operations, and interface with a CPU-style MMU. DSPs Digital signal processors have similarly generalised over the years. Earlier designs used scratchpad memory fed by DMA, but modern DSPs such as Qualcomm Hexagon often include a very similar set of caches to a CPU (e.g. Modified Harvard architecture with shared L2, split L1 I-cache and D-cache). Translation lookaside buffer A memory management unit
announced in February 2011 that the company name would revert to Meritor, Inc. Cummins, Inc. is by far the region's largest employer, and the Infotech Park accounts for a sizable number of research jobs in Columbus proper. Just south of Columbus are the North American headquarters of Toyota Material Handling, U.S.A., Inc., the world's largest material handling (forklift) manufacturer. Other notable industries include architecture, a discipline for which Columbus is famous worldwide. The late J. Irwin Miller (then president and chairman of Cummins Engine Company) launched the Cummins Foundation, a charitable program that helps subsidize a large number of architectural projects throughout the city by up-and-coming engineers and architects. Early in the 20th century, Columbus also was home to a number of pioneering car manufacturers, including Reeves, which produced the unusual four-axle Octoauto and the twin rear-axle Sextoauto, both around 1911. In addition to the Columbus Historic District and Irwin Union Bank, the city has numerous buildings listed on the National Register of Historic Places, including Bartholomew County Courthouse, Columbus City Hall, First Baptist Church, First Christian Church, Haw Creek Leather Company, Mabel McDowell Elementary School, McEwen-Samuels-Marr House, McKinley School, Miller House, North Christian Church, and The Republic Newspaper Office; seven of the city's modernist buildings are also National Historic Landmarks. Geography Columbus is located at (39.213998, −85.911056). The Driftwood and Flatrock Rivers converge at Columbus to form the East Fork of the White River. According to the 2010 census, Columbus has a total area of , of which (or 98.62%) is land and (or 1.38%) is water. Demographics 2010 census As of the census of 2010, there were 44,061 people, 17,787 households, and 11,506 families residing in the city. The population density was . There were 19,700 housing units at an average density of .
The racial makeup of the city was 86.9% White, 2.7% African American, 0.2% Native American, 5.6% Asian, 0.1% Pacific Islander, 2.5% from other races, and 2.0% from two or more races. Hispanic or Latino of any race were 5.8% of the population. There were 17,787 households, of which 33.5% had children under the age of 18 living with them, 48.5% were married couples living together, 11.7% had a female householder with no husband present, 4.5% had a male householder with no wife present, and 35.3% were non-families. 29.7% of all households were made up of individuals, and 11.5% had someone living alone who was 65 years of age or older. The average household size was 2.43 and the average family size was 3.00. The median age in the city was 37.1 years. 25.2% of residents were under the age of 18; 8.1% were between the ages of 18 and 24; 27.3% were from 25 to 44; 24.9% were from 45 to 64; and 14.4% were 65 years of age or older. The gender makeup of the city was 48.4% male and 51.6% female. 2000 census As of the census of 2000, there were 39,059 people, 15,985 households, and 10,566 families residing in the city. The population density was 1,505.3 people per square mile (581.1/km). There were 17,162 housing units at an average density of 661.4 per square mile (255.3/km). The racial makeup of the city was 91.32% White, 2.71% Black or African American, 0.13% Native American, 3.23% Asian, 0.05% Pacific Islander, 1.39% from other races, and 1.19% from two or more races. 2.81% of the population were Hispanic or Latino of any race. There were 15,985 households, out of which 31.8% had children under the age of 18 living with them, 51.9% were married couples living together, 11.0% had a female householder with no husband present, and 33.9% were non-families. 29.1% of all households were composed of individuals, and 10.7% had someone living alone who was 65 years of age or older. The average household size was 2.39, and the average family size was 2.94. 
In the city, the population was spread out, with 25.7% under the age of 18, 8.0% from 18 to 24 years, 29.5% from 25 to 44 years, 23.0% from 45 to 64 years, and 13.7% over the age of 65. The median age was 36 years. There were 92.8 males for every 100 females and 89.6 males for every 100 females over age 18. The median income for a household in the city was $41,723, and the median income for a family was $52,296. Males had a median income of $40,367 versus $24,446 for females, and the per capita income was $22,055. About 6.5% of families and 8.1% of the population were below the poverty line, including 9.7% of those under age 18 and 8.8% of those age 65 or over. Arts and culture Columbus is a city known for its modern architecture and public art. J. Irwin Miller, the second CEO of Cummins Inc., the Columbus-headquartered diesel engine manufacturer, and a nephew of one of its co-founders, instituted a program in which the Cummins Foundation paid the architects' fees, provided the client selected a firm from a list compiled by the foundation. The plan was initiated with public schools and was so successful that the foundation decided to offer such design support to other non-profit and civic organizations. The high number of notable public buildings and public art in the Columbus area, designed by such individuals as Eero Saarinen, I.M. Pei, Robert Venturi, Cesar Pelli, and Richard Meier, led to Columbus earning the nickname "Athens on the Prairie." Seven buildings, constructed between 1942 and 1965, are National Historic Landmarks, and approximately 60 other buildings sustain the Bartholomew County seat's reputation as a showcase of modern architecture. National Public Radio once devoted an article to the town's architecture. In 2015, Landmark Columbus was created as a program of Heritage Fund - The Community Foundation of Bartholomew County.
National Historic Landmarks First Baptist Church was designed by Harry Weese without windows and was dedicated in 1965. Its architectural features include a high-pitched roof and skylight. First Christian Church was designed by Eliel Saarinen with a 160-ft (49m) tower and was dedicated in 1942. Among the first Modern religious buildings in America, it includes a sunken terrace and a 900-person sanctuary. Irwin Union Bank was designed by Eero Saarinen and includes an addition by Kevin Roche. The building was dedicated in 1954 and is possibly the first financial institution in America to use glass walls and an open floor plan. The Mabel McDowell School opened in 1960 and was designed by John Carl Warnecke early in his career, using his "early comprehensive diverse approach."
The architect's fee was the second to be funded by the Cummins Engine Foundation. The Miller House and Garden was constructed in 1957 and was designed by Eero Saarinen and landscaped by Dan Kiley. One of the few residential designs by Saarinen, the home is an expression of International Style and was built for J. Irwin Miller of the Cummins Engine corporation and foundation. North Christian Church was designed by Eero Saarinen and held its first worship service in 1964. The hexagonal-shaped building includes a 192-ft (59m) spire and houses a Holtkamp organ. The Republic Newspaper Office was designed by Myron Goldsmith of Skidmore, Owings & Merrill. Other notable Modern buildings St. Bartholomew Catholic Church, by William Browne Jr. and Steven Risting Cleo Rogers Memorial Library, by I. M. Pei Columbus East High School, by Romaldo Giurgola Commons Centre and Mall, by César Pelli St. Peter's Lutheran Church, by Gunnar Birkerts Lincoln Elementary School, by Gunnar Birkerts Otter Creek Golf Course, by Harry Weese Fire Station No. 4, by Robert Venturi Columbus Regional Hospital, by Robert A.M. Stern Notable historic buildings Bartholomew County Courthouse by Isaac Hodgson Columbus Power House by Harrison Albright The Crump Theatre by Charles Franklin Sparrell Public art Chaos I by Jean Tinguely Friendship Way by William A. Johnson, containing an untitled neon sculpture by Cork Marcheschi Irwin Gardens at the Inn at Irwin Gardens Large Arch by Henry Moore 2 Arcs de 212.5˚ by Bernar Venet Horses by Costantino Nivola The Family by Harris Barron Yellow Neon Chandelier and Persians by Dale Chihuly C by Robert Indiana Sermon on the Mount by Loja Saarinen and Eliel Saarinen History and Mystery by William T. Wiley Exploded Engine by Rudolph de Harak Eos by Dessa Kirk Exhibit Columbus In May 2016, Landmark Columbus launched Exhibit Columbus as a way to continue the ambitious traditions of the past into the future.
Exhibit Columbus features annual programming that alternates between symposium and exhibition years. Sports Columbus High School was home to footwear pioneer Chuck Taylor, who played basketball in Columbus before setting out to promote his now famous shoes and the sport |
(PNG) Börje Langefors Chris Lattner – creator of Swift (programming language) and LLVM compiler infrastructure Steve Lawrence Edward D. Lazowska Joshua Lederberg Manny M. Lehman Charles E. Leiserson – cache-oblivious algorithms, provably good work-stealing, coauthor of Introduction to Algorithms Douglas Lenat – artificial intelligence, Cyc Yann LeCun Rasmus Lerdorf – PHP Max Levchin – Gausebeck–Levchin test and PayPal Leonid Levin – computational complexity theory Kevin Leyton-Brown – artificial intelligence J.C.R. Licklider David Liddle Jochen Liedtke – microkernel operating systems Eumel, L3, L4 John Lions – Lions' Commentary on UNIX 6th Edition, with Source Code (Lions Book) Charles H. Lindsey – IFIP WG 2.1 member, Revised Report on ALGOL 68 Richard J. Lipton – computational complexity theory Barbara Liskov – programming languages Yanhong Annie Liu – programming languages, algorithms, program design, program optimization, software systems, optimizing, analysis, and transformations, intelligent systems, distributed computing, computer security, IFIP WG 2.1 member Darrell Long – computer data storage, computer security Patricia D. Lopez – broadening participation in computing Gillian Lovegrove Ada Lovelace – first programmer David Luckham – Lisp, Automated theorem proving, Stanford Pascal Verifier, Complex event processing, Rational Software cofounder (Ada compiler) Eugene Luks Nancy Lynch M Nadia Magnenat Thalmann – computer graphics, virtual actor Tom Maibaum Zohar Manna – program verification, temporal logic James Martin – information engineering Robert C.
Martin (Uncle Bob) – software craftsmanship John Mashey Yuri Matiyasevich – solving Hilbert's tenth problem Yukihiro Matsumoto – Ruby (programming language) John Mauchly (1907–1980) – designed ENIAC, first general-purpose electronic digital computer, as well as EDVAC, BINAC and UNIVAC I, the first commercial computer; worked with Jean Bartik on ENIAC and Grace Murray Hopper on UNIVAC Ujjwal Maulik (1965–) Multi-objective Clustering and Bioinformatics Derek McAuley – ubiquitous computing, computer architecture, networking John McCarthy – Lisp (programming language), ALGOL, IFIP WG 2.1 member, artificial intelligence Andrew McCallum Douglas McIlroy – macros, pipes, Unix philosophy Chris McKinstry – artificial intelligence, Mindpixel Marshall Kirk McKusick – BSD, Berkeley Fast File System Lambert Meertens – ALGOL 68, IFIP WG 2.1 member, ABC (programming language) Kurt Mehlhorn – algorithms, data structures, LEDA Bertrand Meyer – Eiffel (programming language) Silvio Micali – cryptography Robin Milner – ML (programming language) Jack Minker – database logic Marvin Minsky – artificial intelligence, perceptrons, Society of Mind James G. Mitchell – WATFOR compiler, Mesa (programming language), Spring (operating system), ARM architecture Tom M. Mitchell Arvind Mithal – formal verification of large digital systems, developing dynamic dataflow architectures, parallel computing programming languages (Id, pH), compiling on parallel machines Paul Mockapetris – Domain Name System (DNS) Cleve Moler – numerical analysis, MATLAB Faron Moller – concurrency theory John P. Moon – inventor, Apple Inc. Charles H. Moore – Forth language Edward F. Moore – Moore machine Gordon Moore – Moore's law J Strother Moore – string searching, ACL2 theorem prover Roger Moore – co-developed APL\360, created IPSANET, co-founded I. P. 
Sharp Associates Hans Moravec – robotics Carroll Morgan – formal methods Robert Tappan Morris – Morris worm Joel Moses – Macsyma Rajeev Motwani – randomized algorithms Oleg A. Mukhanov – quantum computing developer, co-founder and CTO of SeeQC Stephen Muggleton – Inductive Logic Programming Klaus-Robert Müller – machine learning, artificial intelligence Alan Mycroft – programming languages Musharaf M. M. Hussain – parallel computing and multicore superscalar processors N Mihai Nadin – anticipation research Makoto Nagao – machine translation, natural language processing, digital library Frieder Nake – pioneered computer arts Bonnie Nardi – human–computer interaction Peter Naur (1928–2016) – Backus–Naur form (BNF), ALGOL 60, IFIP WG 2.1 member Roger Needham – computer security James G. Nell – Generalised Enterprise Reference Architecture and Methodology (GERAM) Greg Nelson (1953–2015) – satisfiability modulo theories, extended static checking, program verification, Modula-3 committee, Simplify theorem prover in ESC/Java Bernard de Neumann – massively parallel autonomous cellular processor, software engineering research Klara Dan von Neumann (1911–1963) – early computers, ENIAC programmer and control designer John von Neumann (1903–1957) – early computers, von Neumann machine, set theory, functional analysis, mathematics pioneer, linear programming, quantum mechanics Allen Newell – artificial intelligence, Computer Structures Max Newman – Colossus computer, MADM Andrew Ng – artificial intelligence, machine learning, robotics Nils John Nilsson (1933–2019) – artificial intelligence G.M.
Nijssen – Nijssen's Information Analysis Methodology (NIAM) object-role modeling Tobias Nipkow – proof assistance Maurice Nivat – theoretical computer science, Theoretical Computer Science journal, ALGOL, IFIP WG 2.1 member Phiwa Nkambule – Fintech, artificial intelligence, machine learning, robotics Jerre Noe – computerized banking Peter Nordin – artificial intelligence, genetic programming, evolutionary robotics Donald Norman – user interfaces, usability Peter Norvig – artificial intelligence, Director of Research at Google George Novacky – University of Pittsburgh: assistant department chair, senior lecturer in computer science, assistant dean of CAS for undergraduate studies Kristen Nygaard – Simula, object-oriented programming O Martin Odersky – Scala programming language Peter O'Hearn – separation logic, bunched logic, Infer Static Analyzer T. William Olle – Ferranti Mercury Steve Omohundro Severo Ornstein John O'Sullivan – Wi-Fi John Ousterhout – Tcl programming language Mark Overmars – video game programming P Larry Page – co-founder of Google Sankar Pal Paritosh Pandya Christos Papadimitriou David Park (1935–1990) – first Lisp implementation, expert in fairness, program schemas, bisimulation in concurrent computing David Parnas – information hiding, modular programming DJ Patil – former Chief Data Scientist of United States Yale Patt – Instruction-level parallelism, speculative architectures David A. 
Patterson – reduced instruction set computer (RISC), RISC-V, redundant arrays of inexpensive disks (RAID), Berkeley Network of Workstations (NOW) Mike Paterson – algorithms, analysis of algorithms (complexity) Mihai Pătraşcu – data structures Lawrence Paulson – ML Randy Pausch (1960–2008) – human–computer interaction, Carnegie professor, "Last Lecture" Juan Pavón – software agents Judea Pearl – artificial intelligence, search algorithms David Pearson – CADES, computer graphics Alan Perlis – ALGOL, first recipient of the Turing Award, "Epigrams on Programming" Radia Perlman – spanning tree protocol Pier Giorgio Perotto – computer designer at Olivetti, designer of the Programma 101 programmable calculator Rózsa Péter – recursive function theory Simon Peyton Jones – functional programming Kathy Pham – data, artificial intelligence, civic technology, healthcare, ethics Roberto Pieraccini – speech technologist, engineering director at Google Gordon Plotkin Amir Pnueli – temporal logic Willem van der Poel – computer graphics, robotics, geographic information systems, imaging, multimedia, virtual environments, games Cicely Popplewell (1920–1995) – British software engineer in the 1960s Emil Post – mathematics Jon Postel – Internet Franco Preparata – computer engineering, computational geometry, parallel algorithms, computational biology William H. Press – numerical algorithms R Rapelang Rabana Grzegorz Rozenberg – natural computing, automata theory, graph transformations and concurrent systems Michael O. Rabin – nondeterministic machine Dragomir R. Radev – natural language processing, information retrieval T. V. Raman – accessibility, Emacspeak Brian Randell – ALGOL 60, software fault tolerance, dependability, pre-1950 history of computing hardware Anders P. Ravn – Duration Calculus Raj Reddy – artificial intelligence David P. Reed Trygve Reenskaug – model–view–controller (MVC) software architecture pattern John C.
Reynolds – continuations, definitional interpreters, defunctionalization, Forsythe, Gedanken language, intersection types, polymorphic lambda calculus, relational parametricity, separation logic, ALGOL Joyce K. Reynolds – Internet Reinder van de Riet – Editor: Europe of Data and Knowledge Engineering, COLOR-X event modeling language Bernard Richards – medical informatics Martin Richards – BCPL Adam Riese C. J. van Rijsbergen Dennis Ritchie – C (programming language), Unix Ron Rivest – RSA, MD5, RC4 Ken Robinson – formal methods Colette Rolland – REMORA methodology, meta modelling John Romero – codeveloped Doom Azriel Rosenfeld Douglas T. Ross – Automatically Programmed Tools (APT), Computer-aided design, structured analysis and design technique, ALGOL X Guido van Rossum – Python (programming language) Winston W. Royce – waterfall model Rudy Rucker – mathematician, writer, educator Steven Rudich – complexity theory, cryptography Jeff Rulifson James Rumbaugh – Unified Modeling Language, Object Management Group Peter Ružička – Slovak computer scientist and mathematician S George Sadowsky Umar Saif Gerard Salton – information retrieval Jean E. Sammet – programming languages Claude Sammut – artificial intelligence researcher Carl Sassenrath – operating systems, programming languages, Amiga, REBOL Mahadev Satyanarayanan – file systems, distributed systems, mobile computing, pervasive computing Walter Savitch – discovery of complexity class NL, Savitch's theorem, natural language processing, mathematical linguistics Jonathan Schaeffer Wilhelm Schickard – one of the first calculating machines Jürgen Schmidhuber – artificial intelligence, deep learning, artificial neural networks, recurrent neural networks, Gödel machine, artificial curiosity, meta-learning Steve Schneider – formal methods, security Bruce Schneier – cryptography, security Fred B. 
Schneider – concurrent and distributed computing Sarita Schoenebeck – human–computer interaction Glenda Schroeder – command-line shell, e-mail Bernhard Schölkopf – machine learning, artificial intelligence Dana Scott – domain theory Michael L. Scott – programming languages, algorithms, distributed computing Robert Sedgewick – algorithms, data structures Ravi Sethi – compilers, 2nd Dragon Book Nigel Shadbolt Adi Shamir – RSA, cryptanalysis Claude Shannon – information theory David E. Shaw – computational finance, computational biochemistry, parallel architectures Cliff Shaw – systems programmer, artificial intelligence Scott Shenker – networking Ben Shneiderman – human–computer interaction, information visualization Edward H. Shortliffe – MYCIN (medical diagnostic expert system) Daniel Siewiorek – electronic design automation, reliability computing, context aware mobile computing, wearable computing, computer-aided design, rapid prototyping, fault tolerance Joseph Sifakis – model checking Herbert A. Simon – artificial intelligence Munindar P. Singh – multiagent systems, software engineering, artificial intelligence, social networks Ramesh Sitaraman – helped build Akamai's high performance network Daniel Sleator – splay tree, amortized analysis Aaron Sloman – artificial intelligence and cognitive science Arne Sølvberg – information modelling Brian Cantwell Smith – reflection (computer science), 3lisp Steven Spewak – enterprise architecture planning Carol Spradling Robert Sproull Rohini Kesavan Srihari – information retrieval, text analytics, multilingual text mining Sargur Srihari – pattern recognition, machine learning, computational criminology, CEDAR-FOX Maciej Stachowiak – GNOME, Safari, WebKit Richard Stallman (born 1953) – GNU Project Ronald Stamper Richard E. Stearns – computational complexity theory Guy L. Steele, Jr. – Scheme, Common Lisp Thomas Sterling – creator of Beowulf clusters Alexander Stepanov – generic programming W.
Richard Stevens (1951–1999) – author of books, including TCP/IP Illustrated and Advanced Programming in the Unix Environment Larry Stockmeyer – computational complexity, distributed computing Salvatore Stolfo – computer security, machine learning Michael Stonebraker – relational database practice and theory Olaf Storaasli – finite element machine, linear algebra, high performance computing Christopher Strachey – denotational semantics Volker Strassen – matrix multiplication, integer multiplication, Solovay–Strassen primality test Bjarne Stroustrup – C++ Madhu Sudan – computational complexity theory, coding theory Gerald Jay Sussman – Scheme Bert Sutherland – graphics, Internet Ivan Sutherland – graphics Mario Szegedy – complexity theory, quantum computing T Parisa Tabriz – Google Director of Engineering, also known as the Security Princess Roberto Tamassia – computational geometry, computer security Andrew S. Tanenbaum – operating systems, MINIX Austin Tate – Artificial Intelligence Applications, AI Planning, Virtual Worlds Bernhard Thalheim – conceptual modelling foundation Éva Tardos Gábor Tardos Robert Tarjan – splay tree Valerie Taylor Mario Tchou – Italian engineer, of Chinese descent, leader of Olivetti Elea project Jaime Teevan Shang-Hua Teng – analysis of algorithms Larry Tesler – human–computer interaction, graphical user interface, Apple Macintosh Avie Tevanian – Mach kernel team, NeXT, Mac OS X Charles P. Thacker – Xerox Alto, Microsoft Research Daniel Thalmann – computer graphics, virtual actor Ken Thompson – Unix Sebastian Thrun – AI researcher, pioneered autonomous driving Walter F. Tichy – RCS Seinosuke Toda – computation complexity, recipient of 1998 Gödel Prize Linus Torvalds – Linux kernel, Git Leonardo Torres y Quevedo (1852–1936) – invented El Ajedrecista (the chess player) in 1912, a true automaton built to play chess without human guidance. In his work Essays on Automatics (1913), introduced the idea of floating-point arithmetic. 
In 1920, he built an early electromechanical version of the Analytical Engine. Godfried Toussaint – computational geometry, computational music theory Gloria Townsend Edwin E. Tozer – business information systems Joseph F. Traub – computational complexity of scientific problems John V. Tucker – computability theory

Bresenham's algorithm Sergey Brin – co-founder of Google David J. Brown – unified memory architecture, binary compatibility Per Brinch Hansen (surname "Brinch Hansen") – RC 4000 multiprogramming system, operating system kernels, microkernels, monitors, concurrent programming, Concurrent Pascal, distributed computing & processes, parallel computing Sjaak Brinkkemper – methodology of product software development Fred Brooks – System 360, OS/360, The Mythical Man-Month, No Silver Bullet Rod Brooks Margaret Burnett – visual programming languages, end-user software engineering, and gender-inclusive software Michael Butler – Event-B C Tracy Camp – wireless computing Martin Campbell-Kelly – history of computing Rosemary Candlin Bryan Cantrill – invented DTrace Luca Cardelli John Carmack – codeveloped Doom Edwin Catmull – computer graphics Vinton Cerf – Internet, TCP/IP Gregory Chaitin Robert Cailliau – Belgian computer scientist Zhou Chaochen – duration calculus Peter Chen – entity-relationship model, data modeling, conceptual model Leonardo Chiariglione – founder of MPEG Tracy Chou – computer scientist and activist Alonzo Church – mathematics of combinators, lambda calculus Alberto Ciaramella – speech recognition, patent informatics Edmund M. Clarke – model checking John Cocke – RISC Edgar F.
Codd (1923–2003) – formulated the database relational model Jacques Cohen – computer science professor Ian Coldwater – computer security Simon Colton – computational creativity Alain Colmerauer – Prolog Douglas Comer – Xinu Paul Justin Compton – Ripple Down Rules Gordon Cormack – co-invented dynamic Markov compression Stephen Cook – NP-completeness James Cooley – Fast Fourier transform (FFT) Danese Cooper – open-source software Fernando J. Corbató – Compatible Time-Sharing System (CTSS), Multics Kit Cosper – open-source software Patrick Cousot – abstract interpretation Ingemar Cox – digital watermarking Seymour Cray – Cray Research, supercomputer Nello Cristianini – machine learning, pattern analysis, artificial intelligence Jon Crowcroft – networking W. Bruce Croft Glen Culler – interactive computing, computer graphics, high performance computing Haskell Curry D Luigi Dadda – designer of the Dadda multiplier Ole-Johan Dahl – Simula, object-oriented programming Ryan Dahl – founder of node.js project Andries van Dam – computer graphics, hypertext Samir Das – Wireless Networks, Mobile Computing, Vehicular ad hoc network, Sensor Networks, Mesh networking, Wireless ad hoc network Neil Daswani – computer security, co-founder and co-director of Stanford Advanced Computer Security Program, co-founder of Dasient (acquired by Twitter), former chief information security officer of LifeLock and Symantec's Consumer Business Unit Christopher J. Date – proponent of database relational model Jeff Dean – Bigtable, MapReduce, Spanner of Google Erik Demaine – computational origami Tom DeMarco Richard DeMillo – computer security, software engineering, educational technology Dorothy E. Denning – computer security Peter J.
Denning – identified the use of an operating system's working set and balance set, President of ACM Michael Dertouzos – Director of Massachusetts Institute of Technology (MIT) Laboratory for Computer Science (LCS) from 1974 to 2001 Alexander Dewdney Robert Dewar – IFIP WG 2.1 member, ALGOL 68, chairperson; AdaCore cofounder, president, CEO Vinod Dham – P5 Pentium processor Jan Dietz (born 1945) – information systems theory and Design & Engineering Methodology for Organizations Whitfield Diffie (born 1944) – public key cryptography, Diffie–Hellman key exchange Edsger Dijkstra – algorithms, Dijkstra's algorithm, Go To Statement Considered Harmful, semaphore (programming), IFIP WG 2.1 member Matthew Dillon – DragonFly BSD with LWKT, vkernel OS-level virtualisation, file systems: HAMMER1, HAMMER2 Alan Dix – wrote important university level textbook on human–computer interaction Jack Dongarra – linear algebra, high performance computing (HPC) Marco Dorigo – ant colony optimization Paul Dourish – human computer interaction Charles Stark Draper (1901–1987) – designer of Apollo Guidance Computer, "father of inertial navigation", MIT professor Susan Dumais – information retrieval Adam Dunkels – Contiki, lwIP, uIP, protothreads Jon Michael Dunn – founding dean of Indiana University School of Informatics, information based logics especially relevance logic Schahram Dustdar – Distributed Systems, TU Wien, Austria E Peter Eades – graph drawing Annie J. Easley Wim Ebbinkhuijsen – COBOL John Presper Eckert – ENIAC Alan Edelman – Edelman's Law, stochastic operator, Interactive Supercomputing, Julia (programming language) cocreator, high performance computing, numerical computing Brendan Eich – JavaScript, Mozilla Philip Emeagwali – supercomputing E.
Allen Emerson – model checking Douglas Engelbart – tiled windows, hypertext, computer mouse Barbara Engelhardt – latent variable models, genomics, quantitative trait locus (QTL) David Eppstein Andrey Ershov – languages ALPHA, Rapira; first Soviet time-sharing system AIST-0, electronic publishing system RUBIN, multiprocessing workstation MRAMOR, IFIP WG 2.1 member, Aesthetics and the Human Factor in Programming Don Estridge (1937–1985) – led development of original IBM Personal Computer (PC); known as "father of the IBM PC" Oren Etzioni – MetaCrawler, Netbot Christopher Riche Evans David C. Evans – computer graphics Shimon Even F Scott Fahlman Edward Feigenbaum – artificial intelligence Edward Felten – computer security Tim Finin Raphael Finkel Donald Firesmith Gary William Flake Tommy Flowers – Colossus computer Robert Floyd – NP-completeness Sally Floyd – Internet congestion control Lawrence J. Fogel – evolutionary programming James D. Foley Ken Forbus L. R. Ford, Jr. Lance Fortnow Martin Fowler Robert France Herbert W. Franke Edward Fredkin Yoav Freund Daniel P. Friedman Charlotte Froese Fischer – computational theoretical physics Ping Fu Xiaoming Fu Kunihiko Fukushima – neocognitron, artificial neural networks, convolutional neural network architecture, unsupervised learning, deep learning D. R. Fulkerson G Richard P. Gabriel – Maclisp, Common Lisp, Worse is Better, League for Programming Freedom, Lucid Inc., XEmacs Zvi Galil Bernard Galler – MAD (programming language) Hector Garcia-Molina Michael Garey – NP-completeness Hugo de Garis Bill Gates – cofounder of Microsoft David Gelernter Lisa Gelobter – Chief Digital Service Officer for the U.S. Department of Education, founder of teQuitable Charles Geschke Zoubin Ghahramani Sanjay Ghemawat Jeremy Gibbons – generic programming, functional programming, formal methods, computational biology, bioinformatics Juan E.
Gilbert – human-centered computing Lee Giles – CiteSeer Seymour Ginsburg – formal languages, automata theory, AFL theory, database theory Robert L. Glass Kurt Gödel – computability; not a computer scientist per se, but his work was invaluable in the field Ashok Goel Joseph Goguen Hardik Gohel E. Mark Gold – Language identification in the limit Adele Goldberg – Smalltalk Andrew V. Goldberg – algorithms, algorithm engineering Ian Goldberg – cryptographer, off-the-record messaging Oded Goldreich – cryptography, computational complexity theory Shafi Goldwasser – cryptography, computational complexity theory Gene Golub – Matrix computation Martin Charles Golumbic – algorithmic graph theory Gastón Gonnet – cofounder of Waterloo Maple Inc. Ian Goodfellow – machine learning James Gosling – Network extensible Window System (NeWS), Java Paul Graham – Viaweb, On Lisp, Arc Robert M. Graham – programming language compilers (GAT, Michigan Algorithm Decoder (MAD)), virtual memory architecture, Multics Susan L. Graham – compilers, programming environments Jim Gray – database Sheila Greibach – Greibach normal form, Abstract family of languages (AFL) theory Ralph Griswold – SNOBOL Bill Gropp – Message Passing Interface, Portable, Extensible Toolkit for Scientific Computation (PETSc) Tom Gruber – ontology engineering Shelia Guberman – handwriting recognition Ramanathan V. Guha – Resource Description Framework (RDF), Netscape, RSS, Epinions Neil J. 
Gunther – computer performance analysis, capacity planning Jürg Gutknecht – with Niklaus Wirth: Lilith computer; Modula-2, Oberon, Zonnon programming languages; Oberon operating system Michael Guy – Phoenix, work on number theory, computer algebra, higher dimension polyhedra theory; with John Horton Conway H Nico Habermann – work on operating systems, software engineering, inter-process communication, process synchronization, deadlock avoidance, software verification, programming languages: ALGOL 60, BLISS, Pascal, Ada Philipp Matthäus Hahn – mechanical calculator Eldon C. Hall – Apollo Guidance Computer Wendy Hall Joseph Halpern Margaret Hamilton – ultra-reliable software design Richard Hamming – Hamming code, founder of the Association for Computing Machinery Jiawei Han – data mining Frank Harary – graph theory Juris Hartmanis – computational complexity theory Johan Håstad – computational complexity theory Les Hatton – software failure and vulnerabilities Igor Hawryszkiewycz, (born 1948) – American computer scientist and organizational theorist He Jifeng – provably correct systems Eric Hehner – predicative programming, formal methods, quote notation, ALGOL Martin Hellman – encryption Gernot Heiser – operating system teaching, research, commercialising, Open Kernel Labs, OKL4, Wombat James Hendler – Semantic Web John L. Hennessy – computer architecture Andrew Herbert Carl Hewitt Kelsey Hightower – open source, cloud computing Danny Hillis – Connection Machine Geoffrey Hinton Julia Hirschberg Tin Kam Ho – artificial intelligence, machine learning C. A. R. 
Hoare – logic, rigor, communicating sequential processes (CSP) Louis Hodes (1934–2008) – Lisp, pattern recognition, logic programming, cancer research Betty Holberton – ENIAC programmer, developed the first Sort Merge Generator John Henry Holland – genetic algorithms Herman Hollerith (1860–1929) – invented recording of data on a machine readable medium, using punched cards Gerard Holzmann – software verification, logic model checking (SPIN) John Hopcroft – compilers Admiral Grace Hopper (1906–1992) – developed early compilers: FLOW-Matic, COBOL; worked on UNIVAC; gave speeches on computer history, where she gave out nano-seconds Eric Horvitz – artificial intelligence Alston Householder Paul Hudak (1952–2015) – Haskell language design David A. Huffman (1925–1999) – Huffman coding, used in data compression John Hughes – structuring computations with arrows; QuickCheck randomized program testing framework; Haskell language design Roger Hui – co-created J language Watts Humphrey (1927–2010) – Personal Software Process (PSP), Software quality, Team Software Process (TSP) I Jean Ichbiah – Ada Roberto Ierusalimschy – Lua (programming language) Dan Ingalls – Smalltalk, BitBlt, Lively Kernel Mary Jane Irwin Kenneth E. Iverson – APL, J J Ivar Jacobson – Unified Modeling Language, Object Management Group Anil K. Jain (born 1948) Ramesh Jain Jonathan James David S. Johnson Stephen C. Johnson Cliff Jones – Vienna Development Method (VDM) Michael I. Jordan Mathai Joseph Aravind K. Joshi Bill Joy (born 1954) – Sun Microsystems, BSD UNIX, vi, csh Dan Jurafsky – natural language processing K William Kahan – numerical analysis Robert E. 
Kahn – TCP/IP Avinash Kak – digital image processing Poul-Henning Kamp – invented GBDE, FreeBSD Jails, Varnish cache David Karger Richard Karp – NP-completeness Narendra Karmarkar – Karmarkar's algorithm Marek Karpinski – NP optimization problems Ted Kaehler – Smalltalk, Squeak, HyperCard Alan Kay – Dynabook, Smalltalk, overlapping windows Neeraj Kayal – AKS primality test Manolis Kellis - computational biology John George Kemeny – BASIC Ken Kennedy – compiling for parallel and vector machines Brian Kernighan (born 1942) – Unix, the 'k' in AWK Carl Kesselman – grid computing Gregor Kiczales – CLOS, reflection, aspect-oriented programming Peter T. Kirstein – Internet Stephen Cole Kleene – Kleene closure, recursion theory Dan Klein – Natural language processing, Machine translation Leonard Kleinrock – ARPANET, queueing theory, packet switching, hierarchical routing Donald Knuth – The Art of Computer Programming, MIX/MMIX, TeX, literate programming Andrew Koenig – C++ Daphne Koller – Artificial intelligence, bayesian network Michael Kölling – BlueJ Andrey Nikolaevich Kolmogorov – algorithmic complexity theory Janet L. Kolodner – case-based reasoning David Korn – KornShell Kees Koster – ALGOL 68 Robert Kowalski – logic programming John Koza – genetic programming John Krogstie – SEQUAL framework Joseph Kruskal – Kruskal's algorithm Thomas E. Kurtz (born 1928) – BASIC programming language; Dartmouth College computer professor L Richard E. Ladner Monica S. 
Lam Leslie Lamport – algorithms for distributed computing, LaTeX Butler Lampson – SDS 940, founding member Xerox PARC, Xerox Alto, Turing Award Peter Landin – ISWIM, J operator, SECD machine, off-side rule, syntactic sugar, ALGOL, IFIP WG 2.1 member, advanced lambda calculus to model programming languages (aided functional programming), denotational semantics Tom Lane – Independent JPEG Group, PostgreSQL, Portable Network Graphics (PNG) Börje Langefors Chris Lattner – creator of Swift (programming language) and LLVM compiler infrastructure Steve Lawrence Edward D. Lazowska Joshua Lederberg Manny M Lehman Charles E. Leiserson – cache-oblivious algorithms, provably good work-stealing, coauthor of Introduction to Algorithms Douglas Lenat – artificial intelligence, Cyc Yann LeCun Rasmus Lerdorf – PHP Max Levchin – Gausebeck–Levchin test and PayPal Leonid Levin – computational complexity theory Kevin Leyton-Brown – artificial intelligence J.C.R. Licklider David Liddle Jochen Liedtke – microkernel operating systems Eumel, L3, L4 John Lions – Lions' Commentary on UNIX 6th Edition, with Source Code (Lions Book) Charles H. Lindsey – IFIP WG 2.1 member, Revised Report on ALGOL 68 Richard J. Lipton – computational complexity theory Barbara Liskov – programming languages Yanhong Annie Liu – programming languages, algorithms, program design, program optimization,
them expensive to run. Two of the leading centres have been the University of Rennes (France) and the University of Birmingham (UK). A more recent development has been a pulsed version of the CRESU, which requires far less gas and therefore smaller pumps. Kinetics Most species have a negligible vapour pressure at such low temperatures and this means that they quickly condense on the sides of the apparatus. Essentially, the CRESU technique provides a "wall-less flow tube," which allows the kinetics of gas phase reactions to be investigated at much lower temperatures than otherwise possible. Chemical kinetics experiments can then be carried out in a pump-probe fashion using a laser to initiate the reaction (for example by preparing one of the reagents by photolysis of a precursor), followed by observation of that same species (for example by laser-induced fluorescence) after a known time delay. The fluorescence signal is captured by a photomultiplier a known distance downstream of the de Laval nozzle. The time delay can be varied up to the maximum corresponding to the flow time over that known distance. By studying how quickly the reagent species disappears in the presence of differing concentrations of a (usually stable) co-reagent species the reaction rate constant at the low temperature of the CRESU flow can be determined. Reactions studied
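The rate-constant extraction described above amounts to two linear fits: at each co-reagent concentration, the decay of the probed reagent's fluorescence with pump-probe delay gives a pseudo-first-order rate, and the slope of those rates against concentration gives the bimolecular rate constant. The following sketch uses synthetic numbers, not data from any actual CRESU study; all function names are illustrative.

```python
import math

def linear_slope(xs, ys):
    """Least-squares slope of y against x."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))

def pseudo_first_order_rate(delays_s, signals):
    """Signal decays as exp(-k1*t), so ln(signal) vs delay has slope -k1 (s^-1)."""
    return -linear_slope(delays_s, [math.log(s) for s in signals])

def bimolecular_rate_constant(concentrations, k1_values):
    """k1 = k*[co-reagent] + k_loss, so the slope of k1 vs concentration is k."""
    return linear_slope(concentrations, k1_values)
```

Repeating the decay measurement at several co-reagent concentrations and feeding the resulting pseudo-first-order rates to `bimolecular_rate_constant` yields the rate constant at the temperature of the CRESU flow; the intercept absorbs any concentration-independent loss.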
it does not create them as their semantics are not fully POSIX-compliant. The POSIX API for handling access control lists (ACLs) is supported and maps to the Windows NT ACL system. Special formats of /etc/passwd and /etc/group are provided that include pointers to the Windows equivalent SIDs (in the Gecos field), allowing for mapping between Unix and Windows users and groups. The fork system call for duplicating a process is fully implemented, but it does not map well to the Windows API. For example, the copy-on-write optimization strategy could not be used. As a result, Cygwin's fork is rather slow compared with Linux and others. (That overhead can often be avoided by replacing uses of the fork/exec technique with calls to the spawn functions declared in the Windows-specific process.h header). The Cygwin DLL contains a console driver that emulates a Unix-style terminal within the Windows console. Cygwin's default user interface is the bash shell running in the Cygwin console. The DLL also implements pseudo terminal (pty) devices. Cygwin ships with a number of terminal emulators that are based on them, including mintty, rxvt(-unicode), and xterm. These are more compliant with Unix terminal standards and user interface conventions than the Cygwin console, but are less suited for running Windows console programs. Various utilities are provided for converting between Windows and Unix paths and file formats, for handling line ending (CRLF/LF) issues, for displaying the DLLs that an executable is linked with, etc. Apart from always being linked against the Cygwin DLL, Cygwin executables are normal Windows executables. This means that Cygwin programs have full access to the Windows API and other Windows libraries, which allows gradual porting of programs from one platform to the other. However, programmers need to be careful about mixing conflicting POSIX and Windows functions. 
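The fork-to-spawn substitution described above can be sketched in Python, whose os.spawn* family follows the same single-call pattern as the spawn functions in Cygwin's process.h (and, underneath, Windows CreateProcess). This is an illustration of the two patterns, not Cygwin's implementation; os.fork is POSIX-only.

```python
import os
import sys

def run_with_fork_exec(argv):
    """Classic Unix pattern: duplicate the process, then replace the child image.
    Under Cygwin this is costly, since fork cannot use copy-on-write."""
    pid = os.fork()
    if pid == 0:                        # child
        os.execv(argv[0], argv)        # does not return on success
    _, status = os.waitpid(pid, 0)     # parent waits for the child
    return os.waitstatus_to_exitcode(status)

def run_with_spawn(argv):
    """Single-call alternative: create the new process directly, no fork step."""
    return os.spawnv(os.P_WAIT, argv[0], argv)
```

Both functions run the same command and return its exit code; the spawn form simply skips the intermediate address-space duplication, which is why it maps more cleanly onto the Windows process model.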
The version of gcc that comes with Cygwin has various extensions for creating Windows DLLs, specifying whether a program is a windowing or console-mode program, adding resources, etc. Support for compiling programs that do not require the POSIX compatibility layer provided by the Cygwin DLL used to be included in the default gcc, but is provided by cross-compilers contributed by the MinGW-w64 project. Cygwin is used heavily for porting many popular pieces of software to the Windows platform. It is used to compile Sun Java, LibreOffice, and even web server software like Lighttpd and Hiawatha. The Cygwin API library is licensed under the GNU Lesser General Public License version 3 (or later) with an exception to allow linking to any free and open-source software whose license conforms to the Open Source Definition (less strict than the Free Software Definition). History Cygwin began in 1995 as a project of Steve Chamberlain, a Cygnus engineer who observed that Windows NT and 95 used COFF as their object file format, and that GNU already included support for x86 and COFF, and the C library newlib. He thought it would be possible to retarget GCC and produce a cross compiler generating executables that could run on Windows. This proved practical and a prototype was quickly developed. The next step was to attempt to bootstrap the compiler on a Windows system, requiring sufficient emulation of Unix to let the GNU configure shell script run. A Bourne shell-compatible command interpreter, such as bash, was needed and in turn a fork system call emulation and standard input/output. Windows includes similar functionality, so the Cygwin library just needed to provide a POSIX-compatible application programming interface (API) and properly translate calls and manage private versions of data, such as file descriptors. Initially, Cygwin was called gnuwin32 (not to be confused with the current GnuWin32 project). 
The name was changed to Cygwin32 to emphasize Cygnus' role in creating it. When Microsoft registered the trademark Win32, the 32 was dropped to simply become Cygwin. By 1996, other engineers had joined in, because it was clear that Cygwin would be a useful way to provide Cygnus' embedded tools hosted on Windows systems (the previous strategy had been to use DJGPP). It was especially attractive because it was possible to do a three-way cross-compile, for instance to use a hefty Sun Microsystems workstation to build, say, a Windows-x-MIPS cross-compiler, which was faster than using the PC at the time. In 1999, Cygnus offered Cygwin 1.0 as a commercial product of interest in its own right, although subsequent versions have not been released; the project has instead relied on continued open-source releases. Geoffrey Noer was the project lead from 1996 to 1999. Christopher Faylor was the project lead from 1999 to mid-2014. Corinna Vinschen became co-lead in 2004, when Faylor left Red Hat, and has been the lead since mid-2014, when Faylor withdrew from active participation in the project. From June 23, 2016, the Cygwin library version 2.5.2 was licensed under the GNU Lesser General Public License (LGPL) version 3, so it is now possible to link it against closed-source applications. Before this, there were two possibilities: release the source code of your application, or buy a Cygwin license to release a closed-source application. Features Cygwin's base package selection is fairly small (about 100 MB), containing little more than the bash (interactive user) and dash (installation) shells and the core file and text manipulation utilities expected of a Unix command line. Additional packages are available as optional installs from within Cygwin's package manager ("setup-x86.exe" – 32bit & "setup-x86_64.exe" – 64bit). These include (among many others): Shells (i.e.
command line interpreters): bash, dash, fish, pdksh, tcsh, zsh, mksh File and system utilities: coreutils, findutils, util-linux Text utilities: grep, sed, diff, patch, awk Terminals: mintty, rxvt, screen Editors: ed, emacs, joe, mined, nano, vim Remote login: ssh, rsh, telnet Remote file transfer/synchronization: ftp, scp, rsync, unison, rtorrent Compression/archiving: tar, gzip, bzip2, lzma, zip Text processing: TeX, groff, Ghostscript Programming languages: C, C++, Objective-C, Fortran, Gambas, Perl, Python, Ruby, Tcl, Ada, CLISP, Scheme, OCaml, Prolog Development tools: make, autotools, flex, bison, doxygen Version control systems: cvs, subversion, git, mercurial Servers: Apache, BIND, PostgreSQL, Pure-FTPd, OpenSSH, telnetd, exim, UW IMAP Clients: Mutt (email), Lynx (web), Irssi (IRC), tin (newsgroups) The Cygwin/X project contributes an implementation of the X Window System that allows graphical Unix programs to display their user interfaces on the Windows desktop. This can be used with both local and remote programs. Cygwin/X supports over 500 packages including major X window managers, desktop environments, and applications, for example: Terminals: rxvt-unicode, xterm Editors: emacs-X11, gvim Text processors/viewers: LyX, xpdf, xdvi WWW browsers: epiphany, konqueror, links, lynx, midori, qupzilla, w3m In addition to the low-level Xlib/XCB libraries for developing X applications, Cygwin also ships with various higher-level and cross-platform GUI frameworks, including GTK+ and Qt.
The Cygwin Ports project provided many additional packages that were not available in the Cygwin distribution itself. Examples included GNOME and K Desktop Environment 3 |
city center. Local bus service is also available. Railways The metre gauge railway from Athens and Piraeus reached Corinth in 1884. This station closed to regular public transport in 2007. Two years earlier, in 2005, the city had been connected to the Proastiakos, the Athens suburban rail network, following the completion of the new Corinth railway station. The journey from Athens to Corinth takes approximately 55 minutes. The railway station is five minutes by car from the city center, and free parking is available. Port The port of Corinth, located north of the city centre and close to the northwest entrance of the Corinth Canal, at 37°56.0′N / 22°56.0′E, serves the local needs of industry and agriculture. It is mainly a cargo exporting facility. It is an artificial harbour (depth approximately ), protected by a concrete mole (length approximately 930 metres, width 100 metres, mole surface 93,000 m2). A new pier finished in the late 1980s doubled the capacity of the port. The reinforced mole protects anchored vessels from strong northern winds. Within the port operates a customs office facility and a Hellenic Coast Guard post. Sea traffic is limited to trade in the export of local produce, mainly citrus fruits, grapes, marble, aggregates and some domestic imports. The port operates as a contingency facility for general cargo ships, bulk carriers and ROROs, in case of strikes at Piraeus port. Ferries There was formerly a ferry link to Catania, Sicily and Genoa in Italy. Canal The Corinth Canal, carrying ship traffic between the western Mediterranean Sea and the Aegean Sea, is about east of the city, cutting through the Isthmus of Corinth that connects the Peloponnesian peninsula to the Greek mainland, thus effectively making the former an island. The builders dug the canal through the Isthmus at sea level; no locks are employed. It is in length and only wide at its base, making it impassable for most modern ships. It now has little economic importance.
The canal was mooted in classical times, and an abortive effort was made to build it in the 1st century AD. Julius Caesar and Caligula both considered digging the canal but died before starting the construction. The emperor Nero was the first to attempt to construct the canal. The Roman workforce responsible for the initial digging consisted of 6,000 Jewish prisoners of war. Modern construction started in 1882, after Greece gained independence from the Ottoman Empire, but was hampered by geological and financial problems that bankrupted the original builders. It was completed in 1893, but due to the canal's narrowness, navigational problems and periodic closures to repair landslips from its steep walls, it failed to attract the level of traffic anticipated by its operators. It is now used mainly for tourist traffic. Sport The city's association football team is Korinthos F.C. (Π.Α.E. Κόρινθος), established in 1999 after the merger of Pankorinthian Football Club (Παγκορινθιακός) and Corinth Football Club (Κόρινθος). During the 2006–2007 season, the team played in the Greek Fourth Division's Regional Group 7. The team went undefeated that season, earning the top spot. This granted the team a promotion to the Gamma Ethnikí (Third Division) for the 2007–2008 season. For the 2008–2009 season, Korinthos F.C. competed in the Gamma Ethniki (Third Division) southern grouping. Twin towns/sister cities Corinth is twinned with: Syracuse, Sicily Notable people Costas Soukoulis (1951–), Professor of Physics at Iowa State University George Kollias (1977–), drummer for US technical death metal band Nile Ioannis Papadiamantopoulos (1766–1826), revolutionary leader during the Greek War of Independence Georgios Leonardopoulos, army officer Irene Papas, Greek actress Macarius (1731–1805), Metropolitan bishop of Corinth Anastasios Bakasetas (1993–), Greek footballer Evangelos

Mountains, and the monolithic rock of Acrocorinth, where the medieval acropolis was built.
History Corinth derives its name from Ancient Corinth, a city-state of antiquity. The site was occupied from before 3000 BC. Historical references begin with the early 8th century BC, when Corinth began to develop as a commercial center. Between the 8th and 7th centuries BC, the Bacchiad family ruled Corinth. Cypselus overthrew the Bacchiad family, and between 657 and 550 BC, he and his son Periander ruled Corinth as the Tyrants. In about 550 BC, an oligarchical government seized power. This government allied with Sparta within the Peloponnesian League, and Corinth participated in the Persian Wars and Peloponnesian War as an ally of Sparta. After Sparta's victory in the Peloponnesian war, the two allies fell out with one another, and Corinth pursued an independent policy in the various wars of the early 4th century BC. After the Macedonian conquest of Greece, the Acrocorinth was the seat of a Macedonian garrison until 243 BC, when the city was liberated and joined the Achaean League. Nearly a century later, in 146 BC, Corinth was captured and completely destroyed by the Roman army. Refounded as a Roman colony in 44 BC, Corinth flourished and became the administrative capital of the Roman province of Achaea. In 1858, the old city, now known as Ancient Corinth (Αρχαία Κόρινθος, Archaia Korinthos), located southwest of the modern city, was totally destroyed by a magnitude 6.5 earthquake. New Corinth (Nea Korinthos) was then built to the north-east of it, on the coast of the Gulf of Corinth. In 1928, a magnitude 6.3 earthquake devastated the new city, which was then rebuilt on the same site. In 1933, there was a great fire, and the new city was rebuilt again. Demographics The Municipality of Corinth (Δήμος Κορινθίων) had a population of 58,192 according to the 2011 census, the second most populous municipality in the Peloponnese Region after Kalamata. 
The municipal unit of Corinth had 38,132 inhabitants, of which Corinth itself had 30,176 inhabitants, placing it in third place behind Kalamata and Tripoli among the cities of the Peloponnese Region. The municipal unit of Corinth (Δημοτική ενότητα Κορινθίων) includes, apart from Corinth proper, the town of Archaia Korinthos (2,198 inhabitants in 2011), the town of Examilia (2,905 inhabitants), and the smaller settlements of Xylokeriza (1,316 inhabitants) and Solomos (817 inhabitants). The municipal unit has an area of 102.187 km2. Economy Industry Corinth is a major industrial hub at a national level. The Corinth Refinery is one of the largest oil refining industrial complexes in Europe. Ceramic tiles, copper cables, gums, gypsum, leather, marble, meat products, medical equipment, mineral water and beverages, petroleum products, and salt are produced nearby. , a period of economic change commenced as a large pipework complex, a textile factory and a meat-packing facility diminished their operations. Transport Roads Corinth is a major road hub. The A7 toll motorway for Tripoli and Kalamata (and Sparta via the A71 toll) branches off the A8/European route E94 toll motorway from Athens at Corinth. Corinth is the main entry point to the Peloponnesian peninsula, the southernmost area of continental Greece. Bus KTEL Korinthias provides intercity bus service in the peninsula and to Athens via the Isthmos station southeast of the city center. Local bus service is also available. Railways The metre gauge railway from Athens and Piraeus reached Corinth in 1884. This station closed |
but whose metropolitan bishop Nunechius of Laodicea, the capital of the Roman province of Phrygia Pacatiana, signed the acts on his behalf. Byzantine period and decline The city's fame and renowned status continued into the Byzantine period, and in 858, it was distinguished as a Metropolitan See. The Byzantines also built the church of St. Michael in the vicinity of Colossae, one of the largest church buildings in the Middle East. Nevertheless, sources suggest that the town may have decreased in size or may even have been completely abandoned due to Arab invasions in the seventh and eighth centuries, forcing the population to flee and resettle in the nearby city of Chonai (modern day Honaz). Colossae's famous church was destroyed in 1192/3 during the Byzantine civil wars. It was a suffragan diocese of Laodicea in Phrygia Pacatiana but was replaced in the Byzantine period by the Chonae settlement on higher ground. Modern study and archeology As of 2019, Colossae has never been excavated, as most archeological attention has been focused on nearby Laodicea and Hierapolis, though plans are reported for an Australian-led expedition to the site. The present site exhibits a biconical acropolis almost 100 feet high, and encompasses an area of almost 22 acres. On the eastern slope there sits a theater which probably seated around 5,000 people, suggesting a total population of 25,000–30,000 people. The theater was probably built during the Roman period, and may be near an agora that abuts the Cardo Maximus, or the city's main north–south road. Ceramic finds around the theater confirm the city's early occupation in the third and second millennia BC. Northeast of the tell, and most likely outside the city walls, a necropolis displays Hellenistic tombs with two main styles of burial: one with an antecedent room connected to an inner chamber, and tumuli, or underground chambers accessed by stairs leading to the entrance. 
Outside the tell there are also remains of sections of columns that may have marked a processional way or the cardo. Today, the remains of one column mark the location where locals believe a church once stood, possibly that of St. Michael. Near the Lycus River, there is evidence that water channels had been cut out of the rock, with a complex of pipes and sluice gates to divert water for bathing and for agricultural and industrial purposes. Modern legacy The holiness and healing properties associated with the waters of Colossae during the Byzantine Era continue to this day, | name The medieval poet Manuel Philes, incorrectly, imagined that the name "Colossae" was connected to the Colossus of Rhodes. More recently, in an interpretation which ties Colossae to an Indo-European root that happens to be shared with the word kolossos, Jean-Pierre Vernant has connected the name to the idea of setting up a sacred space or shrine. Another proposal relates the name to the Greek kolazo, "to punish". Others believe the name derives from the manufacture of its famous dyed wool, or colossinus. History Before the Pauline period The first mention of the city may be in a 17th-century BC Hittite inscription, which speaks of a city called Huwalušija, which some archeologists believe refers to early Colossae. The fifth-century BC geographer Herodotus first mentions Colossae by name, as a "great city in Phrygia" which accommodated the Persian King Xerxes I while en route to wage war against the Greeks, showing that the city had already reached a certain level of wealth and size by that time. Writing in the 4th century BC, Xenophon refers to Colossae as "a populous city, wealthy and of considerable magnitude". It was famous for its wool trade. Strabo notes that the city drew great revenue from the flocks, and that the wool of Colossae gave its name to the colour colossinus. 
In 396 BC, Colossae was the site of the execution of the rebellious Persian satrap Tissaphernes, who was lured there and slain by an agent of the party of Cyrus the Younger. Pauline period Although during the Hellenistic period the town was of some mercantile importance, by the 1st century it had dwindled greatly in size and significance. Paul's letter to the Colossians points to the existence of an early Christian community. The town was known for its fusion of religious influences (syncretism), which included Jewish, Gnostic, and pagan influences that in the first century AD were described as an angel-cult. This unorthodox cult venerated the archangel Michael, who is said to have caused a curative spring to gush from a fissure in the Earth. The worship of angels showed analogies with the cult of pre-Christian pagan deities like Zeus. Saint Theodoret of Cyrrhus reported that the cult still survived in Phrygia during the fourth century. The canonical biblical text Epistle to the Colossians is addressed to the Christian community in Colossae. The epistle has traditionally been attributed to Paul the Apostle due to its autobiographical salutation and style, but some modern critical scholars now believe it to be written by another author some time after Paul's death. It is believed that one aim of the letter was to address the challenges that the Colossian community faced in its context of the syncretistic Gnostic religions that were developing in Asia Minor. According to the Epistle to the Colossians, Epaphras seems to have been a person of some importance in the Christian community in Colossae, and tradition presents him as its first bishop. The epistle also seems to imply that Paul had never visited the city, because it only speaks of him having "heard" of the Colossians' faith, and in the Epistle to Philemon Paul tells Philemon of his hope to visit Colossae upon being freed from prison. Tradition also gives Philemon as the second bishop of the see. 
The city was decimated by an earthquake in the 60s AD, and was rebuilt independent of the support of Rome. The Apostolic Constitutions list Philemon as a Bishop of Colossae. On the other hand, the Catholic Encyclopedia considers Philemon doubtful. The first historically documented bishop is Epiphanius, who was not personally at the Council of Chalcedon, but whose metropolitan bishop Nunechius of Laodicea, the capital of the Roman province of Phrygia Pacatiana signed the acts on his behalf. Byzantine period and decline The city's fame and renowned status continued into the Byzantine period, and in 858, it was distinguished as a Metropolitan See. The Byzantines also built the church of St. Michael in the vicinity of Colossae, one of the largest church buildings in the Middle East. Nevertheless, sources suggest that the town |
The Golden Ass, Isis delivers what Ceisiwr Serith calls "essentially a charge of a goddess". This is rather different from the modern version known in Wicca, though they have the same premise, that of the rules given by a great Mother Goddess to her faithful. The Charge of the Goddess is also known under the title Leviter Veslis. This has been identified by the historian Ronald Hutton, cited in an article by Roger Dearnsley, "The Influence of Aleister Crowley on Ye Bok of Ye Art Magical", as a piece of medieval ecclesiastical Latin used to mean "lifting the veil." However, Hutton's interpretation does not reflect the Latin grammar as it currently stands. It may represent Gardner's attempt to write Levetur Velis, which has the literal meaning of "Let the veil be lifted." This expression would, by coincidence or design, grammatically echo the famous fiat lux (Gen. 1:3) of the Latin Vulgate. Origins The earliest known Wiccan version is found in a document dating from the late 1940s, Gerald Gardner's ritual notebook titled Ye Bok of Ye Art Magical. The oldest identifiable source contained in this version is the final line, which is traceable to the 17th-century Centrum Naturae Concentratum of Alipili (or Ali Puli). This version also draws extensively from Charles Godfrey Leland's Aradia, or the Gospel of the Witches (1899) and other modern sources, particularly from the works of Aleister Crowley. It is believed to have been compiled by Gerald Gardner or possibly another member of the New Forest coven. Gardner intended his version to be a theological statement justifying the Gardnerian sequence of initiations. Like the Charge found in Freemasonry, where the charge is a set of instructions read to a candidate standing | of the Heart Girt with the Serpent by Aleister Crowley. 
The charge affirms that all acts of love and pleasure are sacred to the Goddess, e.g.: History Ancient precedents In book eleven, chapter 47 of Apuleius's The Golden Ass, Isis delivers what Ceisiwr Serith calls "essentially a charge of a goddess". This is rather different from the modern version known in Wicca, though they have the same premise, that of the rules given by a great Mother Goddess to her faithful. The Charge of the Goddess is also known under the title Leviter Veslis. This has been identified by the historian Ronald Hutton, cited in an article by Roger Dearnsley, "The Influence of Aleister Crowley on Ye Bok of Ye Art Magical", as a piece of medieval ecclesiastical Latin used to mean "lifting the veil." However, Hutton's interpretation does not reflect the Latin grammar as it currently stands. It may represent Gardner's attempt to write Levetur Velis, which has the literal meaning of "Let the veil be lifted." This expression would, by coincidence or design, grammatically echo the famous fiat lux (Gen. 1:3) of the Latin Vulgate. Origins The earliest known Wiccan version is found in a document dating from the late 1940s, Gerald Gardner's ritual notebook titled Ye Bok of Ye Art Magical. The oldest identifiable source contained in this version is the final line, which is traceable to the 17th-century Centrum Naturae Concentratum of Alipili (or Ali Puli). This version also draws extensively from Charles Godfrey Leland's Aradia, or the Gospel of the Witches (1899) and other modern sources, particularly from the works of Aleister Crowley. It is believed to have been compiled by Gerald Gardner or possibly another member of the New Forest coven. Gardner intended his version to be a theological statement justifying the Gardnerian sequence of initiations. Like the Charge found in Freemasonry, where the charge is a set of instructions read to a candidate standing in a temple, the Charge of the Goddess was intended to be read immediately before an initiation. 
Valiente felt that the influence of Crowley on the Charge was too obvious, and she did not want "the Craft" (a common term for Wicca) associated with Crowley. Gardner invited her to rewrite the Charge. She proceeded to do so, her first version being in verse. The initial verse version by Doreen Valiente consisted of eight verses, the second of which was: Valiente was unhappy with this version, saying that "people seemed to have some difficulty with this, because of the various goddess-names which they found hard to pronounce", and 
box score known containing the name Young came from that season. In that game, Young played first base and had three hits in three at-bats. After the season, Young received an offer to play for the minor league Canton team, which started Young's professional career. Professional baseball career Minor leagues Young began his professional career in 1889 with the Canton, Ohio, team of the Tri-State League, a professional minor league. During his tryout, Young impressed the scouts, recalling years later, "I almost tore the boards off the grandstand with my fast ball." The fences he had splintered with his fastball looked as if a cyclone had hit them, earning Young the nickname "Cyclone". Reporters later shortened it to "Cy", which became the nickname Young used for the rest of his life. During Young's one year with the Canton team, he won 15 games and lost 15. Franchises in the National League, the major professional baseball league at the time, wanted the best players available to them. Therefore, in 1890, Young signed with the Cleveland Spiders, a team which had moved from the American Association to the National League the previous year. Cleveland Spiders On August 6, 1890, in Young's major league debut, he pitched a three-hit 8–1 victory over the Chicago Colts. While Young was on the Spiders, Chief Zimmer was his catcher more often than any other player. Bill James, a baseball statistician, estimated that Young and Zimmer were paired in more games than any other battery in baseball history. Early on, Young established himself as one of the harder-throwing pitchers in the game. Bill James wrote that Zimmer often put a piece of beefsteak inside his baseball glove to protect his catching hand from Young's fastball. In the absence of radar guns, however, it is impossible to say just how hard Young actually threw. Young continued to perform at a high level during the 1890 season. On the last day of the season, Young won both games of a doubleheader. 
In the first weeks of Young's career, Cap Anson, the player-manager of the Chicago Colts, spotted Young's ability. Anson told Spiders manager Gus Schmelz, "He's too green to do your club much good, but I believe if I taught him what I know, I might make a pitcher out of him in a couple of years. He's not worth it now, but I'm willing to give you $1,000 ($ today) for him." Schmelz replied, "Cap, you can keep your thousand and we'll keep the rube." Two years after Young's debut, the National League moved the pitcher's position back by . Since 1881, pitchers had pitched within a "box" whose front line was from home base, and since 1887 they had been compelled to toe the back line of the box when delivering the ball. The back line was away from home. In 1893, was added to the back line, yielding the modern pitching distance of . In the book The Neyer/James Guide to Pitchers, sports journalist Rob Neyer wrote that the speed with which pitchers like Cy Young, Amos Rusie, and Jouett Meekin threw was the impetus that caused the move. The 1892 regular season was a success for Young, who led the National League in wins (36), ERA (1.93), and shutouts (9). As many minor league baseball leagues do today, the National League used a split-season format in 1892. The Boston Beaneaters won the first-half title, and the Spiders won the second-half title, with a best-of-nine series determining the league champion. Despite the Spiders' second-half run, the Beaneaters swept the series, five games to none. Young pitched three complete games in the series, but lost two decisions. He also threw a complete game shutout, but the game ended in a scoreless tie. The Spiders faced the Baltimore Orioles in the Temple Cup, a precursor to the World Series, in 1895. Young won three games in the series and Cleveland won the Cup, four games to one. 
It was around this time that Young added what he called a "slow ball" to his pitching repertoire to reduce stress on his arm. The pitch today is called a changeup. In 1896, Young lost a no-hitter with two outs in the ninth inning when Ed Delahanty of the Philadelphia Phillies hit a single. On September 18, 1897, Young pitched the first no-hitter of his career in a game against the Cincinnati Reds. Although Young did not walk a batter, the Spiders committed four errors while on defense. One of the errors had originally been ruled a hit, but the Cleveland third baseman sent a note to the press box after the eighth inning, saying he had made an error, and the ruling was changed. Young later said that, despite his teammate's gesture, he considered the game to be a one-hitter. Shift to St. Louis Prior to the 1899 season, Frank Robison, the Spiders owner, bought the St. Louis Browns, thus owning two clubs simultaneously. The Browns were renamed the "Perfectos", and restocked with Cleveland talent. Just weeks before the season opener, most of the better Spiders players were transferred to St. Louis, including three future Hall of Famers: Young, Jesse Burkett, and Bobby Wallace. The roster maneuvers failed to create a powerhouse Perfectos team, as St. Louis finished fifth in both 1899 and 1900. Meanwhile, the depleted Spiders lost 134 games, the most in MLB history, before folding. Young spent two years with St. Louis, which is where he found his favorite catcher, Lou Criger. The two men were teammates for a decade. Move to Boston of the American League In 1901, the rival American League declared major league status and set about raiding National League rosters. Young left St. Louis and joined the American League's Boston Americans for a $3,500 contract ($ today). Young would remain with the Boston team until 1909. In his first year in the American League, Young was dominant. 
Pitching to Criger, who had also jumped to Boston, Young led the league in wins, strikeouts, and ERA, thus earning the colloquial AL Triple Crown for pitchers. Young won almost 42% of his team's games in 1901, accounting | Young to face him so that he could repeat his performance against Boston's ace. Three days later, Young pitched a perfect game against Waddell and the Athletics. It was the first perfect game in American League history. Waddell was the 27th and last batter, and when he flied out, Young shouted, "How do you like that, you hayseed?" Waddell had picked an inauspicious time to issue his challenge. Young's perfect game was the centerpiece of a pitching streak. Young set major league records for the most consecutive scoreless innings pitched and the most consecutive innings without allowing a hit; the latter record still stands at 25.1 innings, or 76 hitless batters. Even after he allowed a hit, Young's scoreless streak reached a then-record 45 shutout innings. Before Young, only two pitchers had thrown perfect games. This occurred in 1880, when Lee Richmond and John Montgomery Ward pitched perfect games within five days of each other, although under somewhat different rules: the front edge of the pitcher's box was only from home base (the modern release point is about farther away); walks required eight balls; and pitchers were obliged to throw side-armed. Young's perfect game was the first under the modern rules established in 1893. One year later, on July 4, 1905, Rube Waddell beat Young and the Americans, 4–2, in a 20-inning matchup. Young pitched 13 consecutive scoreless innings before he gave up a pair of unearned runs in the final inning. Young did not walk a batter and was later quoted: "For my part, I think it was the greatest game of ball I ever took part in." In 1907, Young and Waddell faced off in a scoreless 13-inning tie. In 1908, Young pitched the third no-hitter of his career. 
Three months past his 41st birthday, Young was the oldest pitcher to record a no-hitter, a record which stood for 82 years until 43-year-old Nolan Ryan surpassed the feat. Only a walk kept Young from his second perfect game. After that runner was caught stealing, no other batter reached base. At this time, Young was the second-oldest player in either league. In another game one month before his no-hitter, he allowed just one single while facing 28 batters. On August 13, 1908, the league celebrated "Cy Young Day". No American League games were played on that day, and a group of All-Stars from the league's other teams gathered in Boston to play against Young and the Red Sox. When the season ended, he had posted a 1.26 ERA, not only the lowest of his career, but also a major league record as the oldest pitcher with 150 or more innings pitched to post a season ERA under 1.50. Cleveland Naps and retirement Before the 1909 season, Young was traded back to Cleveland, where he had played more than half his career, joining the Cleveland Naps of the American League. The following season, 1910, he won his 500th career game on July 19 against Washington. He split 1911, his final year, between the Naps and the Boston Rustlers. On September 22, 1911, Young shut out the Pittsburgh Pirates, 1–0, for his last career victory. In his final start two weeks later, the last eight batters of Young's career combined to hit a triple, four singles, and three doubles. By the time of his retirement, Young's control had faltered. He had also gained weight. In two of his last three years, he was the oldest player in the league. Career accomplishments Young established numerous pitching records, some of which have stood for over a century. Young compiled 511 wins, the most in major league history and 94 ahead of Walter Johnson, second on the list. At the time of Young's retirement, Pud Galvin had the second most career wins with 364. 
In addition to wins, Young still holds the major league records for most career innings pitched (7,356), most career games started (815), and most complete games (749). He also retired with 316 losses, the most in MLB history. Young's career record for strikeouts was broken by Johnson in 1921. Young's 76 career shutouts are fourth all-time. Young led his league in wins five times (1892, 1895, and 1901–1903), finishing second twice. His career high was 36 in 1892. He won at least 30 games in a season five times. He had 15 seasons with 20 or more wins, two more than the runners-up, Christy Mathewson and Warren Spahn. Young won two ERA titles during his career, in 1892 (1.93) and in 1901 (1.62), and was three times the runner-up. Young's earned run average was below 2.00 six times, but this was not uncommon during the dead-ball era. Although Young threw over 400 innings in each of his first four full seasons, he did not lead his league until 1902. He had 40 or more complete games nine times. Young also led his league in strikeouts twice (with 140 in 1896, and 158 in 1901), and in shutouts seven times. Young led his league in fewest walks per nine innings fourteen times and finished second one season. Only twice in his 22-year career did Young finish lower than 5th in the category. Although the WHIP ratio was not calculated until well after Young's death, Young was the retroactive league leader in this category seven times and was second or third another seven times. Young is tied with Roger Clemens for the most career wins by a Boston Red Sox pitcher. They each won 192 games while with the franchise. In addition, Young pitched three no-hitters, including the third perfect game in baseball history, first in baseball's "modern era". Young also was an above average hitting pitcher in his career. He posted a .210 batting average (623-for-2960) with 325 runs, 18 home runs, 290 RBI and drew 81 bases on balls. 
From 1891 through 1905, he drove in 10 or more runs for 15 straight seasons, with a high of 28 RBI in 1896. Pitching style Particularly after his fastball slowed, Young relied upon his control. He was once quoted as saying, "Some may have thought it was essential to know how to curve a ball before anything else. Experience, to my mind, teaches to the contrary. Any young player who has good control will become a successful curve pitcher long before the pitcher who is endeavoring to master both curves and control at the same time. The curve is merely an accessory to control." In addition to his exceptional control, Young was also a workhorse who avoided injury, owing partly to his ability to pitch in different arm positions (overhand, three-quarters, sidearm and even submarine). For 19 consecutive years, from 1891 through 1909, Young was in his league's top 10 for innings pitched; in 14 of the seasons, he was in the top five. Not until 1900, a decade into his career, did Young pitch two consecutive incomplete games. By habit, Young restricted his practice throws in spring training. "I figured the old arm had just so many throws in it," said Young, "and there wasn't any use wasting them." He once described his approach before a game: I never warmed up ten, fifteen minutes before a game like most pitchers do. I'd loosen up, three, four minutes. Five at the outside. And I never went to the bullpen. Oh, I'd relieve all right, plenty of times, but I went right from the bench to the box, and I'd take a few warm-up pitches and be ready. Then I had good control. I aimed to make the batter hit the ball, and I threw as few pitches as possible. That's why I was able to work every other day. Managerial record * Stepped down to a player only role. Later life In 1910, it was reported that Young was a vegetarian. Beginning in 1912, Young lived and worked on his farm. 
In 1913, he served as manager of the Cleveland Green Sox of the Federal League, which was at the time an outlaw league. However, he never worked in baseball after that. In 1916, he ran for county treasurer in Tuscarawas County, Ohio. Young's wife, Roba, whom he had known since childhood, died in 1933. After she died, Young tried several jobs, and eventually moved in with friends John and Ruth Benedum and did odd jobs for them. Young took part in many baseball events after his retirement. In 1937, 26 years after he retired from baseball, Young was inducted into the Baseball Hall of Fame. He was among the first to donate mementos to the Hall. By 1940, Young's only source of income was stock dividends of $300 per year ($ today). On November 4, 1955, Young died on the Benedums' farm at the age of 88. He was buried in Peoli, Ohio. Legacy Young's career is seen as a bridge from baseball's earliest days to its modern era; he pitched against stars |
justifiable way to write him out while still leaving enough scope for a possible return. The decision was made that Brian should die. Quinten was in Los Angeles when the storyline was decided, and upon his return to the United Kingdom, he was shocked at Brian's fate and threatened to fly back to America so that scenes could not be filmed. He was talked round by co-star Helen Worth, who pointed out that he might be blacklisted by Equity if he quit the programme abruptly. Brian Tilsley's death was broadcast on 15 February 1989. After the breakdown of his marriage to Gail, Brian started spending his evenings going to discos and meeting up with various women. He tried to protect a young lady from a group of thugs outside a nightclub, but was stabbed in the stomach. He died as a result of his injuries. The stabbing brought massive complaints from viewers, and Mary Whitehouse delivered an angry sermon about television violence. Between 1980 and 1989, Coronation Street underwent some of the most radical changes since its launch. By May 1984, William Roache stood as the only original cast member, after the departures of Violet Carson (Ena Sharples) in 1980, Doris Speed (Annie Walker) in 1983, and both Pat Phoenix (Elsie Tanner) and Jack Howarth (Albert Tatlock) in 1984. Albert Tatlock's departure came when his character's off-screen death was announced several months after the death of actor Jack Howarth at the age of 88. While the press predicted the end of Corrie, H. V. Kershaw declared that "There are no stars in Coronation Street." The show had also gained a new rival on Channel 4 with the launch of Brookside, and the BBC was preparing to launch EastEnders, which would first air in February 1985. Writers drew on the show's many archetypes, with established characters stepping into the roles left by the original cast. 
Phyllis Pearce (Jill Summers) was hailed as the new Ena Sharples in 1982, the Duckworths moved into No.9 in 1983 and slipped into the role once held by the Ogdens, while Percy Sugden (Bill Waddington) appeared in 1983 and took over the grumpy war veteran role from Albert Tatlock. The question of who would take over the Rovers Return after Annie Walker's 1983 exit was answered in 1985 when Bet Lynch (who also mirrored the vulnerability and strength of Elsie Tanner) was installed as landlady. In 1983, Shirley Armitage (Lisa Lewis) became the first major black character in her role as machinist at Baldwin's Casuals. Ken Barlow married Deirdre Langton (Anne Kirkbride) on 27 July 1981. The episode was watched by over 15 million viewers – more ITV viewers than the wedding of Prince Charles and Lady Diana two days later. In the 1980s relationships were cemented between established characters: Alf Roberts (Bryan Mosley) married Audrey Potter (Sue Nicholls) in 1985; Kevin Webster (Michael Le Vell) married Sally Seddon (Sally Whittaker) in 1986; Bet Lynch married Alec Gilroy (Roy Barraclough) in 1987; and 1988 saw the marriages of widowed Ivy Tilsley to Don Brennan (Geoffrey Hinsliff), and the long-awaited union of Mavis Riley and Derek Wilton (Peter Baldwin), after over a decade of on-off romances and a failed marriage attempt in 1984. In 1982, the arrival of Channel 4, and its edgy new soap opera Brookside, sparked one of the biggest changes for Coronation Street. Unlike Coronation Street, which had a very nostalgic view of working-class life, Brookside brought together working and middle-class families in a more contemporary environment. The dialogue often included expletives and the stories were more hard-hitting, and of the current Zeitgeist. Whereas stories at this time in Coronation Street were largely about family affairs, Brookside concentrated on social affairs such as industrial action, unemployment, drugs, rape, and the black market. 
The BBC also introduced a new prime time soap opera, EastEnders, in 1985. Like Brookside, EastEnders had a grittier premise than Coronation Street, although unlike Brookside it tended to steer clear of blue language and politicised stories. Both of these shows were quickly well received by the media and viewing public, although they were not without their controversies and critics. While ratings for Coronation Street remained consistent throughout the decade, EastEnders regularly obtained higher viewing figures due to its omnibus episodes shown at weekends. The Coronation Street episode broadcast on 2 January 1985 attracted 21.40 million viewers, making it the most-watched episode in the show's history based on a single showing. Subsequent episodes would achieve higher figures when the original broadcast and omnibus edition figures were combined. With prime time competition, Corrie was again seen as old fashioned; the 'normal' Clayton family, introduced in 1985, proved a failure with viewers and was written out the following year. Between 1988 and 1989, many aspects of the show were modernised by new producer David Liddiment. A new exterior set had been built in 1982, and in 1989 it was redeveloped to include new houses and shops. Production techniques also changed: a new studio was built, more location filming was included, and location shooting moved from film to videotape in 1988. In response to the new competitive pressures, a third weekly episode was introduced on 20 October 1989, broadcast each Friday at 7:30 pm. The 1980s featured some of the most prominent storylines in the programme's history, such as Deirdre Barlow's affair with Mike Baldwin (Johnny Briggs) in 1983, the first soap storyline to receive widespread media attention. The feud between Ken Barlow and Mike Baldwin would continue for many years, with Mike even marrying Ken's daughter, Susan (Wendy Jane Walker). In 1986, there was a fire at the Rovers Return.
The episode that aired on Christmas Day 1987 attracted a combined audience (original and omnibus) of 26.65 million – a figure helped by the fact that this episode heralded the departure of the immensely popular character Hilda Ogden (Jean Alexander). Between 1986 and 1989, the story of Rita Fairclough's (Barbara Knox) psychological abuse at the hands of Alan Bradley (Mark Eden), and then his subsequent death under the wheels of a Blackpool tram in December 1989, was played out. This storyline gave the show the highest combined viewing figure in its history, 26.93 million, for the episode broadcast on 15 March 1989 (and repeated on 19 March), in which Alan is hiding from the police after trying to kill Rita in the previous episode. This rating is sometimes incorrectly credited to the 8 December 1989 tram death episode. Other stories included the birth of Nicky Tilsley (Warren Jackson) in 1980, Elsie Tanner's departure and Stan Ogden's funeral in 1984, the birth of Sarah-Louise Tilsley (Lynsay King) in 1987, and Brian Tilsley's murder in 1989. The 1980s saw further new and mostly younger characters introduced, including Terry Duckworth (Nigel Pivaro), Curly Watts (Kevin Kennedy), Martin Platt (Sean Wilson), Reg Holdsworth (Ken Morley), and the McDonald family; one of them, Steve McDonald, was played by Simon Gregson, who joined the show a week after his 15th birthday and has been on it ever since. His parents Jim (Charles Lawson) and Liz (Beverley Callard) have made several departures and comebacks since debuting in 1989.

1990s

In spite of updated sets and production changes, Coronation Street still received criticism. In 1992, the chairman of the Broadcasting Standards Council, Lord Rees-Mogg, criticised the low representation of ethnic minorities, and the programme's portrayal of the cosy familiarity of a bygone era, particularly as many comparable neighbourhoods in the real-life Greater Manchester area had a significant percentage of black and Asian residents.
Some newspapers ran headlines such as "Coronation Street shuts out blacks" (The Times) and "Put colour in t'Street" (Daily Mirror). Patrick Stoddart of The Times wrote: "The millions who watch Coronation Street – and who will continue to do so despite Lord Rees-Mogg – know real life when they see it ... in the most confident and accomplished soap opera television has ever seen". Black and Asian characters had appeared from time to time over the years, but it was not until 1999 that the show featured its first regular non-white family, the Desais. New characters Des (Philip Middlemiss) and Steph Barnes (Amelia Bullmore) moved into one of the new houses in 1990 and were dubbed yuppies by the media. Raquel Wolstenhulme (Sarah Lancashire) first appeared at the beginning of 1991 and went on to become one of the most popular characters of the era until her departure in 1996, followed by a brief comeback three years later. The McDonald family were developed, and the fiery relationships between Liz (Beverley Callard), Jim (Charles Lawson), Steve (Simon Gregson) and Andy (Nicholas Cochrane) interested viewers. Other newcomers were wheelchair user and pensioner Maud Grimes (Elizabeth Bradley), middle-aged cafe owner Roy Cropper (David Neilson), young married couple Gary and Judy Mallett (Ian Mercer and Gaynor Faye), as well as middle-aged butcher Fred Elliott (John Savident) and his son Ashley Peacock (Steven Arnold). The amount of slapstick and physical humour in storylines increased during the 1990s, with comical characters such as supermarket manager Reg Holdsworth (Ken Morley) and his water bed.
In the early 1990s storylines included the death of newborn Katie McDonald in January 1992, Mike Baldwin's (Johnny Briggs) wedding to Alma Sedgewick (Amanda Barrie) later that year, Tommy Duckworth being sold by his father Terry (Nigel Pivaro) in 1993, Deirdre Barlow's (Anne Kirkbride) marriage to Moroccan Samir Rachid (Al Nedjari), and the rise of Tanya Pooley (Eva Pope) between 1993 and 1994. In 1995, Julie Goodyear (Bet Lynch) left the show, 29 years after her first appearance and 25 years after becoming a regular cast member. She made brief re-appearances in 2002 and 2003. In 1997, Brian Park took over as producer, with the idea of promoting young characters as opposed to the older cast. On his first day, he cut the characters of Derek Wilton (Peter Baldwin), Don Brennan (Geoffrey Hinsliff), Percy Sugden (Bill Waddington), Bill Webster (Peter Armitage), Billy Williams (Frank Mills) and Maureen Holdsworth (Sherrie Hewson). Thelma Barlow, who played Derek's wife Mavis, was angered by the firing of her co-star and resigned. The production team lost some of its key writers when Barry Hill, Adele Rose and Julian Roach all resigned as well. In line with Park's suggestion, younger characters were introduced during 1997 and 1998. A teenage Nick Tilsley was recast, played by Adam Rickitt following the departure of original actor Warren Jackson, single mother Zoe Tattersall (Joanne Froggatt) first appeared, and the problem Battersby family moved into No.5. Storylines focussed on tackling 'issues', such as drug dealers, eco-warriors, religious cults, and a transsexual woman. Park quit in 1998, after deciding that he had done what he intended to do; he maintained that his biggest achievement was the introduction of Hayley Patterson (Julie Hesmondhalgh), the first transsexual character in a British soap. The character married Roy Cropper soon after her arrival. Some viewers were alienated by the new Coronation Street, and sections of the media voiced their disapproval. 
Having previously been criticised for being out of touch, Corrie now struggled to emulate the more modern Brookside and EastEnders. In the Daily Mirror, Victor Lewis-Smith wrote: "Apparently it doesn't matter that this is a first-class soap opera, superbly scripted and flawlessly performed by a seasoned repertory company." One of Coronation Street's best-known storylines took place in March and April 1998, with Deirdre Rachid (Anne Kirkbride) being wrongfully imprisoned after a relationship with con-man Jon Lindsay (Owen Aaronovitch). The episode in which Deirdre was sent to prison had an audience of 19 million viewers, and 'Free the Weatherfield One' campaigns sprang up in a media frenzy. The then Prime Minister, Tony Blair, even passed comment on Deirdre's sentencing in Parliament. Deirdre was freed after three weeks, with Granada stating that they had always intended for her to be released, in spite of the media interest.

2000s

On 8 December 2000, the show celebrated its 40th anniversary by broadcasting a live, hour-long episode. The Prince of Wales appeared as himself in an ITV News bulletin report. Earlier in the year, 13-year-old Sarah-Louise Platt (Tina O'Brien) had become pregnant and given birth to a baby girl, Bethany, on 4 June. The February episode in which Gail was told of her daughter's pregnancy was watched by 15 million viewers. From 1999 to 2001, issue-led storylines were introduced, such as Toyah Battersby's (Georgia Taylor) rape, Roy and Hayley Cropper (David Neilson and Julie Hesmondhalgh) abducting their foster child, Sarah Platt's Internet chat room abduction, and Alma Halliwell's (Amanda Barrie) death from cervical cancer. Such storylines were unpopular with viewers and ratings dropped; in October 2001, producer Jane Macnaught was abruptly moved to another Granada department and Carolyn Reynolds took over.
In 2002, Kieran Roberts was appointed as producer and aimed to re-introduce "gentle storylines and humour", after deciding that the Street should not try to compete with other soaps. In July 2002, Gail married Richard Hillman (Brian Capron), a recently introduced financial advisor who had already left Duggie Ferguson (John Bowe) to die after a fall from a set of ladders during an argument, and murdered his ex-wife Patricia (Annabelle Apsion), before going on to kill neighbour Maxine Peacock (Tracy Shaw) and attempt to kill both his mother-in-law Audrey Roberts (Sue Nicholls) and her longtime friend Emily Bishop (Eileen Derbyshire), for financial gain due to his mounting debts. After confessing his crimes to Gail in a two-hander episode in February 2003, Hillman left the street for two weeks before returning with the intent of killing himself as well as Gail, her children Sarah and David (Jack P. Shepherd), and grand-daughter Bethany by driving them into a canal – the Platt family survived, while Richard drowned. This came just months after Sarah had survived serious injuries as a passenger in a stolen car which crashed. The storyline received wide press attention, and viewing figures peaked at 19.4 million, with Hillman dubbed a "serial killer" by the media. Todd Grimshaw (Bruno Langley) became Corrie's first regular homosexual character. In 2003, another gay male character was introduced, Sean Tully (Antony Cotton). Other storylines of the decade included the bigamy of Peter Barlow (Chris Gascoyne) and his addiction to alcohol, Maya Sharma's (Sasha Behar) revenge on former lover Dev Alahan (Jimmi Harkishin), Charlie Stubbs's (Bill Ward) psychological abuse of Shelley Unwin (Sally Lindsay), and the deaths of Mike Baldwin (Johnny Briggs), Vera Duckworth (Liz Dawn) and Fred Elliott (John Savident). In 2007, Tracy Barlow (Kate Ford) murdered Charlie Stubbs, claiming it was self-defence; the audience during this storyline peaked at 13.3 million.
At the 2007 British Soap Awards, it won Best Storyline, and Ford was voted Best Actress for her portrayal. Other storylines included Leanne Battersby (Jane Danson) becoming a prostitute and the show's first bisexual love triangle, between Michelle Connor (Kym Marsh), Sonny Dhillon (Pal Aron) and Sean Tully (Antony Cotton). In July 2007, after 34 years in the role of Vera Duckworth, Liz Dawn left the show due to ill health. After conversations between Dawn and producers Kieran Roberts and Steve Frost, the decision was made to kill Vera off. In January 2008, shortly before the couple planned to retire to Blackpool, Vera's husband Jack (William Tarmey) found that she had died in her armchair. Tina O'Brien revealed in the British press on 4 April 2007 that she would be leaving Coronation Street later in the year. Sarah-Louise, who was involved in some of the decade's most controversial stories, left in December 2007 with her daughter, Bethany. In 2008, Michelle learned that Ryan (Ben Thompson) was not her biological son, having been accidentally swapped at birth with Alex Neeson (Dario Coates). Carla Connor (Alison King) turned to Liam for comfort and developed feelings for him. In spite of knowing about her feelings, Liam married Maria Sutherland (Samia Longchambon). Maria and Liam's baby son was stillborn in April, and during an estrangement from Maria following the death of their baby, Liam had a one-night stand with Carla, a story which helped pave the way for his departure. Gail Platt's (Helen Worth) son David (Jack P. Shepherd) pushed her down the stairs. Enraged that Gail refused to press charges, David vandalised the Street and was sent to a young offenders' facility for several months. In May 2008, Gail finally met Ted Page (Michael Byrne), the father she had never known, and in 2009, Gail's boyfriend Joe McIntyre (Reece Dinsdale) became addicted to painkillers, which came to a head when he broke into the medical centre.
In August 2008, Jed Stone (Kenneth Cope) returned after 42 years. Liam Connor and his ex-sister-in-law Carla gave in to their feelings for each other and began an affair. Carla's fiancé Tony Gordon (Gray O'Brien) discovered the affair and had Liam killed in a hit-and-run in October. Carla struggled to come to terms with Liam's death, but decided she still loved Tony and married him on 3 December, in an episode attracting 10.3 million viewers. In April 2009 it was revealed that Eileen Grimshaw's (Sue Cleaver) father, Colin (Edward de Souza) – the son of Elsie Tanner's (Pat Phoenix) cousin Arnley – had slept with Eileen's old classmate, Paula Carp (Sharon Duce), while she was still at school, and that Paula's daughter Julie (Katy Cavanagh) was in fact also Colin's daughter. Other stories in 2009 included Maria giving birth to Liam's son and her subsequent relationship with Liam's killer Tony, Steve McDonald's (Simon Gregson) marriage to Becky Granger (Katherine Kelly), and Kevin Webster's (Michael Le Vell) affair with Molly Dobbs (Vicky Binns). On Christmas Day 2009, Sally Webster (Sally Dynevor) told husband Kevin that she had breast cancer, just as he was about to leave her for lover Molly.

2010s

The show began broadcasting in high definition in May 2010, and on 17 September that year, Coronation Street entered Guinness World Records as the world's longest-running television soap opera after the American soap opera As the World Turns concluded. William Roache was listed as the world's longest-serving soap actor. Coronation Street's 50th anniversary week was celebrated with seven episodes, plus a special one-hour live episode, broadcast from 6–10 December. The episodes averaged 14 million viewers, a 52.1% share of the audience. The anniversary was also publicised with ITV specials and news broadcasts. In the storyline, Nick Tilsley and Leanne Battersby's bar — The Joinery — exploded during Peter Barlow's stag party.
As a result, the viaduct was destroyed, sending a Metrolink tram careering onto the street and destroying D&S Alahan's Corner Shop and The Kabin. Two characters, Ashley Peacock (Steven Arnold) and Molly Dobbs (Vicky Binns), along with an unknown taxi driver, were killed in the disaster. Rita Sullivan (Barbara Knox) survived, despite being trapped under the rubble of her destroyed shop. Fiz Stape (Jennie McAlpine) prematurely gave birth to a baby girl, Hope. The episode of EastEnders broadcast on the same day as Coronation Street's 50th anniversary episode included a tribute, with the character Dot Branning (June Brown, who briefly appeared in the show during the 1970s) saying that she never misses an episode of Coronation Street.

2020s

On Friday 7 February 2020, with its 60th anniversary less than a year away, Coronation Street aired its landmark 10,000th episode, the runtime of which was extended to 60 minutes. Producers stated that the episode would contain "a nostalgic trip down memory lane" and "a nod to its own past". A month later, ITV announced that production on the soap would have to be suspended, as the United Kingdom was put into a national lockdown due to the COVID-19 pandemic (see impact of the COVID-19 pandemic on television). After an 11-week intermission for all cast and crew members, filming resumed in June 2020. The episodes would feature social distancing to adhere to the guidelines set by the British government, and it was confirmed that all actors over 70, as well as those with underlying health conditions, would not be allowed on set until it was safe. This included Coronation Street veterans William Roache (Ken Barlow) at 88, Barbara Knox (Rita Tanner) at 87, Malcolm Hebden (Norris Cole) at 80 and Sue Nicholls (Audrey Roberts) at 76. Maureen Lipman (Evelyn Plummer) and David Neilson (Roy Cropper) returned to set slightly earlier, at 73 and 71 respectively, once it was deemed safe to do so.
By December all cast members had returned to set, and on Wednesday 9 December 2020 the soap celebrated its 60th anniversary, with original plans for the episode forced to change due to COVID-19 guidelines. The anniversary week saw the conclusion of a long-running coercive control storyline that began in May 2019, with Geoff Metcalfe (Ian Bartholomew) abusing Yasmeen Nazir (Shelley King). The showdown, which resulted in Geoff's death, was filmed under relaxed social distancing rules on the condition that the crew members involved formed a social bubble prior to filming. Series producer Iain MacLeod later announced that the original plans for the 60th anniversary would instead be realised in a special week of episodes in October 2021. On 12 October 2021, it was announced that Coronation Street would partake in a special crossover event involving multiple British soaps to promote the topic of climate change ahead of the 2021 United Nations Climate Change Conference. During the week beginning 1 November, social media clips featuring Liam Cavanagh and Amelia Spencer from Emmerdale, as well as Daniel Granger from Doctors, were featured on the programme, while events from Holby City were also referenced. A similar clip featuring Maria Connor was also featured on EastEnders. On 24 January 2022, ITV announced that as part of an overhaul of their evening programming, Coronation Street would permanently air as three 60-minute episodes per week from March 2022 onwards.

Characters

Since 1960, Coronation Street has featured many characters whose popularity with viewers and critics has differed greatly. The original cast was created by Tony Warren, with the characters of Ena Sharples (Violet Carson), Elsie Tanner (Pat Phoenix) and Annie Walker (Doris Speed) as central figures. These three women remained with the show for at least 20 years, and became archetypes of British soap opera, often being emulated by other serials.
Ena was the street's busybody, battle-axe and self-proclaimed moral voice. Elsie was the tart with a heart, who was constantly hurt by men in her search for true love. Annie Walker, landlady of the Rovers Return Inn, had delusions of grandeur and saw herself as better than the other residents. Coronation Street became known for the portrayal of strong female characters, including original cast characters like Ena, Annie and Elsie, and later Hilda Ogden (Jean Alexander), who first appeared in 1964; all four became household names during the 1960s. Warren's programme was largely matriarchal, which some commentators put down to the female-dominated environment in which he grew up. Consequently, the show has a long tradition of psychologically abused husbands, most famously Stan Ogden (Bernard Youens) and Jack Duckworth (Bill Tarmey), husbands of Hilda and Vera Duckworth (Liz Dawn), respectively. Coronation Street's longest-serving character, Ken Barlow (William Roache), entered the storyline as a young radical, reflecting the youth of 1960s Britain, where figures like the Beatles, the Rolling Stones and the model Twiggy were to reshape the concept of youthful rebellion. Though the rest of the original Barlow family were killed off before the end of the 1970s, Ken, who for 27 years was the only character from the first episode remaining, has remained the constant link throughout the entire series. In 2011, Dennis Tanner (Philip Lowrie), another character from the first episode, returned to Coronation Street after a 43-year absence. Since 1984, Ken Barlow has been the show's only remaining original character. Emily Bishop (Eileen Derbyshire) had appeared in the series since January 1961, when the show was just weeks old, and was the show's longest-serving female character before she departed in January 2016 after 55 years. Rita Tanner (Barbara Knox) appeared on the show for one episode in December 1964, before returning as a full-time cast member in January 1972.
She is currently the second longest-serving cast member on the show. Roache and Knox are also the two oldest working cast members on the soap, at 89 and 88 years old respectively. Stan and Hilda Ogden were introduced in 1964, with Hilda becoming one of the most famous British soap opera characters of all time. In a 1982 poll, she was voted the fourth-most recognisable woman in Britain, after Queen Elizabeth The Queen Mother, Queen Elizabeth II and Diana, Princess of Wales. Hilda's best-known attributes were her pinny, hair curlers, and the "muriel" in her living room with three "flying" duck ornaments. Hilda Ogden's departure on Christmas Day 1987 remains one of the highest-rated episodes of Coronation Street ever, with a combined audience of nearly 27 million viewers. Stan Ogden had been killed off in 1984 following the death of actor Bernard Youens, after a long illness which had restricted his appearances towards the end. Bet Lynch (Julie Goodyear) first appeared in 1966, before becoming a regular in 1970, and went on to become one of the most famous Corrie characters. Bet stood as the central character of the show from 1985 until departing in 1995, often being dubbed "Queen of the Street" by the media, and indeed herself. The character briefly returned in June 2002. Coronation Street and its characters often rely heavily on archetypes, with the characterisation of some of its current and recent cast based loosely on former characters. Phyllis Pearce (Jill Summers), Blanche Hunt (Maggie Jones) and Sylvia Goodwin (Stephanie Cole) embodied the role of the acid-tongued busybody originally held by Ena, Sally Webster (Sally Dynevor) has grown snobbish, like Annie, and a number of the programme's female characters, such as Carla Connor (Alison King), mirror the vulnerability of Elsie and Bet.
Other recurring archetypes include the war veteran, such as Albert Tatlock (Jack Howarth), Percy Sugden (Bill Waddington) and Gary Windass (Mikey North); the bumbling retail manager, like Leonard Swindley (Arthur Lowe), Reg Holdsworth (Ken Morley) and Norris Cole (Malcolm Hebden); quick-tempered, tough tradesmen like Len Fairclough (Peter Adamson), Jim McDonald (Charles Lawson), Tommy Harris (Thomas Craig) and Owen Armstrong (Ian Puleston-Davies); and the perennial losers, such as Stan and Hilda, Jack and Vera, Les Battersby (Bruce Jones), Beth Tinker (Lisa George) and Kirk Sutherland (Andrew Whyment). Villains are also common character types, such as Tracy Barlow (Kate Ford), Alan Bradley (Mark Eden), Jenny Bradley (Sally Ann Matthews), Rob Donovan (Marc Baylis), Frank Foster (Andrew Lancel), Tony Gordon (Gray O'Brien), Caz Hammond (Rhea Bailey), Richard Hillman (Brian Capron), Greg Kelly (Stephen Billington), Will Chatterton (Leon Ockenden), Nathan Curtis (Christopher Harper), Callum Logan (Sean Ward), Karl Munro (John Michie), Pat Phelan (Connor McIntyre), David Platt (Jack P. Shepherd), Maya Sharma (Sasha Behar), Kirsty Soames (Natalie Gumede), John Stape (Graeme Hawley), Geoff Metcalfe (Ian Bartholomew) and Gary Windass (Mikey North). The show's former archivist and scriptwriter Daran Little disagreed with the characterisation of the show as a collection of stereotypes: "Rather, remember that Elsie, Ena and others were the first of their kind ever seen on British television. If later characters are stereotypes, it's because they are from the same original mould. It is the hundreds of programmes that have followed which have copied Coronation Street."

Production

Broadcast format

Between 9 December 1960 and 3 March 1961, Coronation Street was broadcast twice weekly, on Wednesday and Friday. During this period, the Friday episode was broadcast live, with the Wednesday episode being pre-recorded 15 minutes later.
When the programme went fully networked on 6 March 1961, broadcast days changed to Monday and Wednesday. The last regular episode to be shown live was broadcast on 3 February 1961. The series was transmitted in black and white for the majority of the 1960s. Preparations were made to film episode 923, to be transmitted on Wednesday 29 October 1969, in colour. This instalment featured the street's residents on a coach trip to the Lake District. In the end, suitable colour film stock for the cameras could not be found and the footage was shot in black and white. The following episode, transmitted on Monday 3 November, was videotaped in colour but featured black and white film inserts and title sequence. Like BBC1, the ITV network was officially broadcast in black and white at this point (though programmes were actually broadcast in colour as early as July that year for colour transmission testing and adjustment), so the episode was seen by most in black and white. The ITV network, like BBC1, began full colour transmissions on 15 November 1969. Daran Little, for many years the official programme archivist, claims that the first episode to be transmitted in colour was episode 930, shown on 24 November 1969. In October 1970 a technicians' dispute turned into a work-to-rule when sound staff were denied a pay rise given to camera staff the year before for working with colour recording equipment. Under the terms of the work-to-rule, staff refused to work with the new equipment (though the old black and white equipment had been disposed of by then), and therefore programmes were recorded and transmitted in black and white, including Coronation Street. The dispute was resolved in early 1971 and the last black and white episode was broadcast on 10 February 1971, although the episodes transmitted on 22 and 24 February 1971 contained black and white location inserts. Episode 5191, originally broadcast on 7 January 2002, was the first to be broadcast in 16:9 widescreen format.
Coronation Street was the last UK-wide soap to make the switch to 16:9 (Take the High Road remained in 4:3 until it finished in 2003). From 22 March 2010, Coronation Street was produced in 1080/50i for transmission on HDTV platforms on ITV HD. The first transmission in this format was episode 7351 on 31 May 2010, with a new set of titles and a re-recorded theme tune. On 26 May 2010 ITV previewed the new HD titles on the Coronation Street website. Due to copyright reasons, only viewers residing in the UK could see them on the ITV site.

Production staff

Coronation Street's creator, Tony Warren, wrote the first 13 episodes of the programme in 1960, and continued to write for the programme intermittently until 1976. He later became a novelist, but retained links with Coronation Street. Warren died in 2016. Harry Kershaw was the script editor for Coronation Street when the programme began in 1960, working alongside Tony Warren. Kershaw was also a scriptwriter for the programme and the show's producer between 1962 and 1971. He and John Finch remain the only people to have held all three posts of script editor, writer and producer. Adele Rose was Coronation Street's first female writer and the show's longest-serving writer, completing 455 scripts between 1961 and 1998. She also created Byker Grove, and won a BAFTA award in 1993 for her work on the show. Bill Podmore was the show's longest-serving producer; by the time he stepped down in 1988 he had completed 13 years at the production helm. Nicknamed the "godfather" by the tabloid press, he was renowned for his tough, uncompromising style and was feared by crew and cast alike. He is known for sacking Peter Adamson, the show's Len Fairclough, in 1983. Iain MacLeod is the current series producer. Michael Apted, best known for the Up series of documentaries, was a director on the programme in the early 1960s. This period of his career marked the first of his many collaborations with writer Jack Rosenthal.
Rosenthal, noted for such television plays as Bar Mitzvah Boy, began his career on the show, writing over 150 episodes between 1961 and 1969. Paul Abbott was a story editor on the programme in the 1980s and began writing episodes in 1989, but left in 1993 to produce Cracker, for which he later wrote, before creating his own dramas such as Touching Evil and Shameless. Russell T Davies was briefly a storyliner on the programme in the mid-1990s, also writing the script for the direct-to-video special "Viva Las Vegas!" He, too, has become a noted writer of his own high-profile television drama programmes, including Queer as Folk and the 2005 revival of Doctor Who. Jimmy McGovern also wrote some episodes.

Theme music

The show's theme music, a cornet piece accompanied by a brass band plus clarinet and double bass, reminiscent of northern band music, was written by Eric Spear. The original theme tune was called Lancashire Blues, and Spear was paid a £6 commission in 1960 to write it. The identity of the trumpeter was not public knowledge until 1994, when jazz musician and journalist Ron Simmonds revealed that it was the Surrey musician Ronnie Hunt. He added, "an attempt was made in later years to re-record that solo, using Stan Roderick, but it sounded too good, and they reverted to the old one." In 2004, the Manchester Evening News published a contradictory story that a young musician from Wilmslow called David Browning played the trumpet on both the original recording of the theme in 1960 and a re-recording in 1964, for a one-off payment of £36. A new, completely re-recorded version of the theme tune replaced the original when the series started broadcasting in HD on 31 May 2010. It accompanied a new montage-style credits sequence featuring images of Manchester and Weatherfield. A reggae version of the theme tune was recorded by The I-Royals and released by Media Marvels and WEA in 1983.
On 31 March 2017, a video on the show's official YouTube channel claimed that some of the soap's cast would sing specially written lyrics added to a new version of the theme tune, to be played from the first episode on the evening of Monday 3 April 2017; it turned out to be an April Fool's joke.

Viewing figures

Episodes in the 1960s, 70s and 80s regularly attracted between 18 and 21 million viewers, and during the 1990s and early 2000s, 14 to 16 million per episode was typical.
Albert Tatlock's departure came when his character's off-screen death was announced several months after the death of actor Jack Howarth at the age of 88. While the press predicted the end of Corrie, H. V. Kershaw declared that "There are no stars in Coronation Street." The show had also gained a new rival on Channel 4 with the launch of Brookside, and the BBC was preparing to launch EastEnders, which would first air in February 1985. Writers drew on the show's many archetypes, with established characters stepping into the roles left by the original cast. Phyllis Pearce (Jill Summers) was hailed as the new Ena Sharples in 1982, the Duckworths moved into No.9 in 1983 and slipped into the role once held by the Ogdens, while Percy Sugden (Bill Waddington) appeared in 1983 and took over the grumpy war veteran role from Albert Tatlock. The question of who would take over the Rovers Return after Annie Walker's 1983 exit was answered in 1985 when Bet Lynch (who also mirrored the vulnerability and strength of Elsie Tanner) was installed as landlady. In 1983, Shirley Armitage (Lisa Lewis) became the first major black character in her role as machinist at Baldwin's Casuals. Ken Barlow married Deirdre Langton (Anne Kirkbride) on 27 July 1981. The episode was watched by over 15 million viewers – more ITV viewers than the wedding of Prince Charles and Lady Diana two days later. In the 1980s, relationships were cemented between established characters: Alf Roberts (Bryan Mosley) married Audrey Potter (Sue Nicholls) in 1985; Kevin Webster (Michael Le Vell) married Sally Seddon (Sally Whittaker) in 1986; Bet Lynch married Alec Gilroy (Roy Barraclough) in 1987; and 1988 saw the marriages of widowed Ivy Tilsley to Don Brennan (Geoffrey Hinsliff), and the long-awaited union of Mavis Riley and Derek Wilton (Peter Baldwin), after over a decade of on-off romances and a failed marriage attempt in 1984.
In 1982, the arrival of Channel 4, and its edgy new soap opera Brookside, sparked one of the biggest changes for Coronation Street. Unlike Coronation Street, which had a very nostalgic view of working-class life, Brookside brought together working and middle-class families in a more contemporary environment. The dialogue often included expletives, and the stories were more hard-hitting and in tune with the current zeitgeist. Whereas stories at this time in Coronation Street were largely about family affairs, Brookside concentrated on social affairs such as industrial action, unemployment, drugs, rape, and the black market. The BBC also introduced a new prime-time soap opera, EastEnders, in 1985. Like Brookside, EastEnders had a grittier premise than Coronation Street, although unlike Brookside it tended to steer clear of blue language and politicised stories. Both of these shows were quickly well received by the media and viewing public, although they were not without their controversies and critics. While ratings for Coronation Street remained consistent throughout the decade, EastEnders regularly obtained higher viewing figures due to its omnibus episodes shown at weekends. The Coronation Street episode broadcast on 2 January 1985 attracted 21.40 million viewers, making it the most-watched episode in the show's history based on a single showing. Subsequent episodes would achieve higher figures when the original broadcast and omnibus edition figures were combined. With prime-time competition, Corrie was again seen as old-fashioned; the 'normal' Clayton family, introduced in 1985, proved a failure with viewers and was written out the following year. Between 1988 and 1989, many aspects of the show were modernised by new producer David Liddiment. A new exterior set had been built in 1982, and in 1989 it was redeveloped to include new houses and shops.
Production techniques were also changed, with a new studio built and more location filming included, the show having moved from being shot on film to videotape in 1988. Due to new pressures, a third weekly episode was introduced on 20 October 1989, broadcast each Friday at 7:30 pm. The 1980s featured some of the most prominent storylines in the programme's history, such as Deirdre Barlow's affair with Mike Baldwin (Johnny Briggs) in 1983, the first soap storyline to receive widespread media attention. The feud between Ken Barlow and Mike Baldwin would continue for many years, with Mike even marrying Ken's daughter, Susan (Wendy Jane Walker). In 1986, there was a fire at the Rovers Return. The episode that aired on Christmas Day 1987 attracted a combined audience (original and omnibus) of 26.65 million – a figure helped by the fact that this episode heralded the departure of the immensely popular character Hilda Ogden (Jean Alexander). Between 1986 and 1989, the story of Rita Fairclough's (Barbara Knox) psychological abuse at the hands of Alan Bradley (Mark Eden), and then his subsequent death under the wheels of a Blackpool tram in December 1989, was played out. This storyline gave the show the highest combined viewing figure in its history – 26.93 million for the episode that aired on 15 (and 19) March 1989, in which Alan hides from the police after trying to kill Rita in the previous episode. This rating is sometimes incorrectly credited to the 8 December 1989 tram death episode. Other stories included the birth of Nicky Tilsley (Warren Jackson) in 1980, Elsie Tanner's departure and Stan Ogden's funeral in 1984, the birth of Sarah-Louise Tilsley (Lynsay King) in 1987, and Brian Tilsley's murder in 1989.
The 1980s saw further new and mostly younger characters introduced, including Terry Duckworth (Nigel Pivaro), Curly Watts (Kevin Kennedy), Martin Platt (Sean Wilson), Reg Holdsworth (Ken Morley), and the McDonald family; one of whom, Simon Gregson, started on the show as Steve McDonald a week after his 15th birthday, and has been on the show ever since. His parents Jim (Charles Lawson) and Liz (Beverley Callard) have made several departures and comebacks since debuting in 1989. 1990s In spite of updated sets and production changes, Coronation Street still received criticism. In 1992, the chairman of the Broadcasting Standards Council, Lord Rees-Mogg, criticised the low representation of ethnic minorities, and the programme's portrayal of the cosy familiarity of a bygone era, particularly as many comparable neighbourhoods in the real-life Greater Manchester area had a significant percentage of black and Asian residents. Some newspapers ran headlines such as "Coronation Street shuts out blacks" (The Times), and "'Put colour in t'Street" (Daily Mirror). Patrick Stoddart of The Times wrote: "The millions who watch Coronation Street – and who will continue to do so despite Lord Rees-Mogg – know real life when they see it ... in the most confident and accomplished soap opera television has ever seen". Black and Asian characters had appeared from time to time over the years, but it was not until 1999 that the show featured its first regular non-white family, the Desai family. New characters Des (Philip Middlemiss) and Steph Barnes (Amelia Bullmore) moved into one of the new houses in 1990, being dubbed yuppies by the media. Raquel Wolstenhulme (Sarah Lancashire) first appeared at the beginning of 1991 and went on to become one of the most popular characters of the era until her departure in 1996, followed by a brief comeback three years later.
The McDonald family were developed and the fiery relationships between Liz (Beverley Callard), Jim (Charles Lawson), Steve (Simon Gregson) and Andy (Nicholas Cochrane) interested viewers. Other newcomers were wheelchair user and pensioner Maud Grimes (Elizabeth Bradley), middle-aged cafe owner Roy Cropper (David Neilson), young married couple Gary and Judy Mallett (Ian Mercer and Gaynor Faye), as well as middle-aged butcher Fred Elliott (John Savident) and his son Ashley Peacock (Steven Arnold). The amount of slapstick and physical humour in storylines increased during the 1990s, with comical characters such as supermarket manager Reg Holdsworth (Ken Morley) and his water bed. In the early 1990s, storylines included the death of newborn Katie McDonald in January 1992, Mike Baldwin's (Johnny Briggs) wedding to Alma Sedgewick (Amanda Barrie) later that year, Tommy Duckworth being sold by his father Terry (Nigel Pivaro) in 1993, Deirdre Barlow's (Anne Kirkbride) marriage to Moroccan Samir Rachid (Al Nedjari), and the rise of Tanya Pooley (Eva Pope) between 1993 and 1994. In 1995, Julie Goodyear (Bet Lynch) left the show, 29 years after her first appearance and 25 years after becoming a regular cast member. She made brief re-appearances in 2002 and 2003. In 1997, Brian Park took over as producer, with the idea of promoting young characters as opposed to the older cast. On his first day, he cut the characters of Derek Wilton (Peter Baldwin), Don Brennan (Geoffrey Hinsliff), Percy Sugden (Bill Waddington), Bill Webster (Peter Armitage), Billy Williams (Frank Mills) and Maureen Holdsworth (Sherrie Hewson). Thelma Barlow, who played Derek's wife Mavis, was angered by the firing of her co-star and resigned. The production team lost some of its key writers when Barry Hill, Adele Rose and Julian Roach all resigned as well. In line with Park's suggestion, younger characters were introduced during 1997 and 1998.
A teenage Nick Tilsley was recast, played by Adam Rickitt following the departure of original actor Warren Jackson; single mother Zoe Tattersall (Joanne Froggatt) first appeared, and the problem Battersby family moved into No.5. Storylines focussed on tackling 'issues', such as drug dealers, eco-warriors, religious cults, and a transsexual woman. Park quit in 1998, after deciding that he had done what he intended to do; he maintained that his biggest achievement was the introduction of Hayley Patterson (Julie Hesmondhalgh), the first transsexual character in a British soap. The character married Roy Cropper soon after her arrival. Some viewers were alienated by the new Coronation Street, and sections of the media voiced their disapproval. Having received criticism for being too out of touch, Corrie now struggled to emulate the more modern Brookside and EastEnders. In the Daily Mirror, Victor Lewis-Smith wrote: "Apparently it doesn't matter that this is a first-class soap opera, superbly scripted and flawlessly performed by a seasoned repertory company." One of Coronation Street's best-known storylines took place in March/April 1998, with Deirdre Rachid (Anne Kirkbride) being wrongfully imprisoned after a relationship with con-man Jon Lindsay (Owen Aaronovitch). The episode in which Deirdre was sent to prison had an audience of 19 million viewers, and 'Free the Weatherfield One' campaigns sprang up in a media frenzy. The then-Prime Minister, Tony Blair, even passed comment on Deirdre's sentencing in Parliament. Deirdre was freed after three weeks, with Granada stating that they had always intended for her to be released, in spite of the media interest. 2000s On 8 December 2000, the show celebrated its 40th anniversary by broadcasting a live, hour-long episode. The Prince of Wales appeared as himself in an ITV News bulletin report. Earlier in the year, 13-year-old Sarah-Louise Platt (Tina O'Brien) had become pregnant and given birth to a baby girl, Bethany, on 4 June.
The February episode in which Gail was told of her daughter's pregnancy was watched by 15 million viewers. From 1999 to 2001, issue-led storylines were introduced, such as Toyah Battersby's (Georgia Taylor) rape, Roy and Hayley Cropper (David Neilson and Julie Hesmondhalgh) abducting their foster child, Sarah Platt's Internet chat room abduction and Alma Halliwell's (Amanda Barrie) death from cervical cancer. Such storylines were unpopular with viewers and ratings dropped; in October 2001, producer Jane Macnaught was abruptly moved to another Granada department and Carolyn Reynolds took over. In 2002, Kieran Roberts was appointed as producer and aimed to re-introduce "gentle storylines and humour", after deciding that the Street should not try to compete with other soaps. In July 2002, Gail married Richard Hillman (Brian Capron), a recently introduced financial advisor who had already left Duggie Ferguson (John Bowe) to die after he fell down a set of ladders during an argument, and murdered his ex-wife Patricia (Annabelle Apsion), before going on to kill neighbour Maxine Peacock (Tracy Shaw) and attempt to kill both his mother-in-law Audrey Roberts (Sue Nicholls) and her long-time friend Emily Bishop (Eileen Derbyshire) for financial gain due to his mounting debts. After confessing his crimes to Gail in a two-hander episode in February 2003, Hillman left the Street for two weeks before returning with the intent of killing himself as well as Gail, her children Sarah and David (Jack P. Shepherd), and granddaughter Bethany by driving them into a canal; the Platt family survived, but Richard drowned. This came just months after Sarah had survived serious injuries after being a passenger in a stolen car which crashed. The storyline received wide press attention, and viewing figures peaked at 19.4 million, with Hillman dubbed a "serial killer" by the media. Todd Grimshaw (Bruno Langley) became Corrie's first regular homosexual character.
In 2003, another gay male character was introduced, Sean Tully (Antony Cotton). Storylines during the decade included the bigamy of Peter Barlow (Chris Gascoyne) and his addiction to alcohol and, later in the decade, Maya Sharma's (Sasha Behar) revenge on former lover Dev Alahan (Jimmi Harkishin), Charlie Stubbs's (Bill Ward) psychological abuse of Shelley Unwin (Sally Lindsay), and the deaths of Mike Baldwin (Johnny Briggs), Vera Duckworth (Liz Dawn) and Fred Elliott (John Savident). In 2007, Tracy Barlow (Kate Ford) murdered Charlie Stubbs and claimed it was self-defence; the audience during this storyline peaked at 13.3 million. At the 2007 British Soap Awards, it won Best Storyline, and Ford was voted Best Actress for her portrayal. Other storylines included Leanne Battersby (Jane Danson) becoming a prostitute and the show's first bisexual love triangle (between Michelle Connor (Kym Marsh), Sonny Dhillon (Pal Aron), and Sean Tully (Antony Cotton)). In July 2007, after 34 years in the role of Vera Duckworth, Liz Dawn left the show due to ill health. After conversation between Dawn and producers Kieran Roberts and Steve Frost, the decision was made to kill Vera off. In January 2008, shortly before plans to retire to Blackpool, Vera's husband Jack (William Tarmey) found that she had died in her armchair. Tina O'Brien revealed in the British press on 4 April 2007 that she would be leaving Coronation Street later in the year. Sarah-Louise, who was involved in some of the decade's most controversial stories, left in December 2007 with her daughter, Bethany. In 2008, Michelle learned that Ryan (Ben Thompson) was not her biological son, having been accidentally swapped at birth with Alex Neeson (Dario Coates). Carla Connor (Alison King) turned to Liam for comfort and developed feelings for him. In spite of knowing about her feelings, Liam married Maria Sutherland (Samia Longchambon).
Maria and Liam's baby son was stillborn in April, and during an estrangement from Maria upon the death of their baby, Liam had a one-night stand with Carla, a story which helped pave the way for his departure. Gail Platt's (Helen Worth) son David (Jack P. Shepherd) pushed her down the stairs. Enraged that Gail refused to press charges, David vandalised the Street and was sent to a young offenders' facility for several months. In May 2008, Gail finally met Ted Page (Michael Byrne), the father she had never known, and in 2009, Gail's boyfriend Joe McIntyre (Reece Dinsdale) became addicted to painkillers, which came to a head when he broke into the medical centre. In August 2008, Jed Stone (Kenneth Cope) returned after 42 years. Liam Connor and his ex-sister-in-law Carla gave in to their feelings for each other and began an affair. Carla's fiancé Tony Gordon (Gray O'Brien) discovered the affair and had Liam killed in a hit-and-run in October. Carla struggled to come to terms with Liam's death, but decided she still loved Tony and married him on 3 December, in an episode attracting 10.3 million viewers. In April 2009, it was revealed that Eileen Grimshaw's (Sue Cleaver) father, Colin (Edward de Souza) – the son of Elsie Tanner's (Pat Phoenix) cousin Arnley – had slept with Eileen's old classmate, Paula Carp (Sharon Duce), while she was still at school, and that Paula's daughter Julie (Katy Cavanagh) was in fact also Colin's daughter. Other stories in 2009 included Maria giving birth to Liam's son and her subsequent relationship with Liam's killer Tony, Steve McDonald's (Simon Gregson) marriage to Becky Granger (Katherine Kelly) and Kevin Webster's (Michael Le Vell) affair with Molly Dobbs (Vicky Binns). On Christmas Day 2009, Sally Webster (Sally Dynevor) told husband Kevin that she had breast cancer, just as he was about to leave her for lover Molly.
2010s The show began broadcasting in high definition in May 2010, and on 17 September that year, Coronation Street entered Guinness World Records as the world's longest-running television soap opera after the American soap opera As the World Turns concluded. William Roache was listed as the world's longest-serving soap actor. Coronation Street's 50th anniversary week was celebrated with seven episodes, plus a special one-hour live episode, broadcast from 6–10 December. The episodes averaged 14 million viewers, a 52.1% share of the audience. The anniversary was also publicised with ITV specials and news broadcasts. In the storyline, Nick Tilsley and Leanne Battersby's bar – The Joinery – exploded during Peter Barlow's stag party. As a result, the viaduct was destroyed, sending a Metrolink tram careering onto the street, destroying D&S Alahan's Corner Shop and The Kabin. Two characters, Ashley Peacock (Steven Arnold) and Molly Dobbs (Vicky Binns), along with an unknown taxi driver, were killed as a result of the disaster. Rita Sullivan (Barbara Knox) survived, despite being trapped under the rubble of her destroyed shop. Fiz Stape (Jennie McAlpine) prematurely gave birth to a baby girl, Hope. The episode of EastEnders broadcast on the same day as Coronation Street's 50th anniversary episode included a tribute, with the character Dot Branning (June Brown, who briefly appeared in the show during the 1970s) saying that she never misses an episode of Coronation Street. 2020s On Friday 7 February 2020, with its 60th anniversary less than a year away, Coronation Street aired its landmark 10,000th episode, the runtime of which was extended to 60 minutes. Producers stated that the episode would contain "a nostalgic trip down memory lane" and "a nod to its own past".
A month later, ITV announced that production on the soap would have to be suspended, as the United Kingdom was put into a national lockdown due to the COVID-19 pandemic (see impact of the COVID-19 pandemic on television). After an 11-week intermission for all cast and crew members, filming resumed in June 2020. The episodes would feature social distancing to adhere to the guidelines set by the British government, and it was confirmed that all actors over 70, as well as those with underlying health conditions, would not be allowed to be on set until it was safe to do so. This included Coronation Street |
he was bored. While repeating the earlier stories, the later sources of Suetonius and Cassius Dio provide additional tales of insanity. They accuse Caligula of incest with his sisters, Agrippina the Younger, Drusilla, and Livilla, and say he prostituted them to other men. Additionally, they mention affairs with various men including his brother-in-law Marcus Lepidus. They state he sent troops on illogical military exercises, turned the palace into a brothel, and, most famously, planned or promised to make his horse, Incitatus, a consul, and actually appointed him a priest. The validity of these accounts is debatable. In Roman political culture, insanity and sexual perversity were often presented hand-in-hand with poor government. Assassination and aftermath Caligula's actions as emperor were described as being especially harsh to the Senate, to the nobility and to the equestrian order. According to Josephus, these actions led to several failed conspiracies against Caligula. Eventually, officers within the Praetorian Guard led by Cassius Chaerea succeeded in murdering the emperor. The plot is described as having been planned by three men, but many in the Senate, army and equestrian order were said to have been informed of it and involved in it. The situation had escalated when, in 40, Caligula announced to the Senate that he planned to leave Rome permanently and to move to Alexandria in Egypt, where he hoped to be worshipped as a living god. The prospect of Rome losing its emperor and thus its political power was the final straw for many. Such a move would have left both the Senate and the Praetorian Guard powerless to stop Caligula's repression and debauchery. With this in mind Chaerea convinced his fellow conspirators, who included Marcus Vinicius and Lucius Annius Vinicianus, to put their plot into action quickly. According to Josephus, Chaerea had political motivations for the assassination. Suetonius sees the motive in Caligula calling Chaerea derogatory names. 
Caligula considered Chaerea effeminate because of a weak voice and for not being firm with tax collection. Caligula would mock Chaerea with names like "Priapus" and "Venus". On 24 January 41, Cassius Chaerea and other guardsmen accosted Caligula as he addressed an acting troupe of young men beneath the palace, during a series of games and dramatics being held for the Divine Augustus. Details recorded on the events vary somewhat from source to source, but they agree that Chaerea stabbed Caligula first, followed by a number of conspirators. Suetonius records that Caligula's death resembled that of Julius Caesar. He states that both the elder Gaius Julius Caesar (Julius Caesar) and the younger Gaius Julius Caesar (Caligula) were stabbed 30 times by conspirators led by a man named Cassius (Cassius Longinus and Cassius Chaerea respectively). By the time Caligula's loyal Germanic guard responded, the Emperor was already dead. The Germanic guard, stricken with grief and rage, responded with a rampaging attack on the assassins, conspirators, innocent senators and bystanders alike. The wounded conspirators were treated by the physician Arcyon. The cryptoporticus (underground corridor) beneath the imperial palaces on the Palatine Hill where this event took place was discovered by archaeologists in 2008. The Senate attempted to use Caligula's death as an opportunity to restore the Republic. Chaerea tried to persuade the military to support the Senate. The military, though, remained loyal to the idea of imperial monarchy. Uncomfortable with lingering imperial support, the assassins sought out and killed Caligula's wife, Caesonia, and killed their young daughter, Julia Drusilla, by smashing her head against a wall. They were unable to reach Caligula's uncle, Claudius. After a soldier, Gratus, found Claudius hiding behind a palace curtain, he was spirited out of the city by a sympathetic faction of the Praetorian Guard to their nearby camp.
Claudius became emperor after procuring the support of the Praetorian Guard. Claudius granted a general amnesty, although he executed a few junior officers involved in the conspiracy, including Chaerea. According to Suetonius, Caligula's body was placed under turf until it was burned and entombed by his sisters. He was buried within the Mausoleum of Augustus; in 410, during the Sack of Rome, the ashes in the tomb were scattered. Legacy Historiography The facts and circumstances of Caligula's reign are mostly lost to history. Only two sources contemporary with Caligula have survived – the works of Philo and Seneca. Philo's works, On the Embassy to Gaius and Flaccus, give some details on Caligula's early reign, but mostly focus on events surrounding the Jewish population in Judea and Egypt with whom he sympathizes. Seneca's various works give mostly scattered anecdotes on Caligula's personality. Seneca was almost put to death by Caligula in AD 39 likely due to his associations with conspirators. At one time, there were detailed contemporaneous histories on Caligula, but they are now lost. Additionally, the historians who wrote them are described as biased, either overly critical or praising of Caligula. Nonetheless, these lost primary sources, along with the works of Seneca and Philo, were the basis of surviving secondary and tertiary histories on Caligula written by the next generations of historians. A few of the contemporaneous historians are known by name. Fabius Rusticus and Cluvius Rufus both wrote condemning histories on Caligula that are now lost. Fabius Rusticus was a friend of Seneca who was known for historical embellishment and misrepresentation. Cluvius Rufus was a senator involved in the assassination of Caligula. Caligula's sister, Agrippina the Younger, wrote an autobiography that certainly included a detailed explanation of Caligula's reign, but it too is lost. 
Agrippina was banished by Caligula for her connection to Marcus Lepidus, who conspired against him. The inheritance of Nero, Agrippina's son and the future emperor, was seized by Caligula. Gaetulicus, a poet, produced a number of flattering writings about Caligula, but they are lost. The bulk of what is known of Caligula comes from Suetonius and Cassius Dio. Suetonius wrote his history on Caligula 80 years after his death, while Cassius Dio wrote his history over 180 years after Caligula's death. Cassius Dio's work is invaluable because it alone gives a loose chronology of Caligula's reign. A handful of other sources add a limited perspective on Caligula. Josephus gives a detailed description of Caligula's assassination. Tacitus provides some information on Caligula's life under Tiberius. In a now lost portion of his Annals, Tacitus gave a detailed history of Caligula. Pliny the Elder's Natural History has a few brief references to Caligula. There are few surviving sources on Caligula and none of them paints Caligula in a favourable light. The paucity of sources has resulted in significant gaps in modern knowledge of the reign of Caligula. Little is written on the first two years of Caligula's reign. Additionally, there are only limited details on later significant events, such as the annexation of Mauretania, Caligula's military actions in Britannia, and his feud with the Roman Senate. Health All surviving sources, except Pliny the Elder, characterize Caligula as insane. However, it is not known whether they are speaking figuratively or literally. Additionally, given Caligula's unpopularity among the surviving sources, it is difficult to separate fact from fiction.
Recent sources are divided in attempting to ascribe a medical reason for his behavior, citing as possibilities encephalitis, epilepsy or meningitis. The question of whether Caligula was insane (especially after his illness early in his reign) remains unanswered. Philo of Alexandria, Josephus and Seneca state that Caligula was insane, but describe this madness as a personality trait that came through experience. Seneca states that Caligula became arrogant, angry and insulting once he became emperor and uses his personality flaws as examples his readers can learn from. According to Josephus, power made Caligula incredibly conceited and led him to think he was a god. Philo of Alexandria reports that Caligula became ruthless after nearly dying of an illness in the eighth month of his reign in 37. Juvenal reports he was given a magic potion that drove him insane. Suetonius said that Caligula suffered from "falling sickness", or epilepsy, when he was young. Modern historians have theorized that Caligula lived with a daily fear of seizures. Despite swimming being a part of imperial education, Caligula could not swim. Epileptics are discouraged from swimming in open waters because unexpected fits could lead to death, as a timely rescue would be difficult. Caligula reportedly talked to the full moon; epilepsy was long associated with the moon. Suetonius described Caligula as sickly-looking, skinny and pale: "he was tall, very pale, ill-shaped, his neck and legs very slender, his eyes and temples hollow, his brows broad and knit, his hair thin, and the crown of the head bald. The other parts of his body were much covered with hair ... He was crazy both in body and mind, being subject, when a boy, to the falling sickness. When he arrived at the age of manhood he endured fatigue tolerably well. Occasionally he was liable to faintness, during which he remained incapable of any effort".
Based on scientific reconstructions of his official painted busts, Caligula had brown hair, brown eyes, and fair skin. Some modern historians think that Caligula suffered from hyperthyroidism. This diagnosis is mainly attributed to Caligula's irritability and his "stare" as described by Pliny the Elder. Possible rediscovery of burial site On 17 January 2011, police in Nemi, Italy, announced that they believed they had discovered the site of Caligula's burial, after arresting a thief caught smuggling a statue which they believed to be of the emperor. The claim has been met with scepticism by Cambridge historian Mary Beard. Cultural depictions In film and series Welsh actor Emlyn Williams was cast as Caligula in the never-completed 1937 film I, Claudius. He was played by Ralph Bates in the 1968 ITV historical drama series, The Caesars. American actor Jay Robinson famously portrayed a sinister and scene-stealing Caligula in two epic films of the 1950s, The Robe (1953) and its sequel Demetrius and the Gladiators (1954). He was played by John Hurt in the 1976 BBC mini-series I, Claudius. A feature-length historical film Caligula was completed in 1979 with Malcolm McDowell in the lead role. The film contains explicit sex and violence. Caligula is a character in the 2015 NBC series A.D. The Bible Continues and is played by British actor Andrew Gower. His portrayal emphasises Caligula's "debauched and dangerous" persona as well as his sexual appetite, quick temper, and violent nature. The third season of the Roman Empire series (released on Netflix in 2019) is named Caligula: The Mad Emperor, with South African actor Ido Drent in the leading role. In literature and theatre Caligula, by French author Albert Camus, is a play in which Caligula returns after deserting the palace for three days and three nights following the death of his beloved sister, Drusilla. The young emperor then uses his unfettered power to "bring the impossible into the realm of the likely".
In the novel I, Claudius by English writer Robert Graves, Caligula is presented as being a murderous sociopath from his childhood, who became clinically insane early in his reign. At the age of only ten, he drove his father Germanicus to despair and death by secretly terrorizing him. Graves's Caligula commits incest with all three of his sisters and is implied to have murdered Drusilla. This was adapted for television in the 1976 BBC mini-series of the same name. References Bibliography Primary sources Cassius Dio, Roman History, Book 59 Josephus, Antiquities of the Jews, (trans. W. Whiston), Books XVIII–XIX Philo of Alexandria, (trans. C. D. Yonge, London, H. G. Bohn, 1854–1890): On the Embassy to Gaius Flaccus Seneca the Younger On Firmness On Anger To Marcia, On Consolation On Tranquility of Mind On the Shortness of Life To Polybius, On Consolation To Helvia, On Consolation On Benefits On the Terrors of Death (Epistle IV) On Taking One's Own Life (Epistle LXXVII) On the Value of Advice (Epistle XCIV) Suetonius, The Lives of Twelve Caesars, Life of Caligula Tacitus, Annals, Book 6 External links The portrait of Caligula in the Digital Sculpture Project Caligula Attempts to Conquer Britain in AD 40 Biography from De Imperatoribus Romanis Franz Lidz, "Caligula's Garden of Delights, Unearthed and Restored", New York Times, Jan. 12, 2021 |
He allowed new members into the equestrian and senatorial orders. Perhaps most significantly, he restored the practice of elections. Cassius Dio said that this act "though delighting the rabble, grieved the sensible, who stopped to reflect, that if the offices should fall once more into the hands of the many ... many disasters would result". During the same year, though, Caligula was criticized for executing people without full trials and for forcing the Praetorian prefect, Macro, to commit suicide. Macro had fallen out of favor with the emperor, probably due to an attempt to ally himself with Gemellus when it appeared that Caligula might die of fever. Financial crisis and famine According to Cassius Dio, a financial crisis emerged in 39. Suetonius places the beginning of this crisis in 38. Caligula's political payments for support, generosity and extravagance had exhausted the state's treasury. Ancient historians state that Caligula began falsely accusing, fining and even killing individuals for the purpose of seizing their estates. Historians describe a number of Caligula's other desperate measures. To gain funds, Caligula asked the public to lend the state money. He levied taxes on lawsuits, weddings and prostitution. Caligula began auctioning the lives of the gladiators at shows. Wills that left items to Tiberius were reinterpreted to leave the items instead to Caligula. Centurions who had acquired property by plunder were forced to turn over spoils to the state. The current and past highway commissioners were accused of incompetence and embezzlement and forced to repay money. According to Suetonius, in the first year of Caligula's reign he squandered 2.7 billion sesterces that Tiberius had amassed. His nephew Nero both envied and admired the fact that Gaius had run through the vast wealth Tiberius had left him in so short a time. However, some historians have shown scepticism towards the large number of sesterces quoted by Suetonius and Dio. 
According to Wilkinson, Caligula's use of precious metals to mint coins throughout his principate indicates that the treasury most likely never fell into bankruptcy. He does point out, however, that it is difficult to ascertain whether the purported 'squandered wealth' was from the treasury alone due to the blurring of "the division between the private wealth of the emperor and his income as head of state." Furthermore, Alston points out that Caligula's successor, Claudius, was able to donate 15,000 sesterces to each member of the Praetorian Guard in 41, suggesting the Roman treasury was solvent. A brief famine of unknown extent occurred, perhaps caused by this financial crisis, but Suetonius claims it resulted from Caligula's seizure of public carriages; according to Seneca, grain imports were disrupted because Caligula re-purposed grain boats for a pontoon bridge. Construction Despite financial difficulties, Caligula embarked on a number of construction projects during his reign. Some were for the public good, though others were for himself. Josephus describes Caligula's improvements to the harbours at Rhegium and Sicily, allowing increased grain imports from Egypt, as his greatest contributions. These improvements may have been in response to the famine. Caligula completed the temple of Augustus and the theatre of Pompey and began an amphitheatre beside the Saepta. He expanded the imperial palace. He began the aqueducts Aqua Claudia and Anio Novus, which Pliny the Elder considered engineering marvels. He built a large racetrack known as the circus of Gaius and Nero and had an Egyptian obelisk (now known as the "Vatican Obelisk") transported by sea and erected in the middle of Rome. At Syracuse, he repaired the city walls and the temples of the gods. He had new roads built and pushed to keep roads in good condition. 
He had planned to rebuild the palace of Polycrates at Samos, to finish the temple of Didymaean Apollo at Ephesus and to found a city high up in the Alps. He planned to dig a canal through the Isthmus of Corinth in Greece and sent a chief centurion to survey the work. In 39, Caligula performed a spectacular stunt by ordering a temporary floating bridge to be built using ships as pontoons, stretching for over two miles from the resort of Baiae to the neighbouring port of Puteoli. It was said that the bridge was to rival the Persian king Xerxes' pontoon bridge crossing of the Hellespont. Caligula, who could not swim, then proceeded to ride his favourite horse Incitatus across, wearing the breastplate of Alexander the Great. This act was in defiance of a prediction by Tiberius's soothsayer Thrasyllus of Mendes that Caligula had "no more chance of becoming emperor than of riding a horse across the Bay of Baiae". Caligula had two large ships constructed for himself (which were recovered from the bottom of Lake Nemi around 1930). The ships were among the largest vessels in the ancient world. The smaller ship was designed as a temple dedicated to Diana. The larger ship was essentially an elaborate floating palace with marble floors and plumbing. The ships burned in 1944 after an attack in the Second World War; almost nothing remains of their hulls, though many archaeological treasures remain intact in the museum at Lake Nemi and in the Museo Nazionale Romano (Palazzo Massimo) at Rome. Feud with the senate In 39, relations between Caligula and the Roman Senate deteriorated. The subject of their disagreement is unknown. A number of factors, though, aggravated this feud. The Senate had become accustomed to ruling without an emperor between the departure of Tiberius for Capri in 26 and Caligula's accession. Additionally, Tiberius' treason trials had eliminated a number of pro-Julian senators such as Asinius Gallus. 
Caligula reviewed Tiberius' records of treason trials and decided, based on their actions during these trials, that numerous senators were not trustworthy. He ordered a new set of investigations and trials. He replaced the consul and had several senators put to death. Suetonius reports that other senators were degraded by being forced to wait on him and run beside his chariot. Soon after his break with the Senate, Caligula faced a number of additional conspiracies against him. A conspiracy involving his brother-in-law was foiled in late 39. Soon afterwards, the Governor of Germany, Gnaeus Cornelius Lentulus Gaetulicus, was executed for connections to a conspiracy. Western expansion In 40, Caligula expanded the Roman Empire into Mauretania and made a significant attempt at expanding into Britannia. The conquest of Britannia was later achieved during the reign of his successor, Claudius. Mauretania Mauretania was a client kingdom of Rome ruled by Ptolemy of Mauretania. Caligula invited Ptolemy to Rome and then suddenly had him executed. Mauretania was annexed by Caligula and subsequently divided into two provinces, Mauretania Tingitana and Mauretania Caesariensis, separated by the river Malua. Pliny claims that division was the work of Caligula, but Dio states that in 42 an uprising took place, which was subdued by Gaius Suetonius Paulinus and Gnaeus Hosidius Geta, and the division only took place after this. This confusion might mean that Caligula decided to divide the province, but the division was postponed because of the rebellion. The first known equestrian governor of the two provinces was Marcus Fadius Celer Flavianus, in office in 44. Details on the Mauretanian events of 39–44 are unclear. Cassius Dio wrote an entire chapter on the annexation of Mauretania by Caligula, but it is now lost. 
Caligula's move seemingly had a strictly personal political motive – fear and jealousy of his cousin Ptolemy – and thus the expansion may not have been prompted by pressing military or economic needs. However, the rebellion of Tacfarinas had shown how exposed Africa Proconsularis was to its west and how the Mauretanian client kings were unable to provide protection to the province, and it is thus possible that Caligula's expansion was a prudent response to potential future threats. Britannia There seems to have been a northern campaign to Britannia that was aborted. This campaign is derided by ancient historians with accounts of Gauls dressed up as Germanic tribesmen at his triumph and Roman troops ordered to collect seashells as "spoils of the sea". The few primary sources disagree on what precisely occurred. Modern historians have put forward numerous theories in an attempt to explain these actions. This trip to the English Channel could have merely been a training and scouting mission. The mission may have been to accept the surrender of the British chieftain Adminius. "Seashells", or conchae in Latin, may be a metaphor for something else such as female genitalia (perhaps the troops visited brothels) or boats (perhaps they captured several small British boats). Claims of divinity When several client kings came to Rome to pay their respects to him and argued about their nobility of descent, he allegedly cried out the Homeric line: "Let there be one lord, one king." In 40, Caligula began implementing very controversial policies that introduced religion into his political role. Caligula began appearing in public dressed as various gods and demigods such as Hercules, Mercury, Venus and Apollo. Reportedly, he began referring to himself as a god when meeting with politicians and he was referred to as "Jupiter" on occasion in public documents. 
A sacred precinct was set apart for his worship at Miletus in the province of Asia and two temples were erected for worship of him in Rome. The Temple of Castor and Pollux on the forum was linked directly to the imperial residence on the Palatine and dedicated to Caligula. He would appear there on occasion and present himself as a god to the public. Caligula had the heads removed from various statues of gods located across Rome and replaced them with his own. It is said that he wished to be worshipped as Neos Helios, the "New Sun". Indeed, he was represented as a sun god on Egyptian coins. Caligula's religious policy was a departure from that of his predecessors. According to Cassius Dio, living emperors could be worshipped as divine in the east and dead emperors could be worshipped as divine in Rome. Augustus had the public worship his spirit on occasion, but Dio describes this as an extreme act that emperors generally shied away from. Caligula took things a step further and had those in Rome, including senators, worship him as a tangible, living god. Eastern policy Caligula needed to quell several riots and conspiracies in the eastern territories during his reign. Aiding him in his actions was his good friend, Herod Agrippa, who became governor of the territories of Batanaea and Trachonitis after Caligula became emperor in 37. The cause of tensions in the east was complicated, involving the spread of Greek culture, Roman law and the rights of Jews in the empire. Caligula did not trust the prefect of Egypt, Aulus Avilius Flaccus. Flaccus had been loyal to Tiberius, had conspired against Caligula's mother and had connections with Egyptian separatists. In 38, Caligula sent Agrippa to Alexandria unannounced to check on Flaccus. According to Philo, the visit was met with jeers from the Greek population who saw Agrippa as the king of the Jews. As a result, riots broke out in the city. Caligula responded by removing Flaccus from his position and executing him. 
In 39, Agrippa accused 
effectively calculable function. Although the thesis has near-universal acceptance, it cannot be formally proven, as the concept of effective calculability is only informally defined. Since its inception, variations on the original thesis have arisen, including statements about what can physically be realized by a computer in our universe (physical Church–Turing thesis) and what can be efficiently computed (Church–Turing thesis (complexity theory)). These variations are not due to Church or Turing, but arise from later work in complexity theory and digital physics. The thesis also has implications for the philosophy of mind (see below). Statement in Church's and Turing's words The notion of "effective computability" is addressed as follows: "Clearly the existence of CC and RC (Church's and Rosser's proofs) presupposes a precise definition of 'effective'. 'Effective method' is here used in the rather special sense of a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps". Thus the adverb-adjective "effective" is used in the sense of "1a: producing a decided, decisive, or desired effect", and "capable of producing a result". In the following, the words "effectively calculable" will mean "produced by any intuitively 'effective' means whatsoever" and "effectively computable" will mean "produced by a Turing-machine or equivalent mechanical device". Turing's "definitions", given in a footnote in his 1938 Ph.D. thesis Systems of Logic Based on Ordinals, supervised by Church, are virtually the same: We shall use the expression "computable function" to mean a function calculable by a machine, and let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions. The thesis can be stated as: Every effectively calculable function is a computable function. 
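The identification of "effectively calculable" with "calculable by a machine" can be made concrete with a toy simulator. The sketch below is purely illustrative: the transition-table encoding, the state names, and the binary-successor machine are assumptions of this example, not any historical formulation. It runs a small Turing machine that adds one to a binary numeral, which is the operational sense of "a function calculable by a machine".

```python
# Minimal Turing machine simulator (illustrative sketch, not a historical model).
# A machine is a dict: (state, symbol) -> (symbol_to_write, move, next_state).
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    else:
        raise RuntimeError("no halt within step budget")
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Binary successor: scan right to the end of the numeral, then add 1 with carry.
INCREMENT = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "0"): ("1", "N", "halt"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "_"): ("1", "N", "halt"),
}

print(run_tm(INCREMENT, "1011"))  # 1011 (11) + 1 -> 1100 (12)
```

Any function computed this way, by a finite rule table over a tape, is "computable" in the sense the thesis equates with effective calculability.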
Church also stated that "No computational procedure will be considered as an algorithm unless it can be represented as a Turing Machine". Turing stated it this way: It was stated ... that "a function is effectively calculable if its values can be found by some purely mechanical process". We may take this literally, understanding by a purely mechanical process one which could be carried out by a machine. The development ... leads to ... an identification of computability with effective calculability. [ is the footnote quoted above.] History One of the important problems for logicians in the 1930s was the Entscheidungsproblem of David Hilbert and Wilhelm Ackermann, which asked whether there was a mechanical procedure for separating mathematical truths from mathematical falsehoods. This quest required that the notion of "algorithm" or "effective calculability" be pinned down, at least well enough for the quest to begin. But from the very outset Alonzo Church's attempts began with a debate that continues to this day: was the notion of "effective calculability" to be (i) an "axiom or axioms" in an axiomatic system, (ii) merely a definition that "identified" two or more propositions, (iii) an empirical hypothesis to be verified by observation of natural events, or (iv) just a proposal for the sake of argument (i.e. a "thesis")? Circa 1930–1952 In the course of studying the problem, Church and his student Stephen Kleene introduced the notion of λ-definable functions, and they were able to prove that several large classes of functions frequently encountered in number theory were λ-definable. The debate began when Church proposed to Gödel that one should define the "effectively computable" functions as the λ-definable functions. Gödel, however, was not convinced and called the proposal "thoroughly unsatisfactory". Rather, in correspondence with Church (c. 
1934–35), Gödel proposed axiomatizing the notion of "effective calculability"; indeed, in a 1935 letter to Kleene, Church reported that: But Gödel offered no further guidance. Eventually, he would suggest his recursion, modified by Herbrand's suggestion, that Gödel had detailed in his 1934 lectures in Princeton NJ (Kleene and Rosser transcribed the notes). But he did not think that the two ideas could be satisfactorily identified "except heuristically". Next, it was necessary to identify and prove the equivalence of two notions of effective calculability. Equipped with the λ-calculus and "general" recursion, Stephen Kleene with help of Church and J. Barkley Rosser produced proofs (1933, 1935) to show that the two calculi are equivalent. Church subsequently modified his methods to include use of Herbrand–Gödel recursion and then proved (1936) that the Entscheidungsproblem is unsolvable: there is no algorithm that can determine whether a well formed formula has a beta normal form. Many years later in a letter to Davis (c. 1965), Gödel said that "he was, at the time of these [1934] lectures, not at all convinced that his concept of recursion comprised all possible recursions". By 1963–64 Gödel would disavow Herbrand–Gödel recursion and the λ-calculus in favor of the Turing machine as the definition of "algorithm" or "mechanical procedure" or "formal system". A hypothesis leading to a natural law?: In late 1936 Alan Turing's paper (also proving that the Entscheidungsproblem is unsolvable) was delivered orally, but had not yet appeared in print. On the other hand, Emil Post's 1936 paper had appeared and was certified independent of Turing's work. Post strongly disagreed with Church's "identification" of effective computability with the λ-calculus and recursion, stating: Rather, he regarded the notion of "effective calculability" as merely a "working hypothesis" that might lead by inductive reasoning to a "natural law" rather than by "a definition or an axiom". 
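The λ-definable functions at the centre of this debate can be illustrated with Church numerals, the standard λ-calculus encoding in which a number n is the function that applies its argument n times. The sketch below uses Python lambdas as a stand-in for λ-terms; the decoder `to_int` and the names are conveniences of this example, not part of the historical calculus.

```python
# Church numerals: n is encoded as the higher-order function f -> f∘f∘...∘f (n times).
# Arithmetic then becomes pure function application, which is what it means
# for a number-theoretic function to be λ-definable.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))      # apply f once more
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
MUL  = lambda m: lambda n: lambda f: m(n(f))         # compose n applications, m times

def to_int(n):
    """Decode a Church numeral by counting applications of +1."""
    return n(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
print(to_int(ADD(TWO)(THREE)))  # 5
print(to_int(MUL(TWO)(THREE)))  # 6
```

The equivalence results cited above amount to showing that everything definable by such terms is also Herbrand–Gödel recursive and Turing-computable, and conversely.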
This idea was "sharply" criticized by Church. Thus Post in his 1936 paper was also discounting Kurt Gödel's suggestion to Church in 1934–35 that the thesis might be expressed as an axiom or set of axioms. Turing adds another definition, Rosser equates all three: Within just a short time, Turing's 1936–37 paper "On Computable Numbers, with an Application to the Entscheidungsproblem" appeared. In it he stated another notion of "effective computability" with the introduction of his a-machines (now known as the Turing machine abstract computational model). And in a proof-sketch added as an "Appendix" to his 1936–37 paper, Turing showed that the classes of functions defined by λ-calculus and Turing machines coincided. Church was quick to recognise how compelling Turing's analysis was. In his review of Turing's paper he made clear that Turing's notion made "the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately". In a few years (1939) Turing would propose, like Church and Kleene before him, that his formal definition of mechanical computing agent was the correct one. Thus, by 1939, both Church (1934) and Turing (1939) had individually proposed that their "formal systems" should be definitions of "effective calculability"; neither framed their statements as theses. Rosser (1939) formally identified the three notions-as-definitions: Kleene proposes Thesis I: This left the overt expression of a "thesis" to Kleene. In 1943 Kleene proposed his "THESIS I": The Church–Turing Thesis: Stephen Kleene, in Introduction To Metamathematics, finally goes on to formally name "Church's Thesis" and "Turing's Thesis", using his theory of recursive realizability. Kleene had switched from presenting his work in the terminology of Church–Kleene lambda definability to that of Gödel–Kleene recursiveness (partial recursive functions). 
In this transition, Kleene modified Gödel's general recursive functions to allow for proofs of the unsolvability of problems in the Intuitionism of E. J. Brouwer. In his graduate textbook on logic, "Church's thesis" is introduced and basic mathematical results are demonstrated to be unrealizable. Next, Kleene proceeds to present "Turing's thesis", where results are shown to be uncomputable, using his simplified derivation of a Turing machine based on the work of Emil Post. Both theses are proven equivalent by use of "Theorem XXX". Kleene, finally, uses the term "Church–Turing thesis" for the first time in a section in which he helps to clarify concepts in Alan Turing's paper "The Word Problem in Semi-Groups with Cancellation", as demanded in a critique from William Boone. Later developments An attempt to understand the notion of "effective computability" better led Robin Gandy (Turing's student and friend) in 1980 to analyze machine computation (as opposed to human computation carried out by a Turing machine). Gandy's curiosity about, and analysis of, cellular automata (including Conway's game 
with the surname include: Alejandro Chomski (born 1968), Argentine film director and screenwriter Aviva Chomsky (born 1957), American historian Carol (Schatz) Chomsky (1930–2008), American linguist and wife of Noam Chomsky Judith Chomsky (born 1942), American human rights lawyer and co-founder of the Juvenile Law Center Marvin J. Chomsky (born 1929), American television and film director Noam Chomsky (born 1928), American linguist and political activist, professor at MIT (born 1957), Polish speedway rider and coach William Chomsky (1896–1977), American scholar of Hebrew (1925–2016), Soviet and Russian theater director See also Gryf coat of arms Odrowąż coat of arms Slavic-language surnames Polish-language surnames Surnames of Polish origin Polish toponymic surnames 
serving many users by multitasking their individual programs. In multiprogramming systems, a task runs until it must wait for an external event or until the operating system's scheduler forcibly swaps the running task out of the CPU. Real-time systems such as those designed to control industrial robots, require timely processing; a single processor might be shared between calculations of machine movement, communications, and user interface. Often multitasking operating systems include measures to change the priority of individual tasks, so that important jobs receive more processor time than those considered less significant. Depending on the operating system, a task might be as large as an entire application program, or might be made up of smaller threads that carry out portions of the overall program. A processor intended for use with multitasking operating systems may include special hardware to securely support multiple tasks, such as memory protection, and protection rings that ensure the supervisory software cannot be damaged or subverted by user-mode program errors. The term "multitasking" has become an international term, as the same word is used in many other languages such as German, Italian, Dutch, Romanian, Czech, Danish and Norwegian. Multiprogramming In the early days of computing, CPU time was expensive, and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the central processing unit (CPU) would have to stop executing program instructions while the peripheral processed the data. This was usually very inefficient. The first computer using a multiprogramming system was the British Leo III owned by J. Lyons and Co. During batch processing, several different programs were loaded in the computer memory, and the first one began to run. When the first program reached an instruction waiting for a peripheral, the context of this program was stored away, and the second program in memory was given a chance to run. 
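The run-until-blocked behaviour described above can be sketched with a toy scheduler in which each "program" is a generator that yields whenever it would wait on a peripheral. Everything here (the `program`/`multiprogram` names, the deque-based ready queue, the I/O-burst model) is an illustrative assumption of this sketch, not a description of any real operating system.

```python
from collections import deque

# Toy model of batch multiprogramming: a "program" runs until it would block
# on a peripheral, at which point it yields; the "operating system" saves its
# context (the generator's own suspended state) and dispatches the next program.
def program(name, bursts):
    for i in range(bursts):
        # ... compute for a while, then block on a peripheral ...
        yield f"{name}: waiting on I/O (burst {i})"

def multiprogram(programs):
    log = []
    ready = deque(programs)
    while ready:
        prog = ready.popleft()      # dispatch the next ready program
        try:
            log.append(next(prog))  # run it until it blocks on I/O
            ready.append(prog)      # context saved; requeue for later
        except StopIteration:
            pass                    # program finished; drop it
    return log

print(multiprogram([program("A", 2), program("B", 1)]))
# Programs interleave at their I/O waits rather than running back to back.
```

The point of the model is that CPU time freed by one program's I/O wait is immediately given to another, which is exactly the inefficiency multiprogramming removed.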
The process continued until all programs finished running. The use of multiprogramming was enhanced by the arrival of virtual memory and virtual machine technology, which enabled individual programs to make use of memory and operating system resources as if other concurrently running programs were, for all practical purposes, nonexistent. Multiprogramming gives no guarantee that a program will run in a timely manner. Indeed, the first program may very well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed in a deck of punched cards to an operator, and came back a few hours later for printed results. Multiprogramming greatly reduced wait times when multiple batches were being processed. Cooperative multitasking Early multitasking systems used applications that voluntarily ceded time to one another. This approach, which was eventually supported by many computer operating systems, is known today as cooperative multitasking. Although it is now rarely used in larger systems except for specific applications such as CICS or the JES2 subsystem, cooperative multitasking was once the only scheduling scheme employed by Microsoft Windows and classic Mac OS to enable multiple applications to run simultaneously. Cooperative multitasking is still used today on RISC OS systems. As a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself, either by performing extensive calculations or by busy waiting; both would cause the whole system to hang. In a server environment, this is a hazard that makes the entire environment unacceptably fragile. Preemptive multitasking Preemptive multitasking allows the computer system to more reliably guarantee to each process a regular "slice" of operating time. 
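The difference a time slice makes can be simulated by treating each generator step as one "instruction" and having the scheduler forcibly switch tasks after a fixed quantum. This is only a model: real preemption is driven by a hardware timer interrupt, and the task names and quantum value here are assumptions of the sketch.

```python
from collections import deque

# Simulated preemptive scheduling. Each next() is one "instruction"; the
# scheduler ends a task's turn after a fixed quantum whether or not the
# task cooperates, so no single CPU-bound task can monopolize the system.
def cpu_bound(name, instructions):
    for i in range(instructions):
        yield f"{name}{i}"

def preemptive_run(tasks, quantum=2):
    trace = []
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        for _ in range(quantum):        # the task's time slice
            try:
                trace.append(next(task))
            except StopIteration:
                break                   # task finished early
        else:
            ready.append(task)          # quantum expired: preempt and requeue
    return trace

print(preemptive_run([cpu_bound("A", 3), cpu_bound("B", 3)]))
# ['A0', 'A1', 'B0', 'B1', 'A2', 'B2']
```

Under cooperative scheduling the same `cpu_bound` tasks would run to completion one after another; the enforced quantum is what guarantees each task its regular slice.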
It also allows the system to deal rapidly with important external events like incoming data, which might require the immediate attention of one or another process. Operating systems were developed to take advantage of these hardware capabilities and run multiple processes preemptively. Preemptive multitasking was implemented in the PDP-6 Monitor and MULTICS in 1964, in OS/360 MFT in 1967, and in Unix in 1969, and was available in some operating systems for computers as small as DEC's PDP-8; | runs until it must wait for an external event or until the operating system's scheduler forcibly swaps the running task out of the CPU. Real-time systems such as those designed to control industrial robots, require timely processing; a single processor might be shared between calculations of machine movement, communications, and user interface. Often multitasking operating systems include measures to change the priority of individual tasks, so that important jobs receive more processor time than those considered less significant. Depending on the operating system, a task might be as large as an entire application program, or might be made up of smaller threads that carry out portions of the overall program. A processor intended for use with multitasking operating systems may include special hardware to securely support multiple tasks, such as memory protection, and protection rings that ensure the supervisory software cannot be damaged or subverted by user-mode program errors. The term "multitasking" has become an international term, as the same word is used in many other languages such as German, Italian, Dutch, Romanian, Czech, Danish and Norwegian. Multiprogramming In the early days of computing, CPU time was expensive, and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the central processing unit (CPU) would have to stop executing program instructions while the peripheral processed the data. 
This was usually very inefficient. The first computer using a multiprogramming system was the British Leo III, owned by J. Lyons and Co. During batch processing, several different programs were loaded into the computer's memory, and the first one began to run. When the first program reached an instruction that waited on a peripheral, the context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running. The use of multiprogramming was enhanced by the arrival of virtual memory and virtual machine technology, which enabled individual programs to make use of memory and operating system resources as if other concurrently running programs were, for all practical purposes, nonexistent.

Multiprogramming gives no guarantee that a program will run in a timely manner. Indeed, the first program may very well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed in a deck of punched cards to an operator and came back a few hours later for printed results. Multiprogramming greatly reduced wait times when multiple batches were being processed.

Cooperative multitasking

Early multitasking systems used applications that voluntarily ceded time to one another. This approach, which was eventually supported by many computer operating systems, is known today as cooperative multitasking. Although it is now rarely used in larger systems except for specific applications such as CICS or the JES2 subsystem, cooperative multitasking was once the only scheduling scheme employed by Microsoft Windows and classic Mac OS to enable multiple applications to run simultaneously. Cooperative multitasking is still used today on RISC OS systems.
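The voluntary-yield scheme can be sketched in a few lines of Python, using generators as tasks and a round-robin queue as the scheduler. This is only an illustration of the idea, not any real operating system's implementation; the names `task` and `run_cooperatively` are invented for this sketch.

```python
from collections import deque

def task(name, steps, log):
    """A cooperative task: record one step of work, then cede the CPU."""
    for i in range(steps):
        log.append((name, i))
        yield  # voluntary context switch back to the scheduler

def run_cooperatively(tasks):
    """Round-robin scheduler: resume each task until its next yield."""
    ready = deque(tasks)
    while ready:
        current = ready.popleft()
        try:
            next(current)          # run the task up to its next yield
            ready.append(current)  # still runnable: back of the queue
        except StopIteration:
            pass                   # task finished: drop it

log = []
run_cooperatively([task("A", 3, log), task("B", 2, log)])
print(log)
# [('A', 0), ('B', 0), ('A', 1), ('B', 1), ('A', 2)]
```

The tasks interleave only because each one yields; if a task looped without ever yielding, the scheduler would never regain control and every other task would starve, which is exactly the fragility of this scheme.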
As a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself, either by performing extensive calculations or by busy waiting; both would cause the whole system to hang. In a server environment, this is a hazard that makes the entire environment unacceptably fragile.

Preemptive multitasking

Preemptive multitasking allows the computer system to more reliably guarantee each process a regular "slice" of operating time. It also allows the system to deal rapidly with important external events, such as incoming data, which might require the immediate attention of one or another process. Operating systems were developed to take advantage of hardware capabilities such as interrupt mechanisms and run multiple processes preemptively. Preemptive multitasking was implemented in the PDP-6 Monitor and MULTICS in 1964, in OS/360 MFT in 1967, and in Unix in 1969, and was available in some operating systems for computers as small as DEC's PDP-8; it is a core feature of all Unix-like operating systems, such as Linux, Solaris and BSD with its derivatives, as well as modern versions of Windows.
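Under preemptive scheduling the explicit yields disappear. In this Python sketch (again purely illustrative), two threads run with no yield statements at all: the operating system's scheduler decides when each thread is swapped out. Because a thread can be preempted between reading and writing shared state, the shared counter is guarded with a lock.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    """Increment the shared counter; the OS may preempt us at any point."""
    global counter
    for _ in range(increments):
        with lock:          # protect the read-modify-write from preemption
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                # wait for both preemptively scheduled threads

print(counter)  # 200000 -- correct only because each update was locked
```

Unlike the cooperative sketch, a thread stuck in a long computation here cannot hang the system: the scheduler simply preempts it when its time slice expires.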
with Christianity, nationalism and authoritarianism that have some similarities to fascism. Frederic Wakeman suggested that the New Life Movement was "Confucian fascism". Under Chiang’s rule, there also existed the Blue Shirts Society, which was largely modelled on the Blackshirts of Italy's National Fascist Party and on the Sturmabteilung of the NSDAP. Its ideology was to expel foreign (Japanese and Western) imperialists from China and to crush communism. Close Sino-German ties also promoted cooperation between the Kuomintang and the Nazi Party (NSDAP).

Chiang has often been interpreted as pro-capitalist, but this conclusion is problematic. Shanghai capitalists did briefly support him out of fear of communism in 1927, but this support eroded in 1928 when Chiang turned his tactics of intimidation on them. The relationship between Chiang Kai-shek and Chinese capitalists remained poor throughout the period of his administration. Chiang blocked Chinese capitalists from gaining any political power or voice within his regime. Once Chiang Kai-shek had concluded his White Terror against pro-communist laborers, he turned on the capitalists. Gangster connections allowed Chiang to attack them in the International Settlement, successfully forcing capitalists to back his military expeditions with their assets. Chiang viewed all of the foreign great powers with suspicion, writing in a letter that they "all have it in their minds to promote the interests of their own respective countries at the cost of other nations" and seeing it as hypocritical for any of them to condemn each other's foreign policy. He used diplomatic persuasion on the United States, Germany, and the Soviet Union to regain lost Chinese territories, as he viewed all foreign powers as imperialists who were attempting to curtail and suppress China's power and national resurrection.
Mass deaths under Nationalist rule

Some sources blame Chiang Kai-shek for millions of deaths in scattered events caused by the Nationalist Government of China. Rudolph Rummel, however, puts some of the responsibility on the Nationalist regime as a whole rather than all on Chiang Kai-shek in particular. Rummel writes that from its founding to its defeat in 1949, the Nationalist government probably killed between roughly 6 and 18.5 million people. The major causes include:

Thousands of communists and communist sympathizers killed during and in the year after the 1927 Shanghai massacre.

In 1938, to stop the Japanese advance, Chiang ordered the Yellow River dikes to be breached. An official postwar commission estimated that the total number of those who perished from malnutrition, famine, disease, or drowning might be as high as 800,000.

In 1943, 1.75 to 2.5 million Henan civilians starved to death due to grain being confiscated and sold for the profit of Nationalist government officials.

Some 4,212,000 Chinese perished during the Second Sino-Japanese War and the Civil War, starving to death or dying of disease during conscription campaigns.

First phase of the Chinese Civil War

In Nanjing, in April 1931, Chiang Kai-shek attended a national leadership conference with Zhang Xueliang and General Ma Fuxiang, in which Chiang and Zhang resolutely maintained that Manchuria was part of China in the face of the Japanese invasion. After the Japanese invasion of Manchuria in 1931, Chiang resigned as Chairman of the National Government. He returned shortly afterwards, adopting the slogan "first internal pacification, then external resistance". However, this policy of avoiding a frontal war against the Japanese was widely unpopular. In 1932, while Chiang was seeking first to defeat the Communists, Japan launched an advance on Shanghai and bombarded Nanjing.
This disrupted Chiang's offensives against the Communists for a time, although it was the northern factions of Hu Hanmin's Kwangtung government (notably the 19th Route Army) that primarily led the offensive against the Japanese during this skirmish. Brought into the Nationalist army immediately after the battle, the 19th Route Army was soon disbanded for demonstrating socialist tendencies, cutting its career under Chiang short. In December 1936, Chiang flew to Xi'an to coordinate a major assault on the Red Army and the Communist Republic that had retreated into Yan'an. However, Chiang's allied commander Zhang Xueliang, whose forces were used in his attack and whose homeland of Manchuria had recently been invaded by the Japanese, did not support the attack on the Communists. On 12 December, Zhang and several other Nationalist generals, headed by Yang Hucheng of Shaanxi, kidnapped Chiang for two weeks in what is known as the Xi'an Incident. They forced Chiang into making a "Second United Front" with the Communists against Japan. After Chiang was released, Zhang returned with him to Nanjing, where Zhang was placed under house arrest and the generals who had assisted him were executed. Chiang's commitment to the Second United Front was nominal at best, and it was all but broken up in 1941.

Second Sino-Japanese War

The Second Sino-Japanese War broke out in July 1937, and in August of that year Chiang sent his best-trained and equipped soldiers to defend Shanghai. With over 200,000 Chinese casualties, Chiang lost the political cream of his Whampoa-trained officers. Although Chiang lost militarily, the battle dispelled Japanese claims that Japan could conquer China in three months and demonstrated to the Western powers that the Chinese would continue the fight. By December, the capital city of Nanjing had fallen to the Japanese, resulting in the Nanking Massacre. Chiang moved the government inland, first to Wuhan and later to Chongqing.
Having lost most of China's economic and industrial centers, Chiang withdrew into the hinterlands, stretching the Japanese supply lines and bogging down Japanese soldiers in the vast Chinese interior. As part of a policy of protracted resistance, Chiang authorized the use of scorched earth tactics, resulting in many civilian deaths. During the Nationalists' retreat from Zhengzhou, the dams around the city were deliberately destroyed by the Nationalist army to delay the Japanese advance, killing 500,000 people in the subsequent 1938 Yellow River flood. After heavy fighting, the Japanese occupied Wuhan in the fall of 1938, and the Nationalists retreated farther inland, to Chongqing. While en route to Chongqing, the Nationalist army intentionally started the "fire of Changsha" as part of the scorched earth policy. The fire destroyed much of the city, killed twenty thousand civilians, and left hundreds of thousands of people homeless. Due to an organizational error (it was claimed), the fire was started without any warning to the residents of the city. The Nationalists eventually blamed three local commanders for the fire and executed them. Newspapers across China blamed the fire on (non-KMT) arsonists, but the blaze contributed to a nationwide loss of support for the KMT. In 1939, the Muslim leaders Isa Yusuf Alptekin and Ma Fuliang were sent by Chiang to several Middle Eastern countries, including Egypt, Turkey, and Syria, to gain support for the Chinese war against Japan and to express his support for Muslims. The Japanese, controlling the puppet state of Manchukuo and much of China's eastern seaboard, appointed Wang Jingwei as a quisling ruler of the occupied Chinese territories around Nanjing. Wang named himself President of the Executive Yuan and Chairman of the National Government (not the same "National Government" as Chiang's), and led a surprisingly large minority of anti-Chiang, anti-Communist Chinese against his old comrades.
He died in 1944, within a year of the end of World War II. The Hui Muslim Xidaotang sect pledged allegiance to the Kuomintang after the party's rise to power, and the Hui Muslim General Bai Chongxi acquainted Chiang Kai-shek with the Xidaotang jiaozhu Ma Mingren in 1941 in Chongqing. In 1942, Chiang toured northwestern China, visiting Xinjiang, Gansu, Ningxia, Shaanxi, and Qinghai, where he met both Muslim Generals Ma Buqing and Ma Bufang. He also met the Muslim Generals Ma Hongbin and Ma Hongkui separately. A border crisis erupted with Tibet in 1942. Under orders from Chiang, Ma Bufang repaired Yushu airport to prevent Tibetan separatists from seeking independence. Chiang also ordered Ma Bufang to put his Muslim soldiers on alert for an invasion of Tibet in 1942. Ma Bufang complied and moved several thousand troops to the border with Tibet. Chiang also threatened the Tibetans with aerial bombardment if they worked with the Japanese. Ma Bufang attacked the Tibetan Buddhist Tsang monastery in 1941, and repeatedly attacked the Labrang Monastery. With the attack on Pearl Harbor and the opening of the Pacific War, China became one of the Allied Powers. During and after World War II, Chiang and his American-educated wife Soong Mei-ling, known in the United States as "Madame Chiang", held the support of the China Lobby in the United States, which saw in them the hope of a Christian and democratic China. Chiang was named the Supreme Commander of Allied forces in the China war zone, and was appointed Knight Grand Cross of the Order of the Bath in 1942. General Joseph Stilwell, an American military adviser to Chiang during World War II, strongly criticized Chiang and his generals for what he saw as their incompetence and corruption. In 1944, the United States Army Air Forces commenced Operation Matterhorn to bomb Japan's steel industry from bases to be constructed in mainland China.
This was meant to fulfill President Roosevelt's promise to Chiang Kai-shek to begin bombing operations against Japan by November 1944. However, Chiang Kai-shek's subordinates refused to take airbase construction seriously until enough capital had been delivered to permit embezzlement on a massive scale. Stilwell estimated that at least half of the $100 million spent on construction of airbases was embezzled by Nationalist party officials. Chiang played the Soviets and Americans against each other during the war. He first told the Americans that they would be welcome in talks between the Soviet Union and China, then secretly told the Soviets that the Americans were unimportant and that their opinions would not be considered. Chiang also used American support and military power in China against the ambitions of the Soviet Union to dominate the talks, stopping the Soviets from taking full advantage of the situation in China with the threat of American military action against the Soviets.

French Indochina

U.S. President Franklin D. Roosevelt, through General Stilwell, privately made it clear that he preferred that the French not reacquire French Indochina (modern-day Vietnam, Cambodia and Laos) after the war was over. Roosevelt offered Chiang control of all of Indochina. It was said that Chiang replied: "Under no circumstances!" After the war, 200,000 Chinese troops under General Lu Han were sent by Chiang Kai-shek to northern Indochina (north of the 16th parallel) to accept the surrender of Japanese occupying forces there, and remained in Indochina until 1946, when the French returned. The Chinese used the VNQDD, the Vietnamese branch of the Chinese Kuomintang, to increase their influence in Indochina and to put pressure on their opponents. Chiang Kai-shek threatened the French with war in response to maneuvering by the French and Ho Chi Minh's forces against each other, forcing them to come to a peace agreement.
In February 1946, he also forced the French to surrender all of their concessions in China and to renounce their extraterritorial privileges, in exchange for the Chinese withdrawing from northern Indochina and allowing French troops to reoccupy the region. Following France's agreement to these demands, the withdrawal of Chinese troops began in March 1946.

Ryukyus

During the Cairo Conference in 1943, Chiang said that Roosevelt asked him whether China would like to claim the Ryukyu Islands from Japan in addition to retaking Taiwan, the Pescadores, and Manchuria. Chiang claimed that he said he was in favor of an international presence on the islands. However, the U.S. occupied the Ryukyus from 1945 until 1972, when Prime Minister Eisaku Sato negotiated the Okinawa reversion agreement with Nixon, signed in 1971, and Okinawa was returned to Japan.

Second phase of the Chinese Civil War

Treatment and use of Japanese soldiers

In 1945, when Japan surrendered, Chiang's Chongqing government was ill-equipped and ill-prepared to reassert its authority in formerly Japanese-occupied China, and it asked the Japanese to postpone their surrender until Kuomintang (KMT) authority could arrive to take over. American troops and weapons soon bolstered KMT forces, allowing them to reclaim cities. The countryside, however, remained largely under Communist control. For over a year after the Japanese surrender, rumors circulated throughout China that the Japanese had entered into a secret agreement with Chiang, in which the Japanese would assist the Nationalists in fighting the Communists in exchange for the protection of Japanese persons and property there. Many top Nationalist generals, including Chiang, had studied and trained in Japan before the Nationalists had returned to the mainland in the 1920s, and maintained close personal friendships with top Japanese officers.
The Japanese general in charge of all forces in China, General Yasuji Okamura, had personally trained officers who later became generals in Chiang's staff. Reportedly, General Okamura, before surrendering command of all Japanese military forces in Nanjing, offered Chiang control of all 1.5 million Japanese military and civilian support staff then present in China. Reportedly, Chiang seriously considered accepting this offer, but declined only in the knowledge that the United States would certainly be outraged by the gesture. Even so, armed Japanese troops remained in China well into 1947, with some noncommissioned officers finding their way into the Nationalist officer corps. That the Japanese in China came to regard Chiang as a magnanimous figure to whom many Japanese owed their lives and livelihoods was attested by both Nationalist and Communist sources.

Conditions during the Chinese Civil War

Westad says the Communists won the Civil War because they made fewer military mistakes than Chiang Kai-shek, and because, in his search for a powerful centralized government, Chiang antagonized too many interest groups in China. Furthermore, his party was weakened in the war against Japan. Meanwhile, the Communists told different groups, such as peasants, exactly what they wanted to hear, and cloaked themselves in the cover of Chinese nationalism. Following the war, the United States encouraged peace talks between Chiang and Communist leader Mao Zedong in Chongqing. Due to concerns about widespread and well-documented corruption in Chiang's government throughout his rule, the U.S. government limited aid to Chiang for much of the period of 1946 to 1948, in the midst of fighting against the People's Liberation Army led by Mao Zedong. Alleged infiltration of the U.S. government by Chinese Communist agents may have also played a role in the suspension of American aid. Chiang's right-hand man, the secret police chief Dai Li, was both anti-American and anti-Communist.
Dai ordered Kuomintang agents to spy on American officers. Earlier, Dai had been involved with the Blue Shirts Society, a fascist-inspired paramilitary group within the Kuomintang, which wanted to expel Western and Japanese imperialists, crush the Communists, and eliminate feudalism. Dai Li died in a plane crash; some suspected an assassination orchestrated by Chiang, while others rumoured that the American Office of Strategic Services had arranged it, because the crash occurred on an American plane and Dai was known for his anti-Americanism. Although Chiang had achieved status abroad as a world leader, his government deteriorated as the result of corruption and inflation. In his diary in June 1948, Chiang wrote that the KMT had failed, not because of external enemies but because of rot from within. The war had severely weakened the Nationalists, while the Communists were strengthened by their popular land-reform policies and by a rural population that supported and trusted them. The Nationalists initially had superiority in arms and men, but their lack of popularity, infiltration by Communist agents, low morale, and disorganization soon allowed the Communists to gain the upper hand in the civil war.

Competition with Li Zongren

A new Constitution was promulgated in 1947, and Chiang was elected by the National Assembly as the first-term President of the Republic of China on 20 May 1948. This marked the beginning of what was termed the "democratic constitutional government" period by the KMT political orthodoxy, but the Communists refused to recognize the new Constitution, and its government, as legitimate. Chiang resigned as President on 21 January 1949, as KMT forces suffered terrible losses and defections to the Communists. After Chiang's resignation, the vice-president of the ROC, Li Zongren, became China's acting president.
Shortly after Chiang's resignation the Communists halted their advances and attempted to negotiate the virtual surrender of the ROC. Li attempted to negotiate milder terms that would have ended the civil war, but without success. When it became clear that Li was unlikely to accept Mao's terms, the Communists issued an ultimatum in April 1949, warning that they would resume their attacks if Li did not agree within five days. Li refused. Li's attempts to carry out his policies faced varying degrees of opposition from Chiang's supporters, and were generally unsuccessful. Chiang especially antagonized Li by taking possession of (and moving to Taiwan) US$200 million of gold and US dollars belonging to the central government that Li desperately needed to cover the government's soaring expenses. When the Communists captured the Nationalist capital of Nanjing in April 1949, Li refused to accompany the central government as it fled to Guangdong, instead expressing his dissatisfaction with Chiang by retiring to Guangxi. The former warlord Yan Xishan, who had fled to Nanjing only one month before, quickly insinuated himself into the Li-Chiang rivalry, attempting to have Li and Chiang reconcile their differences in the effort to resist the Communists. At Chiang's request Yan visited Li to convince Li not to withdraw from public life. Yan broke down in tears while talking of the loss of his home province of Shanxi to the Communists, and warned Li that the Nationalist cause was doomed unless Li went to Guangdong. Li agreed to return under the condition that Chiang surrender most of the gold and US dollars in his possession that belonged to the central government, and that Chiang stop overriding Li's authority. After Yan communicated these demands and Chiang agreed to comply with them, Li departed for Guangdong. In Guangdong, Li attempted to create a new government composed of both Chiang supporters and those opposed to Chiang.
Li's first choice of premier was Chu Cheng, a veteran member of the Kuomintang who had been virtually driven into exile due to his strong opposition to Chiang. After the Legislative Yuan rejected Chu, Li was obliged to choose Yan Xishan instead. By this time Yan was well known for his adaptability and Chiang welcomed his appointment. Conflict between Chiang and Li persisted. Although he had agreed to do so as a prerequisite of Li's return, Chiang refused to surrender more than a fraction of the wealth that he had sent to Taiwan. Without being backed by gold or foreign currency, the money issued by Li and Yan quickly declined in value until it became virtually worthless. Although he did not hold a formal executive position in the government, Chiang continued to issue orders to the army, and many officers continued to obey Chiang rather than Li. The inability of Li to coordinate KMT military forces led him to put into effect a plan of defense that he had contemplated in 1948. Instead of attempting to defend all of southern China, Li ordered what remained of the Nationalist armies to withdraw to Guangxi and Guangdong, hoping that he could concentrate all available defenses on this smaller, and more easily defensible, area. The object of Li's strategy was to maintain a foothold on the Chinese mainland in the hope that the United States would eventually be compelled to enter the war in China on the Nationalist side. Final Communist advance Chiang opposed Li's plan of defense because it would have placed most of the troops still loyal to Chiang under the control of Li and Chiang's other opponents in the central government. To overcome Chiang's intransigence Li began ousting Chiang's supporters within the central government. Yan Xishan continued in his attempts to work with both sides, creating the impression among Li's supporters that he was a "stooge" of Chiang, while those who supported Chiang began to bitterly resent Yan for his willingness to work with Li. 
Because of the rivalry between Chiang and Li, Chiang refused to allow Nationalist troops loyal to him to aid in the defense of Kwangsi and Canton, with the result that Communist forces occupied Canton in October 1949. After Canton fell to the Communists, Chiang relocated the government to Chongqing, while Li effectively surrendered his powers and flew to New York for treatment of his chronic duodenal illness at the Hospital of Columbia University. Li visited the President of the United States, Harry S. Truman, and denounced Chiang as a dictator and a usurper. Li vowed that he would "return to crush" Chiang once he returned to China. Li remained in exile, and did not return to Taiwan. In the early morning of 10 December 1949, Communist troops laid siege to Chengdu, the last KMT-controlled city in mainland China, where Chiang Kai-shek and his son Chiang Ching-kuo directed the defense at the Chengdu Central Military Academy. Flying out of Chengdu Fenghuangshan Airport, Chiang Kai-shek and his son were evacuated to Taiwan via Guangdong aboard an aircraft called May-ling, arriving the same day. Chiang Kai-shek would never return to the mainland. Chiang did not re-assume the presidency until 1 March 1950. In January 1952, Chiang directed the Control Yuan, now in Taiwan, to impeach Li in the "Case of Li Zongren's Failure to Carry out Duties due to Illegal Conduct" (李宗仁違法失職案). Chiang relieved Li of the position of vice-president in the National Assembly in March 1954.

On Taiwan

Preparations to retake the mainland

Chiang moved the government to Taipei, Taiwan, where he resumed his duties as President of the Republic of China on 1 March 1950. Chiang was reelected by the National Assembly to be the President of the Republic of China (ROC) on 20 May 1954, and again in 1960, 1966, and 1972.
He continued to claim sovereignty over all of China, including the territories held by his government and the People's Republic, as well as territory the latter ceded to foreign governments, such as Tuva and Outer Mongolia. In the context of the Cold War, most of the Western world recognized this position, and the ROC represented China in the United Nations and other international organizations until the 1970s. During his presidency on Taiwan, Chiang continued making preparations to take back mainland China. He developed the ROC army to prepare for an invasion of the mainland, and to defend Taiwan in case of an attack by the Communist forces. He also financed armed groups in mainland China, such as Muslim soldiers of the ROC Army left in Yunnan under Li Mi, who continued to fight. It was not until the 1980s that these troops were finally airlifted to Taiwan. He promoted the Uyghur leader Yulbars Khan to governor for resisting the Communists during the Islamic insurgency on the mainland, even though the government had already evacuated to Taiwan. He planned an invasion of the mainland in 1962. In the 1950s, Chiang's airplanes dropped supplies to Kuomintang Muslim insurgents in Amdo.

Regime

Despite the democratic constitution, the government under Chiang was a one-party state consisting almost completely of mainlanders; the "Temporary Provisions Effective During the Period of Communist Rebellion" greatly enhanced executive powers, and the goal of retaking mainland China allowed the KMT to maintain a monopoly on power and the prohibition of opposition parties. The government's official line for these martial law provisions stemmed from the claim that emergency provisions were necessary, since the Communists and KMT were still in a state of war. Seeking to promote Chinese nationalism, Chiang's government actively ignored and suppressed local cultural expression, even forbidding the use of local languages in mass media broadcasts or during class sessions.
In the wake of Taiwan's anti-government uprising of 1947, known as the February 28 Incident, KMT-led political repression resulted in the death or disappearance of over 30,000 Taiwanese intellectuals, activists, and people suspected of opposition to the KMT. The first decades after the Nationalists moved the seat of government to the province of Taiwan are associated with the organized effort to resist Communism known as the "White Terror", during which about 140,000 Taiwanese were imprisoned for their real or perceived opposition to the Kuomintang. Most of those prosecuted were labeled by the Kuomintang as "bandit spies" (匪諜), meaning spies for the Chinese Communists, and punished as such. Under Chiang, the government recognized limited civil liberties, economic freedoms, property rights (personal and intellectual) and other liberties. Despite these restrictions, free debate within the confines of the legislature was permitted. Under the pretext that new elections could not be held in Communist-occupied constituencies, the members of the National Assembly, Legislative Yuan, and Control Yuan held their posts indefinitely. The Temporary Provisions also allowed Chiang to remain president beyond the two-term limit in the Constitution. He was reelected by the National Assembly as president four times: in 1954, 1960, 1966, and 1972. Believing that corruption and a lack of morals were key reasons that the KMT lost mainland China to the Communists, Chiang attempted to purge corruption by dismissing members of the KMT accused of graft. Some major figures in the previous mainland Chinese government, such as Chiang's brothers-in-law H. H. Kung and T. V. Soong, exiled themselves to the United States. Although politically authoritarian and, to some extent, dominated by government-owned industries, Chiang's new Taiwanese state also encouraged economic development, especially in the export sector.
A popular, sweeping Land Reform Act, as well as American foreign aid during the 1950s, laid the foundation for Taiwan's economic success as one of the Four Asian Tigers. Chiang personally had the power to review the rulings of all military tribunals, which during the martial law period tried civilians as well. In 1950, Lin Pang-chun and two other men were arrested on charges of financial crimes and sentenced to 3–10 years in prison; Chiang reviewed the sentences of all three and ordered them executed instead. In 1954, the Changhua monk Kao Chih-te and two others were sentenced to 12 years in prison for providing aid to accused communists; Chiang sentenced them to death after reviewing the case. This control over the decisions of military tribunals violated the ROC constitution. After Chiang's death, the next president, his son Chiang Ching-kuo, and Chiang Ching-kuo's successor, Lee Teng-hui, a native Taiwanese, would in the 1980s and 1990s increase native Taiwanese representation in the government and loosen the many authoritarian controls of the early era of ROC control in Taiwan.

Relationship with Japan

In 1971, the Australian Opposition Leader Gough Whitlam, who became Prime Minister in 1972 and swiftly relocated the Australian mission from Taipei to Beijing, visited Japan. After meeting with the Japanese Prime Minister, Eisaku Sato, Whitlam observed that the reason Japan at that time was hesitant to withdraw recognition from the Nationalist government was "the presence of a treaty between the Japanese government and that of Chiang Kai-shek". Sato explained that Japan's continued recognition of the Nationalist government was due largely to the personal relationship that various members of the Japanese government felt towards Chiang.
This relationship was rooted largely in the generous and lenient treatment of Japanese prisoners of war by the Nationalist government in the years immediately following the Japanese surrender in 1945, and was felt especially strongly as a bond of personal obligation by the most senior members then in power. Although Japan recognized the People's Republic in 1972, shortly after Kakuei Tanaka succeeded Sato as Prime Minister of Japan, the memory of this relationship was strong enough to be reported by The New York Times (15 April 1978) as a significant factor inhibiting trade between Japan and the mainland. There is speculation that a clash between Communist forces and a Japanese warship in 1978 was caused by Chinese anger after Prime Minister Takeo Fukuda attended Chiang's funeral. Historically, Japanese attempts to normalize their relationship with the People's Republic were met with accusations of ingratitude in Taiwan.

Relationship with the United States

Chiang was suspicious that covert operatives of the United States were plotting a coup against him. In 1950, Chiang Ching-kuo became director of the secret police (Bureau of Investigation and Statistics), a position he retained until 1965. Chiang was also suspicious of politicians who were overly friendly to the United States, and considered them his enemies. In 1953, seven days after surviving an assassination attempt, Wu Kuo-chen lost his position as governor of Taiwan Province to Chiang Ching-kuo. After fleeing to the United States the same year, he became a vocal critic of Chiang's family and government. Chiang Ching-kuo, educated in the Soviet Union, initiated Soviet-style military organization in the Republic of China Military. He reorganized and Sovietized the political officer corps and propagated Kuomintang ideology throughout the military. Sun Li-jen, who was educated at the American Virginia Military Institute, was opposed to this.
Chiang Ching-kuo orchestrated the controversial court-martial and arrest of General Sun Li-jen in August 1955, for plotting a coup d'état with the American Central Intelligence Agency (CIA) against his father Chiang Kai-shek and the Kuomintang. The CIA allegedly wanted to help Sun take control of Taiwan and declare its independence.

Death

In 1975, 26 years after Chiang came to Taiwan, he died in Taipei at the age of 87. He had suffered a heart attack and pneumonia in the preceding months and died from renal failure aggravated by advanced cardiac failure on 5 April. Chiang's funeral was held on 16 April. A month of mourning was declared. The Chinese composer Hwang Yau-tai wrote the "Chiang Kai-shek Memorial Song". In mainland China, however, Chiang's death was met with little apparent mourning, and Communist state-run newspapers gave the brief headline "Chiang Kai-shek Has Died". Chiang's body was put in a copper coffin and temporarily interred at his favorite residence in Cihu, Daxi, Taoyuan. His funeral was attended by dignitaries from many nations, including American Vice President Nelson Rockefeller, South Korean Prime Minister Kim Jong-pil and two former Japanese prime ministers: Nobusuke Kishi and Eisaku Sato. A memorial day (蔣公逝世紀念日) was established on 5 April. The memorial day was disestablished in 2007. When his son Chiang Ching-kuo died in 1988, he was entombed in a separate mausoleum in nearby Touliao (頭寮). The hope was that both could eventually be buried at their birthplace in Fenghua once the mainland was recovered.

In the West and in the Soviet Union, Chiang Kai-shek was known as the "Red General". Movie theaters in the Soviet Union showed newsreels and clips of Chiang. At Moscow Sun Yat-sen University, portraits of Chiang were hung on the walls; and, in the Soviet May Day Parades that year, Chiang's portrait was to be carried along with the portraits of Karl Marx, Friedrich Engels, Vladimir Lenin, Joseph Stalin, Mao Zedong, Ho Chi Minh and other Communist leaders.
The United States consulate and other Westerners in Shanghai were concerned about the approach of "Red General" Chiang as his army was seizing control of large areas of the country in the Northern Expedition.

Rule

Having gained control of China, Chiang's party remained surrounded by "surrendered" warlords who remained relatively autonomous within their own regions. On 10 October 1928, Chiang was named director of the State Council, the equivalent of President of the country, in addition to his other titles. As with his predecessor Sun Yat-sen, the Western media dubbed him "Generalissimo". According to Sun Yat-sen's plans, the Kuomintang (KMT) was to rebuild China in three steps: military rule, political tutelage, and constitutional rule. The ultimate goal of the KMT revolution was democracy, which was not considered feasible in China's fragmented state. Since the KMT had completed the first step of revolution through its seizure of power in 1928, Chiang's rule thus began a period of what his party considered to be "political tutelage" in Sun Yat-sen's name. During this so-called Republican Era, many features of a modern, functional Chinese state emerged and developed. From 1928 to 1937, a period known as the Nanjing decade, some aspects of foreign imperialism, concessions and privileges in China were moderated through diplomacy. The government acted to modernize the legal and penal systems, attempted to stabilize prices, amortize debts, reform the banking and currency systems, build railroads and highways, improve public health facilities, legislate against the traffic in narcotics, and augment industrial and agricultural production. Not all of these projects were successfully completed. Efforts were made towards improving education standards, and in an effort to unify Chinese society, the New Life Movement was launched to encourage Confucian moral values and personal discipline.
Guoyu ("national language") was promoted as a standard tongue, and the establishment of communications facilities (including radio) was used to encourage a sense of Chinese nationalism in a way that had not been possible when the nation lacked an effective central government. Any successes that the Nationalists did make, however, were met with constant political and military upheavals. While most urban areas were now under the control of the KMT, much of the countryside remained under the influence of weakened yet undefeated warlords and Communists. Chiang often resolved issues of warlord obstinacy through military action, but such action was costly in men and materiel. The 1930 Central Plains War alone nearly bankrupted the Nationalist government and caused enormous casualties on both sides. In 1931, Hu Hanmin, Chiang's old supporter, publicly voiced a popular concern that Chiang's position as both premier and president flew in the face of the democratic ideals of the Nationalist government. Chiang had Hu put under house arrest, but Hu was released after national condemnation, after which he left Nanjing and supported a rival government in Canton. The split resulted in a military conflict between Hu's Kwangtung government and Chiang's Nationalist government. Chiang won the campaign against Hu only after a shift in allegiance by Zhang Xueliang, who had previously supported Hu Hanmin. Throughout his rule, the complete eradication of the Communists remained Chiang's dream. After assembling his forces in Jiangxi, Chiang led his armies against the newly established Chinese Soviet Republic. With help from foreign military advisers, Chiang's Fifth Campaign finally surrounded the Chinese Red Army in 1934. The Communists, tipped off that a Nationalist offensive was imminent, retreated in the Long March, during which Mao Zedong rose from a mere military official to the most influential leader of the Chinese Communist Party.
Chiang, as a nationalist and a Confucianist, was against the iconoclasm of the May Fourth Movement. Motivated by his sense of nationalism, he viewed some Western ideas as foreign, and he believed that the great introduction of Western ideas and literature that the May Fourth Movement promoted was not beneficial to China. He and Dr. Sun criticized the May Fourth intellectuals for corrupting the morals of China's youth. Contrary to Communist propaganda that he was pro-capitalist, Chiang antagonized the capitalists of Shanghai, often attacking them and confiscating their capital and assets for the use of the government. Chiang confiscated the wealth of capitalists even while he denounced and fought communists. He crushed pro-communist worker and peasant organizations and rich Shanghai capitalists at the same time. Chiang continued the anti-capitalist ideology of Sun Yat-sen, directing Kuomintang media to openly attack capitalists and capitalism while demanding government-controlled industry instead. Some have classified his rule as fascist. The New Life Movement that Chiang initiated was based upon Confucianism, mixed with Christianity, nationalism and authoritarianism, and had some similarities to fascism. Frederic Wakeman suggested that the New Life Movement was "Confucian fascism". Under Chiang's rule there also existed the Blue Shirts Society, largely modelled on the Blackshirts of the National Fascist Party and the Sturmabteilung of the NSDAP. Its ideology was to expel foreign (Japanese and Western) imperialists from China and to crush communism. Close Sino-German ties also promoted cooperation between the Kuomintang and the Nazi Party (NSDAP). Chiang has often been interpreted as pro-capitalist, but this conclusion is problematic. Shanghai capitalists did briefly support him out of fear of communism in 1927, but this support eroded in 1928 when Chiang turned his tactics of intimidation on them.
The relationship between Chiang Kai-shek and Chinese capitalists remained poor throughout the period of his administration. Chiang blocked Chinese capitalists from gaining any political power or voice within his regime. Once Chiang Kai-shek was done with his White Terror on pro-communist laborers, he proceeded to turn on the capitalists. Gangster connections allowed Chiang to attack them in the International Settlement, successfully forcing capitalists to back him with their assets for his military expeditions. Chiang viewed all of the foreign great powers with suspicion, writing in a letter that they "all have it in their minds to promote the interests of their own respective countries at the cost of other nations" and seeing it as hypocritical for any of them to condemn each other's foreign policy. He used diplomatic persuasion on the United States, Germany, and the Soviet Union to regain lost Chinese territories, as he viewed all foreign powers as imperialists attempting to curtail and suppress China's power and national resurrection.

Mass deaths under Nationalist rule

Some sources blame Chiang Kai-shek for millions of deaths in scattered events caused by the Nationalist Government of China. Rudolph Rummel, however, puts some of the responsibility on the Nationalist regime as a whole rather than on Chiang Kai-shek in particular. Rummel writes that from its founding down to its defeat in 1949, the Nationalist government was probably responsible for between roughly 6 million and 18.5 million deaths. The major causes include: thousands of communists and communist sympathizers killed during and in the year after the 1927 Shanghai massacre. In 1938, to stop the Japanese advance, Chiang ordered the Yellow River dikes to be breached. An official postwar commission estimated that the total number of those who perished from malnutrition, famine, disease, or drowning might be as high as 800,000.
In 1943, 1.75 to 2.5 million Henan civilians starved to death because grain was confiscated and sold for the profit of Nationalist government officials. A further 4,212,000 Chinese perished during the Second Sino-Japanese War and the Civil War, starving to death or dying from disease during conscription campaigns.

First phase of the Chinese Civil War

In Nanjing, in April 1931, Chiang Kai-shek attended a national leadership conference with Zhang Xueliang and General Ma Fuxiang, at which Chiang and Zhang firmly maintained that Manchuria was part of China in the face of the Japanese invasion. After the Japanese invasion of Manchuria in 1931, Chiang resigned as Chairman of the National Government. He returned shortly afterwards, adopting the slogan "first internal pacification, then external resistance". However, this policy of avoiding a frontal war against the Japanese was widely unpopular. In 1932, while Chiang was seeking first to defeat the Communists, Japan launched an advance on Shanghai and bombarded Nanjing. This disrupted Chiang's offensives against the Communists for a time, although it was the northern factions of Hu Hanmin's Kwangtung government (notably the 19th Route Army) that primarily led the offensive against the Japanese during this skirmish. Although the 19th Route Army was brought into the Nationalist army immediately after the battle, its career under Chiang was cut short when it was disbanded for demonstrating socialist tendencies. In December 1936, Chiang flew to Xi'an to coordinate a major assault on the Red Army and the Communist Republic that had retreated into Yan'an. However, Chiang's allied commander Zhang Xueliang, whose forces were used in his attack and whose homeland of Manchuria had recently been invaded by the Japanese, did not support the attack on the Communists. On 12 December, Zhang and several other Nationalist generals, headed by Yang Hucheng of Shaanxi, kidnapped Chiang for two weeks in what is known as the Xi'an Incident.
They forced Chiang into making a "Second United Front" with the Communists against Japan. After releasing Chiang and returning to Nanjing with him, Zhang was placed under house arrest and the generals who had assisted him were executed. Chiang's commitment to the Second United Front was nominal at best, and it was all but broken up in 1941.

Second Sino-Japanese War

The Second Sino-Japanese War broke out in July 1937, and in August of that year Chiang sent his best-trained and best-equipped soldiers to defend Shanghai. With over 200,000 Chinese casualties, Chiang lost the political cream of his Whampoa-trained officers. Although Chiang lost militarily, the battle dispelled Japanese claims that Japan could conquer China in three months and demonstrated to the Western powers that the Chinese would continue the fight. By December, the capital city of Nanjing had fallen to the Japanese, resulting in the Nanking massacre. Chiang moved the government inland, first to Wuhan and later to Chongqing. Having lost most of China's economic and industrial centers, Chiang withdrew into the hinterlands, stretching the Japanese supply lines and bogging down Japanese soldiers in the vast Chinese interior. As part of a policy of protracted resistance, Chiang authorized the use of scorched-earth tactics, resulting in many civilian deaths. During the Nationalists' retreat from Zhengzhou, the dams around the city were deliberately destroyed by the Nationalist army to delay the Japanese advance, killing 500,000 people in the subsequent 1938 Yellow River flood. After heavy fighting, the Japanese occupied Wuhan in the fall of 1938 and the Nationalists retreated farther inland, to Chongqing. While en route to Chongqing, the Nationalist army intentionally started the "fire of Changsha" as part of the scorched-earth policy. The fire destroyed much of the city, killed twenty thousand civilians, and left hundreds of thousands of people homeless.
Due to an organizational error, it was claimed, the fire was started without any warning to the residents of the city. The Nationalists eventually blamed three local commanders for the fire and executed them. Newspapers across China blamed the fire on (non-KMT) arsonists, but the blaze contributed to a nationwide loss of support for the KMT. In 1939 the Muslim leaders Isa Yusuf Alptekin and Ma Fuliang were sent by Chiang to several Middle Eastern countries, including Egypt, Turkey, and Syria, to gain support for China's war against Japan and to express his support for Muslims. The Japanese, controlling the puppet state of Manchukuo and much of China's eastern seaboard, appointed Wang Jingwei as a quisling ruler of the occupied Chinese territories around Nanjing. Wang named himself President of the Executive Yuan and Chairman of the National Government (not the same "National Government" as Chiang's), and led a surprisingly large minority of anti-Chiang, anti-Communist Chinese against his old comrades. He died in 1944, within a year of the end of World War II. The Hui Muslim Xidaotang sect pledged allegiance to the Kuomintang after the party's rise to power, and the Hui Muslim General Bai Chongxi acquainted Chiang Kai-shek with the Xidaotang jiaozhu Ma Mingren in 1941 in Chongqing. In 1942 Chiang toured northwestern China, visiting Xinjiang, Gansu, Ningxia, Shaanxi, and Qinghai, where he met both Muslim Generals Ma Buqing and Ma Bufang. He also met the Muslim Generals Ma Hongbin and Ma Hongkui separately. A border crisis erupted with Tibet in 1942. Under orders from Chiang, Ma Bufang repaired Yushu airport to prevent Tibetan separatists from seeking independence. Chiang also ordered Ma Bufang to put his Muslim soldiers on alert for an invasion of Tibet in 1942. Ma Bufang complied and moved several thousand troops to the border with Tibet. Chiang also threatened the Tibetans with aerial bombardment if they worked with the Japanese.
Ma Bufang attacked the Tibetan Buddhist Tsang monastery in 1941. He also repeatedly attacked the Labrang Monastery. With the attack on Pearl Harbor and the opening of the Pacific War, China became one of the Allied Powers. During and after World War II, Chiang and his American-educated wife Soong Mei-ling, known in the United States as "Madame Chiang", held the support of the China Lobby in the United States, which saw in them the hope of a Christian and democratic China. Chiang was even named the Supreme Commander of Allied forces in the China war zone. He was appointed Knight Grand Cross of the Order of the Bath in 1942. General Joseph Stilwell, an American military adviser to Chiang during World War II, strongly criticized Chiang and his generals for what he saw as their incompetence and corruption. In 1944, the United States Army Air Forces commenced Operation Matterhorn to bomb Japan's steel industry from bases to be constructed in mainland China. This was meant to fulfill President Roosevelt's promise to Chiang Kai-shek to begin bombing operations against Japan by November 1944. However, Chiang Kai-shek's subordinates refused to take airbase construction seriously until enough capital had been delivered to permit embezzlement on a massive scale. Stilwell estimated that at least half of the $100 million spent on construction of airbases was embezzled by Nationalist party officials. Chiang played the Soviets and Americans against each other during the war. He first told the Americans that they would be welcome in talks between the Soviet Union and China, then secretly told the Soviets that the Americans were unimportant and that their opinions would not be considered. Chiang also used American support and military power in China against the ambitions of the Soviet Union to dominate the talks, stopping the Soviets from taking full advantage of the situation in China with the threat of American military action.

French Indochina
President Franklin D. Roosevelt, through General Stilwell, privately made it clear that he preferred that the French not reacquire French Indochina (modern-day Vietnam, Cambodia and Laos) after the war was over. Roosevelt offered Chiang control of all of Indochina. It was said that Chiang replied: "Under no circumstances!" After the war, 200,000 Chinese troops under General Lu Han were sent by Chiang Kai-shek to northern Indochina (north of the 16th parallel) to accept the surrender of Japanese occupying forces there, and they remained in Indochina until 1946, when the French returned. The Chinese used the VNQDD, the Vietnamese branch of the Chinese Kuomintang, to increase their influence in Indochina and to put pressure on their opponents. Chiang Kai-shek threatened the French with war in response to maneuvering by the French and Ho Chi Minh's forces against each other, forcing them to come to a peace agreement. In February 1946 he also forced the French to surrender all of their concessions in China and to renounce their extraterritorial privileges in exchange for the Chinese withdrawing from northern Indochina and allowing French troops to reoccupy the region. Following France's agreement to these demands, the withdrawal of Chinese troops began in March 1946.

Ryukyus

During the Cairo Conference in 1943, Chiang said that Roosevelt asked him whether China would like to claim the Ryukyu Islands from Japan in addition to retaking Taiwan, the Pescadores, and Manchuria. Chiang claimed that he said he was in favor of an international presence on the islands. However, the U.S. occupied the Ryukyus from 1945 until 1972, when Prime Minister Sato's negotiations with Nixon produced the Okinawa reversion agreement and the return of Okinawa to Japan.
Second phase of the Chinese Civil War

Treatment and use of Japanese soldiers

In 1945, when Japan surrendered, Chiang's Chongqing government was ill-equipped and ill-prepared to reassert its authority in formerly Japanese-occupied China, and it asked the Japanese to postpone their surrender until Kuomintang (KMT) authority could arrive to take over. American troops and weapons soon bolstered KMT forces, allowing them to reclaim cities. The countryside, however, remained largely under Communist control. For over a year after the Japanese surrender, rumors circulated throughout China that the Japanese had entered into a secret agreement with Chiang, in which the Japanese would assist the Nationalists in fighting the Communists in exchange for the protection of Japanese persons and property there. Many top Nationalist generals, including Chiang, had studied and trained in Japan before the Nationalists returned to the mainland in the 1920s, and maintained close personal friendships with top Japanese officers. The Japanese general in charge of all forces in China, General Yasuji Okamura, had personally trained officers who later became generals in Chiang's staff. Reportedly, General Okamura, before surrendering command of all Japanese military forces in Nanjing, offered Chiang control of all 1.5 million Japanese military and civilian support staff then present in China. Reportedly, Chiang seriously considered accepting this offer, but declined only in the knowledge that the United States would certainly be outraged by the gesture. Even so, armed Japanese troops remained in China well into 1947, with some noncommissioned officers finding their way into the Nationalist officer corps. That the Japanese in China came to regard Chiang as a magnanimous figure to whom many Japanese owed their lives and livelihoods was attested by both Nationalist and Communist sources.
Conditions during the Chinese Civil War

Westad says the Communists won the Civil War because they made fewer military mistakes than Chiang Kai-shek, and because in his search for a powerful centralized government, Chiang antagonized too many interest groups in China. Furthermore, his party was weakened in the war against Japan. Meanwhile, the Communists told different groups, such as peasants, exactly what they wanted to hear, and cloaked themselves in the cover of Chinese nationalism. Following the war, the United States encouraged peace talks between Chiang and the Communist leader Mao Zedong in Chongqing. Due to concerns about widespread and well-documented corruption in Chiang's government throughout his rule, the U.S. government limited aid to Chiang for much of the period from 1946 to 1948, in the midst of his fighting against the People's Liberation Army led by Mao Zedong. Alleged infiltration of the U.S. government by Chinese Communist agents may also have played a role in the suspension of American aid. Chiang's right-hand man, the secret police chief Dai Li, was both anti-American and anti-Communist. Dai ordered Kuomintang agents to spy on American officers. Earlier, Dai had been involved with the Blue Shirts Society, a fascist-inspired paramilitary group within the Kuomintang that wanted to expel Western and Japanese imperialists, crush the Communists, and eliminate feudalism. Dai Li died in a plane crash; some suspect an assassination orchestrated by Chiang, while others rumored that it had been arranged by the American Office of Strategic Services, because of Dai's anti-Americanism and because the crash occurred on an American plane. Although Chiang had achieved status abroad as a world leader, his government deteriorated as the result of corruption and inflation. In his diary in June 1948, Chiang wrote that the KMT had failed, not because of external enemies but because of rot from within.
The war had severely weakened the Nationalists, while the Communists were strengthened by their popular land-reform policies and by a rural population that supported and trusted them. The Nationalists initially had superiority in arms and men, but their lack of popularity, infiltration by Communist agents, low morale, and disorganization soon allowed the Communists to gain the upper hand in the civil war.

Competition with Li Zongren

A new Constitution was promulgated in 1947, and Chiang was elected by the National Assembly as the first-term President of the Republic of China on 20 May 1948. This marked the beginning of what was termed the "democratic constitutional government" period by the KMT political orthodoxy, but the Communists refused to recognize the new Constitution, and its government, as legitimate. Chiang resigned as President on 21 January 1949, as KMT forces suffered terrible losses and defections to the Communists. After Chiang's resignation, the vice-president of the ROC, Li Zongren, became China's acting president. Shortly after Chiang's resignation, the Communists halted their advances and attempted to negotiate the virtual surrender of the ROC. Li attempted to negotiate milder terms that would have ended the civil war, but without success. When it became clear that Li was unlikely to accept Mao's terms, the Communists issued an ultimatum in April 1949, warning that they would resume their attacks if Li did not agree within five days. Li refused. Li's attempts to carry out his policies faced varying degrees of opposition from Chiang's supporters, and were generally unsuccessful. Chiang especially antagonized Li by taking possession of (and moving to Taiwan) US$200 million of gold and US dollars belonging to the central government that Li desperately needed to cover the government's soaring expenses.
When the Communists captured the Nationalist capital of Nanjing in April 1949, Li refused to accompany the central government as it fled to Guangdong, instead expressing his dissatisfaction with Chiang by retiring to Guangxi. The former warlord Yan Xishan, who had fled to Nanjing only one month before, quickly insinuated himself into the Li-Chiang rivalry, attempting to have Li and Chiang reconcile their differences in an effort to resist the Communists. At Chiang's request, Yan visited Li to convince him not to withdraw from public life. Yan broke down in tears while talking of the loss of his home province of Shanxi to the Communists, and warned Li that the Nationalist cause was doomed unless Li went to Guangdong. Li agreed to return on the condition that Chiang surrender most of the gold and US dollars in his possession that belonged to the central government, and that Chiang stop overriding Li's authority. After Yan communicated these demands and Chiang agreed to comply with them, Li departed for Guangdong. In Guangdong, Li attempted to create a new government composed of both Chiang supporters and those opposed to Chiang. Li's first choice of premier was Chu Cheng, a veteran member of the Kuomintang who had been virtually driven into exile due to his strong opposition to Chiang. After the Legislative Yuan rejected Chu, Li was obliged to choose Yan Xishan instead. By this time Yan was well known for his adaptability, and Chiang welcomed his appointment. Conflict between Chiang and Li persisted. Although he had agreed to do so as a prerequisite of Li's return, Chiang refused to surrender more than a fraction of the wealth that he had sent to Taiwan. Without being backed by gold or foreign currency, the money issued by Li and Yan quickly declined in value until it became virtually worthless.
Although he did not hold a formal executive position in the government, Chiang continued to issue orders to the army, and many officers continued to obey Chiang rather than Li. The inability of Li to coordinate KMT military forces led him to put into effect a plan of defense that he had contemplated in 1948. Instead of attempting to defend all of southern China, Li ordered what remained of the Nationalist armies to withdraw to Guangxi and Guangdong, hoping that he could concentrate all available defenses on this smaller and more easily defensible area. The object of Li's strategy was to maintain a foothold on the Chinese mainland in the hope that the United States would eventually be compelled to enter the war in China on the Nationalist side.

Final Communist advance

Chiang opposed Li's plan of defense because it would have placed most of the troops still loyal to Chiang under the control of Li and Chiang's other opponents in the central government. To overcome Chiang's intransigence, Li began ousting Chiang's supporters from the central government. Yan Xishan continued in his attempts to work with both sides, creating the impression among Li's supporters that he was a "stooge" of Chiang, while those who supported Chiang began to bitterly resent Yan for his willingness to work with Li. Because of the rivalry between Chiang and Li, Chiang refused to allow Nationalist troops loyal to him to aid in the defense of Kwangsi and Canton, with the result that Communist forces occupied Canton in October 1949. After Canton fell to the Communists, Chiang relocated the government to Chongqing, while Li effectively surrendered his powers and flew to New York for treatment of his chronic duodenal illness at the Hospital of Columbia University. Li visited the President of the United States, Harry S. Truman, and denounced Chiang as a dictator and a usurper. Li vowed that he would "return to crush" Chiang once he returned to China.
Li remained in exile and did not return to Taiwan. In the early morning of 10 December 1949, Communist troops laid siege to Chengdu, the last KMT-controlled city in mainland China, where Chiang Kai-shek and his son Chiang Ching-kuo directed the defense at the Chengdu Central Military Academy. Flying out of Chengdu Fenghuangshan Airport, father and son were evacuated to Taiwan via Guangdong on an aircraft called May-ling, arriving the same day. Chiang Kai-shek would never return to the mainland. Chiang did not re-assume the presidency until 1 March 1950. In January 1952, Chiang directed the Control Yuan, now in Taiwan, to impeach Li in the "Case of Li Zongren's Failure to Carry Out Duties due to Illegal Conduct" (李宗仁違法失職案). Chiang relieved Li of the position of vice-president in the National Assembly in March 1954.

On Taiwan

Preparations to retake the mainland

Chiang moved the government to Taipei, Taiwan, where he resumed his duties as President of the Republic of China on 1 March 1950. Chiang was reelected by the National Assembly to be the President of the Republic of China (ROC) on 20 May 1954, and again in 1960, 1966, and 1972. He continued to claim sovereignty over all of China, including the territories held by his government and the People's Republic, as well as territory the latter ceded to foreign governments, such as Tuva and Outer Mongolia. In the context of the Cold War, most of the Western world recognized this position, and the ROC represented China in the United Nations and other international organizations until the 1970s. During his presidency on Taiwan, Chiang continued making preparations to take back mainland China. He developed the ROC army to prepare for an invasion of the mainland and to defend Taiwan in case of an attack by the Communist forces. He also financed armed groups in mainland China, such as Muslim soldiers of the ROC Army left in Yunnan under Li Mi, who continued to fight.
It was not until the 1980s that these troops were finally airlifted to Taiwan. He promoted the Uyghur Yulbars Khan to Governor for resisting the Communists during the Islamic insurgency on the mainland, even though the government had already evacuated to Taiwan. He planned an invasion of the mainland in 1962. In the 1950s Chiang's airplanes dropped supplies to Kuomintang Muslim insurgents in Amdo.

Regime

Despite the democratic constitution, the government under Chiang was a one-party state consisting almost completely of mainlanders; the "Temporary Provisions Effective During the Period of Communist Rebellion" greatly enhanced executive powers, and the goal of retaking mainland China allowed the KMT to maintain a monopoly on power and to prohibit opposition parties. The government's official justification for these martial-law provisions was the claim that emergency measures were necessary, since the Communists and the KMT were still in a state of war. Seeking to promote Chinese nationalism, Chiang's government actively ignored and suppressed local cultural expression, even forbidding the use of local languages in mass media broadcasts or during class sessions. As a result of Taiwan's anti-government uprising in 1947, known as the February 28 incident, KMT-led political repression resulted in the death or disappearance of over 30,000 Taiwanese intellectuals, activists, and people suspected of opposition to the KMT. The first decades after the Nationalists moved the seat of government to the province of Taiwan are associated with the organized effort to resist Communism known as the "White Terror", during which about 140,000 Taiwanese were imprisoned for their real or perceived opposition to the Kuomintang. Most of those prosecuted were labeled by the Kuomintang as "bandit spies" (匪諜), meaning spies for the Chinese Communists, and punished as such.
Under Chiang, the government recognized limited civil liberties, economic freedoms, property rights (personal and intellectual) and other liberties. Despite these restrictions, free debate within the confines of the legislature was permitted. Under the pretext that new elections could not be held in Communist-occupied constituencies, the National Assembly, Legislative Yuan, and Control Yuan members held their posts indefinitely. The Temporary Provisions also allowed Chiang to remain as president beyond the two-term limit in the Constitution. He was reelected by the National Assembly as president four times—doing so in 1954, 1960, 1966, and 1972. Believing that corruption and a lack of morals were key reasons that the KMT lost mainland China to the Communists, Chiang attempted to purge corruption by dismissing members of the KMT accused of graft. Some major figures in the previous mainland Chinese government, such as Chiang's brothers-in-law H. H. Kung and T. V. Soong, exiled themselves to the United States. Although politically authoritarian and, to some extent, dominated by government-owned industries, Chiang's new Taiwanese state also encouraged economic development, especially in the export sector. A popular sweeping Land Reform Act, as well as American foreign aid during the 1950s, laid the foundation for Taiwan's economic success as one of the Four Asian Tigers. Chiang personally had the power to review the rulings of all military tribunals, which during the martial law period tried civilians as well. In 1950 Lin Pang-chun and two other men were arrested on charges of financial crimes and sentenced to 3–10 years in prison. Chiang reviewed the sentences of all three and ordered them executed instead. In 1954 Changhua monk Kao Chih-te and two others were sentenced to 12 years in prison for providing aid to accused communists; Chiang sentenced them to death after reviewing the case.
This control over the decision of military tribunals violated the ROC constitution. After Chiang's death, the next president, his son, Chiang Ching-kuo, and Chiang Ching-kuo's successor, Lee Teng-hui, a native Taiwanese, would in the 1980s and 1990s increase native Taiwanese representation in the government and loosen the many authoritarian controls of the early era of ROC control in Taiwan. Relationship with Japan In 1971, the Australian Opposition Leader Gough Whitlam, who became Prime Minister in 1972 and swiftly relocated the Australian mission from Taipei to Beijing, visited Japan. After meeting with the Japanese Prime Minister, Eisaku Sato, Whitlam observed that the reason Japan at that time was hesitant to withdraw recognition from the Nationalist government was "the presence of a treaty between the Japanese government and that of Chiang Kai-shek". Sato explained that the continued recognition of Japan towards the Nationalist government was due largely to the personal relationship that various members of the Japanese government felt towards Chiang. This relationship was rooted largely in the generous and lenient treatment of Japanese prisoners-of-war by the Nationalist government in the years immediately following the Japanese surrender in 1945, and was felt especially strongly as a bond of personal obligation by the most senior members then in power. Although Japan recognized the People's Republic in 1972, shortly after Kakuei Tanaka succeeded Sato as Prime Minister of Japan, the memory of this relationship was strong enough to be reported by The New York Times (15 April 1978) as a significant factor inhibiting trade between Japan and the mainland. There is speculation that a clash between Communist forces and a Japanese warship in 1978 was caused by Chinese anger after Prime Minister Takeo Fukuda attended Chiang's funeral. 
Historically, Japanese attempts to normalize their relationship with the People's Republic were met with accusations of ingratitude in Taiwan. Relationship with the United States Chiang was suspicious that covert operatives of the United States plotted a coup against him. In 1950, Chiang Ching-kuo became director of the secret police (Bureau of Investigation and Statistics), a post he held until 1965. Chiang was also suspicious of politicians who were overly friendly to the United States, and considered them his enemies. In 1953, seven days after surviving an assassination attempt, Wu Kuo-chen lost his position as governor of Taiwan Province to Chiang Ching-kuo. After fleeing to the United States the same year, he became a vocal critic of Chiang's family and government. Chiang Ching-kuo, educated in the Soviet Union, initiated Soviet-style military organization in the Republic of China Military. He reorganized and Sovietized the political officer corps, and propagated Kuomintang ideology throughout the military. Sun Li-jen, who was educated at the American Virginia Military Institute, was opposed to this. Chiang Ching-kuo orchestrated the controversial court-martial and arrest of General Sun Li-jen in August 1955, for plotting a coup d'état with the American Central Intelligence Agency (CIA) against his father Chiang Kai-shek and the Kuomintang. The CIA allegedly wanted to help Sun take control of Taiwan and declare its independence. Death In 1975, 26 years after Chiang came to Taiwan, he died in Taipei at the age of 87. He had suffered a heart attack and pneumonia in the foregoing months and died from renal failure aggravated by advanced cardiac failure on 5 April. Chiang's funeral was held on 16 April. A month of mourning was declared. Chinese music composer Hwang Yau-tai wrote the "Chiang Kai-shek Memorial Song".
In mainland China, however, Chiang's death was met with little apparent mourning and Communist state-run newspapers gave the brief headline "Chiang Kai-shek Has Died". Chiang's body was put in a copper coffin and temporarily interred at his favorite residence in Cihu, Daxi, Taoyuan. His funeral was attended by dignitaries from many nations, including American Vice President Nelson Rockefeller, South Korean Prime Minister Kim Jong-pil and two former Japanese prime ministers: Nobusuke Kishi and Eisaku Sato. A memorial day (蔣公逝世紀念日) was established on 5 April. The memorial day was disestablished in 2007. When his son Chiang Ching-kuo died in 1988, he was entombed in a separate mausoleum in nearby Touliao (頭寮). The hope was to have both buried at their birthplace in Fenghua if and when it was possible. In 2004, Chiang Fang-liang, the widow of Chiang Ching-kuo, asked that both father and son be buried at Wuzhi Mountain Military Cemetery in Xizhi, Taipei County (now New Taipei City). Chiang's ultimate funeral ceremony became a political battle between the wishes of the state and the wishes of his family. Chiang was succeeded as President by Vice President Yen Chia-kan and as Kuomintang party ruler by his son Chiang Ching-kuo, who retired Chiang Kai-shek's title of Director-General and instead assumed the position of chairman. Yen's presidency was interim; Chiang Ching-kuo, who was the Premier, became President after Yen's term ended three years later. Cult of personality Chiang's portrait hung over Tiananmen Square before Mao's portrait was set up in its place. People also put portraits of Chiang in their homes and in public on the streets. After his death, the Chiang Kai-shek Memorial Song was written in 1988 to commemorate Chiang Kai-shek. In Cihu, there are several statues of Chiang Kai-shek. Chiang was popular among many people and dressed in plain, simple clothes, unlike contemporary Chinese warlords who dressed extravagantly.
Quotes from the Quran and Hadith were used by Muslims in the Kuomintang-controlled Muslim publication, the Yuehua, to justify Chiang Kai-shek's rule over China. When the Muslim general and warlord Ma Lin was interviewed, Ma Lin was described as having "high admiration for and unwavering loyalty to Chiang Kai-shek". In the Philippines, a school was named in his honour in 1939. Today, Chiang Kai-shek College is the largest educational institution for the Chinoy community in the country. Philosophy The Kuomintang used traditional Chinese religious ceremonies, and promoted the practice of martyrdom in Chinese culture. Kuomintang ideology supported and promulgated the view that the souls of Party martyrs who died fighting for the Kuomintang, the revolution, and the party founder Dr. Sun Yat-sen were sent to heaven. Chiang Kai-shek believed that these martyrs witnessed events on Earth from heaven after their deaths. When the Northern Expedition was complete, Kuomintang generals led by Chiang Kai-shek paid tribute to Dr. Sun's soul in heaven with a sacrificial
Conversely, directly injected engines can run higher boost because heated air will not detonate without a fuel being present. Higher compression ratios can make gasoline (petrol) engines subject to engine knocking (also known as "detonation", "pre-ignition" or "pinging") if lower octane-rated fuel is used. This can reduce efficiency or damage the engine if knock sensors are not present to modify the ignition timing. Diesel engines Diesel engines use higher compression ratios than petrol engines, because the lack of a spark plug means that the compression ratio must increase the temperature of the air in the cylinder sufficiently to ignite the diesel using compression ignition. Compression ratios are often between 14:1 and 23:1 for direct injection diesel engines, and between 18:1 and 23:1 for indirect injection diesel engines. Other fuels The compression ratio may be higher in engines running exclusively on liquefied petroleum gas (LPG or "propane autogas") or compressed natural gas, due to the higher octane rating of these fuels. Kerosene engines typically use a compression ratio of 6.5 or lower. The petrol-paraffin engine version of the Ferguson TE20 tractor had a compression ratio of 4.5:1 for operation on tractor vaporising oil with an octane rating between 55 and 70. Motorsport engines Motorsport engines often run on high octane petrol and can therefore use higher compression ratios. For example, motorcycle racing engines can use compression ratios as high as 14.7:1, and it is common to find motorcycles with compression ratios above 12.0:1 designed for 86 or 87 octane fuel. Ethanol and methanol can take significantly higher compression ratios than gasoline. Racing engines burning methanol and ethanol fuel often have a compression ratio of 14:1 to 16:1. Mathematical formula In a piston engine, the static compression ratio (CR) is the ratio between the volume of the cylinder and combustion chamber when the piston is at the bottom of its stroke, and the volume of the combustion chamber when the piston is at the top of its stroke.
It is therefore calculated by the formula CR = (Vd + Vc) / Vc, where: Vd = displacement volume. This is the volume inside the cylinder displaced by the piston from the beginning of the compression stroke to the end of the stroke. Vc = clearance volume. This is the volume of the space in the cylinder left at the end of the compression stroke. Vd can be estimated by the cylinder volume formula Vd = (π / 4) × b² × s, where: b = cylinder bore (diameter), s = piston stroke length. Because of the complex shape of Vc it is usually measured directly. This is often done by filling the cylinder with liquid and then measuring the volume of the used liquid. Variable compression ratio engines Most engines use a fixed compression ratio; however, a variable compression ratio engine is able to adjust the compression ratio while the engine is in operation. The first production engine with a variable compression ratio was introduced in 2019. Variable compression ratio is a technology to adjust the compression ratio of an internal combustion engine while the engine is in operation. This is done to increase fuel efficiency while under varying loads. Variable compression engines allow the volume above the piston at top dead centre to be changed. Higher loads require lower ratios to avoid knock, while lighter loads allow higher ratios, letting the required combustion temperature be reached with less fuel, while giving a longer expansion cycle, creating more mechanical power output and lowering the exhaust temperature. Petrol engines In petrol (gasoline) engines used in passenger cars for the past 20 years, compression ratios have typically been between 8:1 and 12:1. Several production engines have used higher compression ratios, including: Cars built from 1955–1972, which were designed for high-octane leaded gasoline, which allowed compression ratios up to 13:1. Some Mazda SkyActiv engines released since 2012 have compression ratios up to 16:1.
The SkyActiv engine achieves this compression ratio with ordinary unleaded gasoline (95 RON in the United Kingdom) through improved scavenging of exhaust gases (which ensures cylinder temperature is as low as possible before the intake stroke), in addition to direct injection. The Toyota Dynamic Force engine has a compression ratio of up to 14:1. The 2014 Ferrari 458 Speciale also has a compression ratio of 14:1. When forced induction (e.g. a turbocharger or supercharger) is used, the compression ratio is often lower than in naturally aspirated engines. This is due to the turbocharger/supercharger already having compressed the air before it enters the cylinders. Engines using port fuel-injection typically run lower boost pressures and/or compression ratios than direct injected engines because port fuel injection causes the air/fuel mixture to be heated together, leading to detonation.
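The static compression ratio arithmetic used in this section can be sketched in Python; the function name, unit choices and example figures below are illustrative rather than taken from any particular engine:

```python
import math

def compression_ratio(bore_mm: float, stroke_mm: float, clearance_cc: float) -> float:
    """Static compression ratio CR = (Vd + Vc) / Vc for a single cylinder."""
    # Displacement volume Vd = (pi / 4) * bore^2 * stroke, converted from mm^3 to cc
    vd_cc = (math.pi / 4) * bore_mm ** 2 * stroke_mm / 1000.0
    return (vd_cc + clearance_cc) / clearance_cc

# A "square" 100 mm x 100 mm cylinder displaces about 785.4 cc; with a
# 100 cc clearance volume this gives CR = (785.4 + 100) / 100, roughly 8.85:1.
print(round(compression_ratio(100, 100, 100), 2))
```

Note that shrinking the clearance volume raises the ratio sharply, which is why combustion-chamber shape (and hence Vc) is measured directly with liquid rather than estimated.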
Although a possible compromise solution had already been received from England, this does not seem to have ever been considered in depth, probably on account of it containing an oath of homage between Emperor and Pope, which had been a historical sticking point in earlier negotiations. The papal delegation was led by Cardinal bishop Lamberto Scannabecchi of Ostia, the future Pope Honorius II. Both sides studied previous negotiations between them, including those from 1111, which were considered to have created precedent. On 23 September 1122, papal and imperial delegates signed a series of documents outside the walls of Worms; there was insufficient room in the city for the number of attendees and watchers. Adalbert, Archbishop of Mainz, wrote to Calixtus of how complex the negotiations had been, given that, as he said, Henry regarded the powers he was being asked to renounce as being hereditary in the Imperial throne. It is probable that what was eventually promulgated was the result of almost every word being carefully considered; the main difference between what was to be agreed at Worms and previous negotiations was the concessions from the pope. Concordat The agreements reached at Worms were in the nature of both concessions and assurances to the other party. Henry, on oath before God, the apostles and the church, renounced his right to invest bishops and abbots with ring and crosier, and opened ecclesiastical appointments in his realm to canonical elections, regno vel imperio. He also recognised the traditional extent and boundaries of the papal patrimony as a legal entity rather than one malleable to the emperor. Henry promised to return to the church those lands rightfully belonging to it that had been seized by himself or his father; furthermore, he would assist the pope in regaining those that were taken by others, and "he will do the same thing for all other churches and princes, both ecclesiastical and lay".
If the pope requested Imperial assistance, he would receive it, and if the church came to the empire for justice, it would be treated fairly. He also swore to abstain from "all investiture by ring and staff", marking the end of an ancient imperial tradition. Calixtus made similar reciprocal promises regarding the empire in Italy. He agreed to the presence of the emperor or his officials at the elections and granted the emperor the right to adjudicate in the case of disputed outcomes on episcopal advice—as long as they had been held peacefully and without simony—which had officially been the case ever since precedent had been set by the London Accord of 1107. This right to judge was constrained by an assurance that he would support the majority vote among electors, and further that he would take the advice of his other bishops before doing so. The emperor was also allowed to perform a separate ceremony in which he would invest bishops and abbots with their regalia, a sceptre representing the imperial lands associated with their episcopal see. This clause also contained a "cryptic" condition that once the elect had been so endowed, the new bishop "should do what he ought to do according to imperial rights". In the German imperial lands this was to take place prior to the bishop-elect's consecration; elsewhere in the empire—Burgundy and Italy, exempting the Papal States—within six months of the ceremony. The differentiation between the German portion of the Empire and the rest was of particular importance to Calixtus, as the papacy had traditionally felt more threatened by imperial power in the Italian peninsula than in the broader Empire. Finally, the pope granted "true peace" to the emperor and all those who had supported him.
Calixtus had effectively overturned wholesale the strategy he had pursued during the Mouzon negotiation; episcopal investitures in Germany were to take place with very little substantive change in ceremony, while temporal involvement remained, only replacing investiture with homage, although the word itself—hominium—was studiously avoided. Adalbert, from whom Calixtus first received news of the final concordat, emphasized that it still had to be approved in Rome; this suggests, argues Stroll, that the Archbishop—and probably the papal legation as a whole—were against making concessions to the emperor, and probably wanted Calixtus to disown the agreement. Adalbert believed the agreement would make it easier for the Emperor to legalise intimidation of episcopal electors, writing that "through the opportunity of [the emperor's] presence, the Church of God must undergo the same slavery as before, or an even more oppressive one". However, argues Stroll, the concessions Calixtus made were an "excellent bargain" in return for eradicating the danger on the papacy's northern border and therefore allowing him to focus, without threat or distraction, on the Normans to the south. It had achieved its peace, argues Norman Cantor, by allowing local national custom and practice to determine future relations between crown and pope; in most cases, he notes, this "favored the continuance of royal control over the church". The concordat was published as two distinct charters, each laying out the concessions the one party was making to the other. They are known respectively as the Papal (or the Calixtinum) and the Imperial (Henricianum) charters. Calixtus's is addressed to the emperor—in quite personal terms—while Henry's is made out to God. The bishop of Ostia gave the emperor the kiss of peace on behalf of the pope and said Mass. 
By these rites Henry was returned to the church, the negotiators were lauded for succeeding in their delicate mission and the concordat was called "peace at the will of the pope". Neither charter was signed; both contained probably intentional vagaries and unanswered questions—such as the position of the papacy's churches that lay outside both the patrimony and Germany—which were subsequently addressed on a case-by-case basis. Indeed, Robert Benson has suggested that the brevity of the charters was deliberate and that the agreement as a whole is as important for what it omits as for what it includes. The term regalia, for example, was not only undefined but literally meant two different things to each party. In the Henricianum it referred to the feudal duty owed to a monarch; in the Calixtinum, it was the episcopal temporalities. Broader questions, such as the nature of the church and Empire relationship, were also not addressed, although some ambiguity was removed by an 1133 Papal privilege. The Concordat was widely, and deliberately, publicised around Europe. Calixtus was not in Rome when the concordat was delivered. He had left the city by late August and was not to return until mid- to late October, making a progress to Anagni, taking the bishopric of Anagni and Casamari Abbey under his protection. Agreements Preservation The concordat was ratified at the First Council of the Lateran and the original Henricianum charter is preserved at the Vatican Apostolic Archive; the Calixtinum has not survived except in subsequent copies.
A copy of the former is also held in the Codex Udalrici, but this is an abridged version for political circulation, as it reduces the number of imperial concessions made. Indicating the extent to which he saw the agreement as a papal victory, Calixtus had a copy of the Henricianum painted on a Lateran Palace chamber wall; while nominally portraying the concordat as a victory for the papacy, it also ignored the numerous concessions made to the emperor. This was part of what Hartmut Hoffmann has called "a conspiracy of silence" regarding papal concessions. Indeed, while the Pope is pictured enthroned, and Henry only standing, the suggestion is still that they were jointly wielding their respective authority to come to this agreement. An English copy of the Calixtinum made by William of Malmesbury is reasonably accurate but omits the clause mentioning the use of a sceptre in the granting of the regalia. He then, having condemned Henry's "Teuton fury", proceeds to praise him, comparing him favourably to Charlemagne for his devotion to God and the peace of Christendom. Aftermath The first invocation of the concordat was not in the empire, as it turned out, but by Henry I of England the following year. Following a long-running dispute between Canterbury and York which ended up in the Papal court, Joseph Huffman argues that it would have been controversial for the Pope "to justify one set of concessions in Germany and another in England". The concordat ended once and for all the "Imperial church system of the Ottonians and Salians". The First Lateran Council was convoked to confirm the Concordat of Worms. The council was most representative, with nearly 300 bishops and 600 abbots from every part of Catholic Europe being present. It convened on March 18, 1123. One of its primary concerns was to emphasise the independence of diocesan clergy, and to do so it forbade monks to leave their monasteries to provide pastoral care, which would in future be the sole preserve of the diocese.
In ratifying the Concordat, the Council confirmed that in future bishops would be elected by their clergy, although, also per the Concordat, the Emperor could refuse the homage of German bishops. Decrees were passed directed against simony, concubinage among the clergy, church robbers, and forgers of Church documents; the council also reaffirmed indulgences for Crusaders. These, argues C. Colt Anderson, "established important precedents in canon law restricting the influence of the laity and the monks". While this led to a busy period of reform, it was important for those advocating reform not to allow themselves to be confused with the myriad heretical sects and schismatics who were making similar criticisms. The Concordat was the last major achievement for Emperor Henry, as he died in 1125; an attempted invasion of France had come to nothing in 1124 in the face of "determined opposition". Fuhrmann comments that, as Henry had shown in his life "even less interest in new currents of thought and feeling than his father", he probably did not understand the significance of the events he had lived through. The peace only lasted until his death; when Imperial Electors met to choose his successor, reformists took the opportunity to attack the imperial gains of Worms on the grounds that they had been granted to him personally rather than to Emperors generally. However, later emperors, such as Frederick I and Henry VI, continued to wield as much, if less tangible, power as their predecessors in episcopal elections, and to a greater degree than that allowed them by Calixtus's charter. Successive emperors found the Concordat sufficiently favourable that it remained almost unaltered until the empire was dissolved by Napoleon in 1806. Popes, likewise, were able to use the powers codified to them in the Concordat to their advantage in future internal disputes with their Cardinals.
Reception The most detailed contemporary description of the Concordat comes to historians through a brief chronicle known as the 1125 continuation chronicle. This pro-papal document lays the blame for the schism squarely upon Henry—by his recognition of Gregory VIII—and the praise for ending it on Calixtus, through his making only temporary compromises. I. S. Robinson, writing in The New Cambridge Medieval History, suggests that this was a deliberate ploy to leave further negotiations open with a more politically malleable Emperor in future. To others it was not so clear cut; Honorius of Autun, for example, writing later in the century, discussed lay investiture as an aspect of papal-Imperial relations and, even a century later, the Sachsenspiegel still stated that Emperors nominated bishops in Germany. Robinson suggests that, by the end of the 12th century, "it was the imperial, rather than the papal version of the Concordat of Worms that was generally accepted by German churchmen". The contemporary English historian William of Malmesbury praised the Concordat for curtailing what he perceived as the emperor's overreach, or as he put it, "severing the sprouting necks of Teuton fury with the axe of Apostolic power". However, he regarded the final settlement not as a defeat of the Empire at the hands of the church, but rather as a reconciliatory effort by the two powers. Although polemicism had died down in the years preceding the Concordat, the agreement did not end it completely, and factionalism within the church especially continued. Gerhoh of Reichersberg believed that the emperor now had the right to request that German bishops pay homage to him, something that would never have been allowed under Paschal, due to the vague clause instructing the newly-elected to do the things the emperor wished. Gerhoh argued that now that imperial intervention in episcopal elections had been curtailed, Henry would use this clause to extend his influence in the church by means of homage.
Gerhoh was torn between viewing the concordat as the end of a long struggle between pope and empire, or as the beginning of a new one within the church itself. Likewise Adalbert of Mainz—who had criticised the agreement in his report to Calixtus—continued to lobby against it, and continued to bring complaints against Henry, who, for example, he alleged had illegally removed the Bishop of Strassburg, suspected of complicity in the death of Duke Berthold of Zaehringen. The reformist party within the church took a similar view, criticising the Concordat for failing to remove all secular influence on the church. For this reason, a group of followers of Paschal II unsuccessfully attempted to prevent the agreement's ratification at the Lateran Council, crying non placet! when asked to do so: "it was only when it was pointed out that much had to be accepted for the sake of peace that the atmosphere quietened". Calixtus told them that they had "not to approve but tolerate" it. At a council in Bamberg in 1122 Henry gathered those nobles who had not attended the Concordat to seek their approval of the agreement, which they gave. The following month he sent cordial letters to Calixtus agreeing with the pope's position that as brothers in Christ they were bound by God to work together, and that he would soon visit personally to discuss the repatriation of papal land.
This set is context-free, since the union of two context-free languages is always context-free. But there is no way to unambiguously parse strings in the (non-context-free) subset which is the intersection of these two languages. Dyck language The language of all properly matched parentheses is generated by the grammar S → SS | (S) | ε. Properties Context-free parsing The context-free nature of the language makes it simple to parse with a pushdown automaton. Determining an instance of the membership problem, i.e. given a string w, determining whether w ∈ L(G), where L(G) is the language generated by a given grammar G, is also known as recognition. Context-free recognition for Chomsky normal form grammars was shown by Leslie G. Valiant to be reducible to boolean matrix multiplication, thus inheriting its complexity upper bound of O(n^2.3728639). Conversely, Lillian Lee has shown O(n^(3−ε)) boolean matrix multiplication to be reducible to O(n^(3−3ε)) CFG parsing, thus establishing some kind of lower bound for the latter. Practical uses of context-free languages also require producing a derivation tree that exhibits the structure that the grammar associates with the given string. The process of producing this tree is called parsing. Known parsers have a time complexity that is cubic in the size of the string that is parsed. Formally, the set of all context-free languages is identical to the set of languages accepted by pushdown automata (PDA). Parser algorithms for context-free languages include the CYK algorithm and Earley's algorithm. A special subclass of context-free languages are the deterministic context-free languages, which are defined as the set of languages accepted by a deterministic pushdown automaton and can be parsed by an LR(k) parser. See also parsing expression grammar as an alternative approach to grammar and parser. Closure The class of context-free languages is closed under the following operations.
That is, if L and P are context-free languages, the following languages are context-free as well:
the union of L and P
the reversal of L
the concatenation of L and P
the Kleene star of L
the image of L under a homomorphism
the image of L under an inverse homomorphism
the circular shift of L (the language {vu : uv ∈ L})
the prefix closure of L (the set of all prefixes of strings from L)
the quotient L/R of L by a regular language R
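The recognition problem described above can be made concrete with the CYK algorithm, the classic cubic-time dynamic program over grammars in Chomsky normal form. The sketch below is illustrative: the function name and the particular CNF encoding of nonempty balanced parentheses (nonterminals S, X, L, R) are assumptions for the example, not taken from the text.

```python
from itertools import product

def cyk_recognize(word, start, unary, binary):
    """CYK recognition for a grammar in Chomsky normal form.
    unary:  dict terminal -> set of nonterminals   (rules A -> a)
    binary: dict (B, C)   -> set of nonterminals   (rules A -> B C)
    """
    n = len(word)
    if n == 0:
        return False  # the empty string needs a special case in CNF
    # table[l][i]: nonterminals deriving the substring of length l+1 at i
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[0][i] = set(unary.get(ch, ()))
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                left = table[split - 1][i]
                right = table[length - split - 1][i + split]
                for b, c in product(left, right):
                    table[length - 1][i] |= binary.get((b, c), set())
    return start in table[n - 1][0]

# An assumed CNF grammar for nonempty balanced parentheses:
# S -> S S | L R | L X,  X -> S R,  L -> '(',  R -> ')'
unary = {'(': {'L'}, ')': {'R'}}
binary = {('S', 'S'): {'S'}, ('L', 'R'): {'S'},
          ('L', 'X'): {'S'}, ('S', 'R'): {'X'}}
```

With this grammar, `cyk_recognize("(())", "S", unary, binary)` returns `True`, while an unbalanced string such as `"(()"` is rejected; the three nested loops make the cubic time bound mentioned above visible directly.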
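Closure under union, the first item in the list above, has a direct constructive proof: take the rules of both grammars (with disjoint nonterminal names) and add a fresh start symbol that derives either original start. A minimal sketch, with the dict-based grammar encoding and the name `S0` chosen for this example:

```python
def union_grammar(g1, start1, g2, start2, new_start="S0"):
    """Grammar for L(g1) ∪ L(g2): keep all rules of both grammars and
    add a new start symbol deriving either original start symbol.
    Assumes the two grammars use disjoint nonterminal names
    (rename beforehand if they do not)."""
    merged = {**g1, **g2}
    merged[new_start] = [[start1], [start2]]
    return merged

# Illustrative grammars: L = {a^n b^n}, P = {c^m d^m}
g1 = {"A": [["a", "A", "b"], []]}   # [] denotes the empty production
g2 = {"C": [["c", "C", "d"], []]}
g = union_grammar(g1, "A", g2, "C")
# g now has start symbol "S0" with productions S0 -> A | C
```

The same pattern gives concatenation (`S0 -> start1 start2`) and Kleene star (`S0 -> S0 start1 | ε`), which is why all three closure properties are usually proved together.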
A well-known source of caffeine is the coffee bean, the seed of the Coffea plant. People may drink beverages containing caffeine to relieve or prevent drowsiness and to improve cognitive performance. To make these drinks, caffeine is extracted by steeping the plant product in water, a process called infusion. Caffeine-containing drinks, such as coffee, tea, and cola, are consumed globally in high volumes. In 2020, almost 10 million tonnes of coffee beans were consumed globally. Caffeine is the world's most widely consumed psychoactive drug. Unlike most other psychoactive substances, caffeine remains largely unregulated and legal in nearly all parts of the world. Caffeine is also an outlier in that its use is seen as socially acceptable in most cultures and even encouraged in others, particularly in the Western world. Caffeine can have both positive and negative health effects. It can treat and prevent the premature infant breathing disorders bronchopulmonary dysplasia and apnea of prematurity. Caffeine citrate is on the WHO Model List of Essential Medicines. It may confer a modest protective effect against some diseases, including Parkinson's disease. Some people experience sleep disruption or anxiety if they consume caffeine, but others show little disturbance. Evidence of a risk during pregnancy is equivocal; some authorities recommend that pregnant women limit caffeine to the equivalent of two cups of coffee per day or less. Caffeine can produce a mild form of drug dependence, associated with withdrawal symptoms such as sleepiness, headache, and irritability, when an individual stops using caffeine after repeated daily intake. Tolerance to the autonomic effects of increased blood pressure and heart rate, and increased urine output, develops with chronic use (i.e., these symptoms become less pronounced or do not occur following consistent use). Caffeine is classified by the US Food and Drug Administration as generally recognized as safe.
Toxic doses, over 10 grams per day for an adult, are much higher than the typical dose of under 500 milligrams per day. The European Food Safety Authority reported that up to 400 mg of caffeine per day (around 5.7 mg/kg of body mass per day) does not raise safety concerns for non-pregnant adults, while intakes up to 200 mg per day for pregnant and lactating women do not raise safety concerns for the fetus or the breast-fed infant. A cup of coffee contains 80–175 mg of caffeine, depending on what "bean" (seed) is used, how it is roasted (darker roasts have less caffeine), and how it is prepared (e.g., drip, percolation, or espresso). Thus it requires roughly 50–100 ordinary cups of coffee to reach a toxic dose. However, pure powdered caffeine, which is available as a dietary supplement, can be lethal in tablespoon-sized amounts. Use Medical Caffeine is used in:
bronchopulmonary dysplasia in premature infants, for both prevention and treatment; it may improve weight gain during therapy and reduce the incidence of cerebral palsy as well as reduce language and cognitive delay, although subtle long-term side effects are possible
apnea of prematurity, as a primary treatment, but not prevention
orthostatic hypotension treatment
Some people use caffeine-containing beverages such as coffee or tea to try to treat their asthma. Evidence to support this practice, however, is poor. It appears that caffeine in low doses improves airway function in people with asthma, increasing forced expiratory volume (FEV1) by 5% to 18%, with this effect lasting for up to four hours. The addition of caffeine (100–130 mg) to commonly prescribed pain relievers such as paracetamol or ibuprofen modestly improves the proportion of people who achieve pain relief. Consumption of caffeine after abdominal surgery shortens the time to recovery of normal bowel function and shortens length of hospital stay.
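The dose figures above (a toxic threshold around 10 g/day against 80–175 mg per cup) imply the "roughly 50–100 cups" estimate by simple division; a small sketch of that arithmetic, with the function name chosen for this example:

```python
import math

def cups_to_reach(total_mg, mg_per_cup):
    """Whole cups of coffee needed to accumulate a given caffeine total."""
    return math.ceil(total_mg / mg_per_cup)

# Figures quoted in the text: toxicity above ~10 g/day,
# and 80-175 mg of caffeine per cup of coffee.
TOXIC_MG = 10_000
strong = cups_to_reach(TOXIC_MG, 175)   # 58 cups of strong coffee
weak = cups_to_reach(TOXIC_MG, 80)      # 125 cups of weak coffee
```

The 58–125 cup range this yields brackets the "roughly 50–100 ordinary cups" estimate in the text, and makes clear why ordinary brewed coffee is a poor vehicle for overdose compared with powdered caffeine.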
Enhancing performance Cognitive Caffeine is a central nervous system stimulant that may reduce fatigue and drowsiness. At normal doses, caffeine has variable effects on learning and memory, but it generally improves reaction time, wakefulness, concentration, and motor coordination. The amount of caffeine needed to produce these effects varies from person to person, depending on body size and degree of tolerance. The desired effects arise approximately one hour after consumption, and the desired effects of a moderate dose usually subside after about three or four hours. Caffeine can delay or prevent sleep and improves task performance during sleep deprivation. Shift workers who use caffeine make fewer mistakes that could result from drowsiness. Caffeine increases alertness in a dose-dependent manner in both fatigued and normal individuals. A systematic review and meta-analysis from 2014 found that concurrent caffeine and L-theanine use has synergistic psychoactive effects that promote alertness, attention, and task switching; these effects are most pronounced during the first hour post-dose. Physical Caffeine is a proven ergogenic aid in humans. Caffeine improves athletic performance in aerobic (especially endurance sports) and anaerobic conditions. Moderate doses of caffeine (around 5 mg/kg) can improve sprint performance, cycling and running time trial performance, endurance (i.e., it delays the onset of muscle fatigue and central fatigue), and cycling power output. Caffeine increases basal metabolic rate in adults. Caffeine ingestion prior to aerobic exercise increases fat oxidation, particularly in persons with low physical fitness. Caffeine improves muscular strength and power, and may enhance muscular endurance. Caffeine also enhances performance on anaerobic tests. Caffeine consumption before constant load exercise is associated with reduced perceived exertion.
While this effect is not present during exercise-to-exhaustion exercise, performance is significantly enhanced. This is congruent with caffeine reducing perceived exertion, because exercise-to-exhaustion should end at the same point of fatigue. Caffeine also improves power output and reduces time to completion in aerobic time trials, an effect positively (but not exclusively) associated with longer-duration exercise. Specific populations Adults For the general population of healthy adults, Health Canada advises a daily intake of no more than 400 mg. This limit was found to be safe by a 2017 systematic review on caffeine toxicology. Children In healthy children, moderate caffeine intake under 400 mg produces effects that are "modest and typically innocuous". From as early as 6 months of age, infants can metabolize caffeine at the same rate as adults. Higher doses of caffeine (>400 mg) can cause physiological, psychological and behavioral harm, particularly for children with psychiatric or cardiac conditions. There is no evidence that coffee stunts a child's growth. The American Academy of Pediatrics holds that caffeine consumption is not appropriate for children and adolescents and recommends that it be avoided. This recommendation is based on a clinical report released by the American Academy of Pediatrics in 2011 with a review of 45 publications from 1994 to 2011, and includes inputs from various stakeholders (pediatricians, the Committee on Nutrition, the Canadian Pediatric Society, the Centers for Disease Control & Prevention, the Food and Drug Administration, the Sports Medicine & Fitness committee, and the National Federations of High School Associations). For children age 12 and under, Health Canada recommends a maximum daily caffeine intake of no more than 2.5 milligrams per kilogram of body weight. Based on average body weights of children, this translates to age-based daily intake limits. Adolescents Health Canada has not developed advice for adolescents because of insufficient data.
However, they suggest that daily caffeine intake for this age group be no more than 2.5 mg/kg body weight. This is because the maximum adult caffeine dose may not be appropriate for light-weight adolescents or for younger adolescents who are still growing. The daily dose of 2.5 mg/kg body weight would not cause adverse health effects in the majority of adolescent caffeine consumers. This is a conservative suggestion since older and heavier-weight adolescents may be able to consume adult doses of caffeine without suffering adverse effects. Pregnancy and breastfeeding The metabolism of caffeine is reduced in pregnancy, especially in the third trimester, and the half-life of caffeine during pregnancy can be increased up to 15 hours (as compared to 2.5 to 4.5 hours in non-pregnant adults). Current evidence regarding the effects of caffeine on pregnancy and breastfeeding is inconclusive. There is limited primary and secondary advice for, or against, caffeine use during pregnancy and its effects on the fetus or newborn. The UK Food Standards Agency has recommended that pregnant women should limit their caffeine intake, out of prudence, to less than 200 mg of caffeine a day – the equivalent of two cups of instant coffee, or one and a half to two cups of fresh coffee. The American Congress of Obstetricians and Gynecologists (ACOG) concluded in 2010 that caffeine consumption is safe up to 200 mg per day in pregnant women. For women who breastfeed, are pregnant, or may become pregnant, Health Canada recommends a maximum daily caffeine intake of no more than 300 mg, or a little over two 8 oz (237 mL) cups of coffee. A 2017 systematic review on caffeine toxicology found evidence supporting that caffeine consumption up to 300 mg/day for pregnant women is generally not associated with adverse reproductive or developmental effects. There are conflicting reports in the scientific literature about caffeine use during pregnancy.
A 2011 review found that caffeine during pregnancy does not appear to increase the risk of congenital malformations, miscarriage or growth retardation even when consumed in moderate to high amounts. Other reviews, however, concluded that there is some evidence that higher caffeine intake by pregnant women may be associated with a higher risk of giving birth to a low birth weight baby, and may be associated with a higher risk of pregnancy loss. A systematic review, analyzing the results of observational studies, suggests that women who consume large amounts of caffeine (greater than 300 mg/day) prior to becoming pregnant may have a higher risk of experiencing pregnancy loss. Adverse effects Physical Caffeine in coffee and other caffeinated drinks can affect gastrointestinal motility and gastric acid secretion. In postmenopausal women, high caffeine consumption can accelerate bone loss. Acute ingestion of caffeine in large doses (at least 250–300 mg, equivalent to the amount found in 2–3 cups of coffee or 5–8 cups of tea) results in a short-term stimulation of urine output in individuals who have been deprived of caffeine for a period of days or weeks. This increase is due to both a diuresis (increase in water excretion) and a natriuresis (increase in saline excretion); it is mediated via proximal tubular adenosine receptor blockade. The acute increase in urinary output may increase the risk of dehydration. However, chronic users of caffeine develop a tolerance to this effect and experience no increase in urinary output. Psychological Minor undesired symptoms from caffeine ingestion not sufficiently severe to warrant a psychiatric diagnosis are common and include mild anxiety, jitteriness, insomnia, increased sleep latency, and reduced coordination. Caffeine can have negative effects on anxiety disorders. According to a 2011 literature review, caffeine use is positively associated with anxiety and panic disorders. 
At high doses, typically greater than 300 mg, caffeine can both cause and worsen anxiety. For some people, discontinuing caffeine use can significantly reduce anxiety. In moderate doses, caffeine has been associated with reduced symptoms of depression and lower suicide risk. Increased consumption of coffee and caffeine is associated with a decreased risk of depression. Some textbooks state that caffeine is a mild euphoriant, others state that it is not a euphoriant, and one textbook states in one place that caffeine is not a euphoriant but in another place groups it among euphoriants. Caffeine-induced anxiety disorder is a subclass of the DSM-5 diagnosis of substance/medication-induced anxiety disorder. Reinforcement disorders Addiction Whether caffeine can result in an addictive disorder depends on how addiction is defined. Compulsive caffeine consumption under any circumstances has not been observed, and caffeine is therefore not generally considered addictive. However, some diagnostic models, such as the ICDM-9 and ICD-10, include a classification of caffeine addiction under a broader diagnostic model. Some state that certain users can become addicted and therefore unable to decrease use even though they know there are negative health effects. Caffeine does not appear to be a reinforcing stimulus, and some degree of aversion may actually occur, with people preferring placebo over caffeine in a study on drug abuse liability published in an NIDA research monograph. Some state that research does not provide support for an underlying biochemical mechanism for caffeine addiction. Other research states it can affect the reward system. "Caffeine addiction" was added to the ICDM-9 and ICD-10. However, its addition was contested with claims that this diagnostic model of caffeine addiction is not supported by evidence. The American Psychiatric Association's DSM-5 does not include the diagnosis of a caffeine addiction but proposes criteria for the disorder for more study.
Dependence and withdrawal Withdrawal can cause mild to clinically significant distress or impairment in daily functioning. The frequency at which this occurs is self-reported at 11%, but in lab tests only half of the people who report withdrawal actually experience it, casting doubt on many claims of dependence. Mild physical dependence and withdrawal symptoms may occur upon abstinence, with greater than 100 mg caffeine per day, although these symptoms last no longer than a day. Some symptoms associated with psychological dependence may also occur during withdrawal. The diagnostic criteria for caffeine withdrawal require a previous prolonged daily use of caffeine. Following 24 hours of a marked reduction in consumption, a minimum of 3 of these signs or symptoms is required to meet withdrawal criteria: difficulty concentrating, depressed mood/irritability, flu-like symptoms, headache, and fatigue. Additionally, the signs and symptoms must disrupt important areas of functioning and must not be associated with the effects of another condition. The ICD-11 includes caffeine dependence as a distinct diagnostic category, which closely mirrors the DSM-5's proposed set of criteria for "caffeine-use disorder". Caffeine use disorder refers to dependence on caffeine characterized by failure to control caffeine consumption despite negative physiological consequences. The APA, which published the DSM-5, acknowledged that there was sufficient evidence to create a diagnostic model of caffeine dependence for the DSM-5, but noted that the clinical significance of the disorder is unclear. Due to this inconclusive evidence on clinical significance, the DSM-5 classifies caffeine-use disorder as a "condition for further study". Tolerance to the effects of caffeine occurs for caffeine-induced elevations in blood pressure and the subjective feelings of nervousness.
Sensitization, the process whereby effects become more prominent with use, occurs for positive effects such as feelings of alertness and wellbeing. Tolerance varies between daily, regular caffeine users and high caffeine users. High doses of caffeine (750 to 1200 mg/day spread throughout the day) have been shown to produce complete tolerance to some, but not all, of the effects of caffeine. Doses as low as 100 mg/day, such as a 6 oz cup of coffee or two to three 12 oz servings of caffeinated soft drink, may continue to cause sleep disruption, among other intolerances. Non-regular caffeine users have the least caffeine tolerance for sleep disruption. Some coffee drinkers develop tolerance to its undesired sleep-disrupting effects, but others apparently do not. Risk of other diseases A protective effect of caffeine against Alzheimer's disease and dementia is possible but the evidence is inconclusive. It may protect people from liver cirrhosis. Caffeine may lessen the severity of acute mountain sickness if taken a few hours prior to attaining a high altitude. One meta-analysis has found that caffeine consumption is associated with a reduced risk of type 2 diabetes. Regular caffeine consumption reduces the risk of developing Parkinson's disease and slows its rate of progression. Caffeine consumption may be associated with reduced risk of depression, although conflicting results have been reported. Caffeine increases intraocular pressure in those with glaucoma but does not appear to affect normal individuals. The DSM-5 also includes other caffeine-induced disorders consisting of caffeine-induced anxiety disorder, caffeine-induced sleep disorder and unspecified caffeine-related disorders. The first two disorders are classified under "Anxiety Disorder" and "Sleep-Wake Disorder" because they share similar characteristics.
Other disorders that present with significant distress and impairment of daily functioning that warrant clinical attention but do not meet the criteria to be diagnosed under any specific disorders are listed under "Unspecified Caffeine-Related Disorders". Overdose Sustained high daily caffeine consumption is associated with a condition known as caffeinism. Caffeinism usually combines caffeine dependency with a wide range of unpleasant symptoms including nervousness, irritability, restlessness, insomnia, headaches, and palpitations after caffeine use. Caffeine overdose can result in a state of central nervous system over-stimulation known as caffeine intoxication, a clinically significant temporary condition that develops during, or shortly after, the consumption of caffeine. This syndrome typically occurs only after ingestion of large amounts of caffeine, well over the amounts found in typical caffeinated beverages and caffeine tablets (e.g., more than 400–500 mg at a time). According to the DSM-5, caffeine intoxication may be diagnosed if five (or more) of the following symptoms develop after recent consumption of caffeine: restlessness, nervousness, excitement, insomnia, flushed face, diuresis (increased production of urine), gastrointestinal disturbance, muscle twitching, rambling flow of thought and speech, tachycardia (increased heart rate) or cardiac arrhythmia, periods of inexhaustibility, and psychomotor agitation. According to the International Classification of Diseases (ICD-11), cases of very high caffeine intake (e.g. > 5 g) may result in caffeine intoxication with symptoms including mania, depression, lapses in judgement, disorientation, disinhibition, delusions, hallucinations or psychosis, and rhabdomyolysis (breakdown of skeletal muscle tissue). Energy drinks High caffeine consumption in energy drinks (at least 1 liter or 320 mg of caffeine) was associated with short-term cardiovascular side effects including hypertension, prolonged QT interval and heart palpitations.
These cardiovascular side effects were not seen with smaller amounts of caffeine consumption in energy drinks (less than 200 mg). Severe intoxication As of 2007 there is no known antidote or reversal agent for caffeine intoxication; treatment of mild caffeine intoxication is directed toward symptom relief; severe
intoxication may require peritoneal dialysis, hemodialysis, or hemofiltration. Intralipid infusion therapy is indicated in cases of imminent risk of cardiac arrest in order to scavenge the free serum caffeine. Lethal dose Death from caffeine ingestion appears to be rare, and is most commonly caused by an intentional overdose of medications. In 2016, 3702 caffeine-related exposures were reported to Poison Control Centers in the United States, of which 846 required treatment at a medical facility and 16 had a major outcome; several caffeine-related deaths are reported in case studies. The LD50 of caffeine in humans is dependent on individual sensitivity, but is estimated to be 150–200 milligrams per kilogram of body mass (75–100 cups of coffee for an adult). There are cases where doses as low as 57 milligrams per kilogram have been fatal. A number of fatalities have been caused by overdoses of readily available powdered caffeine supplements, for which the estimated lethal amount is less than a tablespoon. The lethal dose is lower in individuals whose ability to metabolize caffeine is impaired due to genetics or chronic liver disease.
A death was reported in a man with liver cirrhosis who overdosed on caffeinated mints. Interactions Caffeine is a substrate for CYP1A2, and interacts with many substances through this and other mechanisms. Alcohol In Digit Symbol Substitution Test (DSST) studies, alcohol reduces performance while caffeine significantly improves it. When alcohol and caffeine are consumed jointly, the effects produced by caffeine are affected, but the alcohol effects remain the same. For example, when additional caffeine is added, the drug effect produced by alcohol is not reduced. However, the jitteriness and alertness given by caffeine are decreased when additional alcohol is consumed. Alcohol consumption alone reduces both inhibitory and activational aspects of behavioral control. Caffeine antagonizes the activational aspect of behavioral control, but has no effect on the inhibitory aspect. The Dietary Guidelines for Americans recommend avoiding concomitant consumption of alcohol and caffeine, as this may lead to increased alcohol consumption, with a higher risk of alcohol-associated injury. Tobacco Smoking tobacco increases caffeine clearance by 56%. Cigarette smoking induces the cytochrome P450 1A2 enzyme that breaks down caffeine, which may lead to increased caffeine tolerance and coffee consumption in regular smokers. Birth control Birth control pills can extend the half-life of caffeine, requiring greater attention to caffeine consumption. Medications Caffeine sometimes increases the effectiveness of some medications, such as those for headaches. Caffeine has been found to increase the potency of some over-the-counter analgesic medications by 40%. The pharmacological effects of adenosine may be blunted in individuals taking large quantities of methylxanthines like caffeine. Other examples of methylxanthines include the medications theophylline and aminophylline, which are prescribed to relieve symptoms of asthma or COPD.
Pharmacology Pharmacodynamics In the absence of caffeine and when a person is awake and alert, little adenosine is present in central nervous system (CNS) neurons. With a continued wakeful state, over time adenosine accumulates in the neuronal synapse, in turn binding to and activating adenosine receptors found on certain CNS neurons; when activated, these receptors produce a cellular response that ultimately increases drowsiness. When caffeine is consumed, it antagonizes adenosine receptors; in other words, caffeine prevents adenosine from activating the receptor by blocking the location on the receptor where adenosine binds to it. As a result, caffeine temporarily prevents or relieves drowsiness, and thus maintains or restores alertness. Receptor and ion channel targets Caffeine is an antagonist of adenosine A2A receptors, and knockout mouse studies have specifically implicated antagonism of the A2A receptor as responsible for the wakefulness-promoting effects of caffeine. Antagonism of A2A receptors in the ventrolateral preoptic area (VLPO) reduces inhibitory GABA neurotransmission to the tuberomammillary nucleus, a histaminergic projection nucleus that activation-dependently promotes arousal. This disinhibition of the tuberomammillary nucleus is the downstream mechanism by which caffeine produces wakefulness-promoting effects. Caffeine is an antagonist of all four adenosine receptor subtypes (A1, A2A, A2B, and A3), although with varying potencies. The affinity (KD) values of caffeine for the human adenosine receptors are 12 μM at A1, 2.4 μM at A2A, 13 μM at A2B, and 80 μM at A3. Antagonism of adenosine receptors by caffeine also stimulates the medullary vagal, vasomotor, and respiratory centers, which increases respiratory rate, reduces heart rate, and constricts blood vessels.
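The KD values quoted above can be turned into rough occupancy estimates with the standard single-site binding equation, occupancy = [L] / ([L] + KD). The sketch below uses only the KD figures from the text; the 10 μM caffeine concentration is an assumed, illustrative value, not a figure from the source.

```python
# KD values for caffeine at the human adenosine receptors,
# as quoted in the text (micromolar).
KD_UM = {"A1": 12.0, "A2A": 2.4, "A2B": 13.0, "A3": 80.0}

def fractional_occupancy(conc_um, kd_um):
    """Single-site binding: fraction of receptors occupied = [L] / ([L] + KD)."""
    return conc_um / (conc_um + kd_um)

# At an assumed caffeine concentration of 10 uM, the high-affinity
# A2A subtype is the most heavily occupied of the four:
occupancy = {r: fractional_occupancy(10.0, kd) for r, kd in KD_UM.items()}
```

Under this assumption the ordering of occupancies mirrors the ordering of affinities (A2A > A1 ≈ A2B > A3), which is consistent with A2A antagonism dominating caffeine's wakefulness-promoting effects.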
Adenosine receptor antagonism also promotes neurotransmitter release (e.g., monoamines and acetylcholine), which endows caffeine with its stimulant effects; adenosine acts as an inhibitory neurotransmitter that suppresses activity in the central nervous system. Heart palpitations are caused by blockade of the A1 receptor. Because caffeine is both water- and lipid-soluble, it readily crosses the blood–brain barrier that separates the bloodstream from the interior of the brain. Once in the brain, the principal mode of action is as a nonselective antagonist of adenosine receptors (in other words, an agent that reduces the effects of adenosine). The caffeine molecule is structurally similar to adenosine, and is capable of binding to adenosine receptors on the surface of cells without activating them, thereby acting as a competitive antagonist. In addition to its activity at adenosine receptors, caffeine is an inositol trisphosphate receptor 1 antagonist and a voltage-independent activator of the ryanodine receptors (RYR1, RYR2, and RYR3). It is also a competitive antagonist of the ionotropic glycine receptor. Effects on striatal dopamine While caffeine does not directly bind to any dopamine receptors, it influences the binding activity of dopamine at its receptors in the striatum by binding to adenosine receptors that have formed GPCR heteromers with dopamine receptors, specifically the A1–D1 receptor heterodimer (this is a receptor complex with 1 adenosine A1 receptor and 1 dopamine D1 receptor) and the A2A–D2 receptor heterotetramer (this is a receptor complex with 2 adenosine A2A receptors and 2 dopamine D2 receptors). The A2A–D2 receptor heterotetramer has been identified as a primary pharmacological target of caffeine, primarily because it mediates some of its psychostimulant effects and its pharmacodynamic interactions with dopaminergic psychostimulants. 
Caffeine also causes the release of dopamine in the dorsal striatum and nucleus accumbens core (a substructure within the ventral striatum), but not the nucleus accumbens shell, by antagonizing A1 receptors in the axon terminal of dopamine neurons and A1–A2A heterodimers (a receptor complex composed of 1 adenosine A1 receptor and 1 adenosine A2A receptor) in the axon terminal of glutamate neurons. During chronic caffeine use, caffeine-induced dopamine release within the nucleus accumbens core is markedly reduced due to drug tolerance. Enzyme targets Caffeine, like other xanthines, also acts as a phosphodiesterase inhibitor. As a competitive nonselective phosphodiesterase inhibitor, caffeine raises intracellular cAMP, activates protein kinase A, inhibits TNF-alpha and leukotriene synthesis, and reduces inflammation and innate immunity. Caffeine also affects the cholinergic system where it is a moderate inhibitor of the enzyme acetylcholinesterase. Pharmacokinetics Caffeine from coffee or other beverages is absorbed by the small intestine within 45 minutes of ingestion and distributed throughout all bodily tissues. Peak blood concentration is reached within 1–2 hours. It is eliminated by first-order kinetics. Caffeine can also be absorbed rectally, evidenced by suppositories of ergotamine tartrate and caffeine (for the relief of migraine) and of chlorobutanol and caffeine (for the treatment of hyperemesis). However, rectal absorption is less efficient than oral: the maximum concentration (Cmax) and total amount absorbed (AUC) are both about 30% (i.e., 1/3.5) of the oral amounts. Caffeine's biological half-life – the time required for the body to eliminate one-half of a dose – varies widely among individuals according to factors such as pregnancy, other drugs, liver enzyme function level (needed for caffeine metabolism) and age. In healthy adults, caffeine's half-life is between 3 and 7 hours. 
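Because elimination is first-order, a constant fraction of the remaining drug is cleared per unit time, so the remaining dose falls exponentially in units of the half-life. A minimal sketch, using an illustrative 5-hour half-life from the healthy-adult 3–7 hour range (illustrative arithmetic only, not clinical guidance):

```python
def remaining_fraction(t_hours, half_life_hours):
    """First-order (exponential) elimination: fraction of a dose remaining after t hours."""
    return 0.5 ** (t_hours / half_life_hours)

# With a 5-hour half-life:
print(remaining_fraction(5, 5))            # 0.5  (half remains after one half-life)
print(remaining_fraction(10, 5))           # 0.25 (a quarter after two half-lives)
print(round(remaining_fraction(24, 5), 3)) # ~0.036 remains after a day
```

The same relation explains why factors that slow caffeine metabolism stretch its presence in the body: half-life scales inversely with clearance, so a large drop in clearance produces a proportionally longer half-life.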
The half-life is decreased by 30–50% in adult male smokers, approximately doubled in women taking oral contraceptives, and prolonged in the last trimester of pregnancy. In newborns the half-life can be 80 hours or more, dropping very rapidly with age, possibly to less than the adult value by age 6 months. The antidepressant fluvoxamine (Luvox) reduces the clearance of caffeine by more than 90%, and increases its elimination half-life more than tenfold, from 4.9 hours to 56 hours. Caffeine is metabolized in the liver by the cytochrome P450 oxidase enzyme system, in particular by the CYP1A2 isozyme, into three dimethylxanthines, each of which has its own effects on the body: Paraxanthine (84%): Increases lipolysis, leading to elevated glycerol and free fatty acid levels in blood plasma. Theobromine (12%): Dilates blood vessels and increases urine volume. Theobromine is also the principal alkaloid in the cocoa bean (chocolate). Theophylline (4%): Relaxes smooth muscles of the bronchi, and is used to treat asthma. The therapeutic dose of theophylline, however, is many times greater than the levels attained from caffeine metabolism. 1,3,7-Trimethyluric acid is a minor caffeine metabolite. Each of these metabolites is further metabolized and then excreted in the urine. Caffeine can accumulate in individuals with severe liver disease, increasing its half-life. A 2011 review found that increased caffeine intake was associated with a variation in two genes that increase the rate of caffeine catabolism. Subjects who had this variation on both chromosomes consumed 40 mg more caffeine per day than others. This is presumably because a higher intake is needed to achieve a comparable effect, not because the gene predisposes carriers to greater habituation. Chemistry Pure anhydrous caffeine is a bitter-tasting, white, odorless powder with a melting point of 235–238 °C. 
Caffeine is moderately soluble in water at room temperature (2 g/100 mL), but very soluble in boiling water (66 g/100 mL). It is also moderately soluble in ethanol (1.5 g/100 mL). It is weakly basic (pKa of conjugate acid = ~0.6), requiring a strong acid to protonate it. Caffeine does not contain any stereogenic centers and hence is classified as an achiral molecule. The xanthine core of caffeine contains two fused rings, a pyrimidinedione and an imidazole. The pyrimidinedione in turn contains two amide functional groups that exist predominantly in a zwitterionic resonance form in which the nitrogen atoms are double bonded to their adjacent amide carbon atoms. Hence all six of the atoms within the pyrimidinedione ring system are sp2 hybridized and planar. The fused 5,6 ring core of caffeine therefore contains a total of ten pi electrons and hence, according to Hückel's rule, is aromatic. Synthesis The biosynthesis of caffeine is an example of convergent evolution among different species. Caffeine may be synthesized in the lab starting with dimethylurea and malonic acid. Commercial supplies of caffeine are not usually manufactured synthetically because the chemical is readily available as a byproduct of decaffeination. Decaffeination Extraction of caffeine from coffee, to produce caffeine and decaffeinated coffee, can be performed using a number of solvents. The main methods follow: Water extraction: Coffee beans are soaked in water. The water, which contains many other compounds in addition to caffeine and contributes to the flavor of coffee, is then passed through activated charcoal, which removes the caffeine. The water can then be put back with the beans and evaporated dry, leaving decaffeinated coffee with its original flavor. Coffee manufacturers recover the caffeine and resell it for use in soft drinks and over-the-counter caffeine tablets. 
Supercritical carbon dioxide extraction: Supercritical carbon dioxide is an excellent nonpolar solvent for caffeine, and is safer than the organic solvents that are otherwise used. The extraction process is simple: CO2 is forced through the green coffee beans at temperatures above 31.1 °C and pressures above 73 atm. Under these conditions, CO2 is in a "supercritical" state: it has gaslike properties that allow it to penetrate deep into the beans but also liquid-like properties that dissolve 97–99% of the caffeine. The caffeine-laden CO2 is then sprayed with high-pressure water to remove the caffeine. The caffeine can then be isolated by charcoal adsorption (as above) or by distillation, recrystallization, or reverse osmosis. Extraction by organic solvents: Certain 
Greiner while at Stanford University), but within a few years of the launch of the Cyc project it became clear that even representing a typical news story or novel or advertisement would require more than the expressive power of full first-order logic, namely second-order predicate calculus ("What is the relationship between rain and water?") and then even higher-level orders of logic including modal logic, reflection (enabling the system to reason about its progress so far, on a problem on which it's working), and context logic (enabling the system to reason explicitly about the contexts in which its various premises and conclusions might hold), non-monotonic logic, and circumscription. By 1989, CycL had expanded in expressive power to higher-order logic (HOL). Triplestore representations (which are akin to the frame-and-slot representation languages of the 1970s from which RLL sprang) are widespread today in AI. It may be useful to cite a few examples that stress or break that type of representation, typical of the examples that forced the Cyc project to move from a triplestore representation to a much more expressive one during the period 1984–1989: English sentences including negations ("Fred does not own a dog"), nested quantifiers ("Every American has a mother" means for-all x there-exists y... but "Every American has a President" means there-exists y such that for-all x...), nested modals such as "The United States believes that Germany wants NATO to avoid pursuing...", and it is even awkward to represent, in a triplestore, relationships of arity higher than 2, such as "Los Angeles is between San Diego and San Francisco along US101." Cyc's ontology grew to about 100,000 terms during the first decade of the project, to 1994, and as of 2017 contained about 1,500,000 terms. 
This ontology included: 416,000 collections (types, sorts, natural kinds, including both types of things such as Fish and types of actions such as Fishing); a little over a million individuals; 42,500 predicates (relations, attributes, fields, properties, functions); and about a million generally well known entities such as TheUnitedStatesOfAmerica, BarackObama, TheSigningOfTheUSDeclarationOfIndependence, etc. An arbitrarily large number of additional terms are also implicitly present in the Cyc ontology, in the sense that there are term-denoting functions such as CalendarYearFn (when given the argument 2016, it denotes the calendar year 2016), GovernmentFn (when given the argument France it denotes the government of France), Meter (when given the argument 2016, it denotes a distance of 2.016 kilometers), and nestings and compositions of such function-denoting terms. The Cyc knowledge base of general common-sense rules and assertions involving those ontological terms was largely created by hand axiom-writing; it grew to about 1 million in 1994, and as of 2017 is about 24.5 million and has taken well over 1,000 person-years of effort to construct. It is important to understand that the Cyc ontological engineers strive to keep those numbers as small as possible, not inflate them, so long as the deductive closure of the knowledge base isn't reduced. Suppose Cyc is told about one billion individual people, animals, etc. Then it could be told 10¹⁸ facts of the form "Mickey Mouse is not the same individual as <Bullwinkle the Moose/Abraham Lincoln/Jennifer Lopez>". But instead of that, one could tell Cyc 10,000 Linnaean taxonomy rules followed by just 10⁸ rules of the form "No mouse is a moose". And even more compactly, Cyc could instead just be given those 10,000 Linnaean taxonomy rules followed by just one rule of the form "For any two Linnaean taxons, if neither is explicitly known to be a supertaxon of the other, then they are disjoint". 
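The compactness this buys can be illustrated with a toy model (an illustrative sketch only, not Cyc's actual machinery; the taxon links and individual names are hypothetical): a handful of explicit supertaxon links plus the single catch-all disjointness rule entail pairwise distinctness of individuals without ever enumerating the pairs.

```python
# Hypothetical supertaxon links standing in for the 10,000 Linnaean rules.
SUPERTAXON = {"Mouse": "Rodent", "Moose": "Deer"}

def known_supertaxon(t1, t2):
    """True if t2 is reachable from t1 via explicit supertaxon links."""
    while t1 in SUPERTAXON:
        t1 = SUPERTAXON[t1]
        if t1 == t2:
            return True
    return False

def disjoint(t1, t2):
    # The single catch-all rule from the text: taxons with no explicit
    # supertaxon relationship in either direction are disjoint.
    return t1 != t2 and not known_supertaxon(t1, t2) and not known_supertaxon(t2, t1)

TAXON = {"MickeyMouse": "Mouse", "Bullwinkle": "Moose"}

def distinct(a, b):
    """Individuals of disjoint taxons must be distinct individuals."""
    return disjoint(TAXON[a], TAXON[b])

print(distinct("MickeyMouse", "Bullwinkle"))  # True, with no pairwise facts stored
```

No "Mickey Mouse is not Bullwinkle" fact is ever asserted; distinctness falls out of the taxonomy plus one rule.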
Those 10,001 assertions have the same deductive closure as the earlier-mentioned 10¹⁸ facts. The Cyc inference engine design separates the epistemological problem (what content should be in the Cyc KB) from the heuristic problem (how Cyc could efficiently infer arguments hundreds of steps deep, in a sea of tens of millions of axioms). To do the former, the CycL language and well-understood logical inference might suffice. For the latter, Cyc used a community-of-agents architecture, where specialized reasoning modules, each with its own data structure and algorithm, "raised their hand" if they could efficiently make progress on any of the currently open sub-problems. By 1994 there were 20 such heuristic level (HL) modules; as of 2017 there are over 1,050 HL modules. Some of these HL modules are very general, such as a module that caches the Kleene star (transitive closure) of all the commonly-used transitive relations in Cyc's ontology. Some are domain-specific, such as a chemical equation-balancer. These can be and often are an "escape" to (pointer to) some externally available program or webservice or online database, such as a module to quickly "compute" the current population of a city by knowing where/how to look that up. CycL has a publicly released specification and dozens of HL modules were described in Lenat and Guha's textbook, but the actual Cyc inference engine code, and the full list of 1000+ HL modules, is Cycorp-proprietary. The name "Cyc" (from "encyclopedia", pronounced , like "syke") is a registered trademark owned by Cycorp. Access to Cyc is through paid licenses, but bona fide AI research groups are given research-only no-cost licenses (cf. ResearchCyc); as of 2017, over 600 such groups worldwide have these licenses. Typical pieces of knowledge represented in the Cyc knowledge base are "Every tree is a plant" and "Plants die eventually". 
When asked whether trees die, the inference engine can draw the obvious conclusion and answer the question correctly. Most of Cyc's knowledge, outside math, is only true by default. For example, Cyc knows that as a default parents love their children, when you're made happy you smile, taking your first step is a big accomplishment, when someone you love has a big accomplishment that makes you happy, and only adults have children. When asked whether a picture captioned "Someone watching his daughter take her first step" contains a smiling adult person, Cyc can logically infer that the answer is Yes, and "show its work" by presenting the step-by-step logical argument using those five pieces of knowledge from its knowledge base. These are formulated in the language CycL, which is based on predicate calculus and has a syntax similar to that of the Lisp programming language. In 2008, Cyc resources were mapped to many Wikipedia articles. Cyc is presently connected to Wikidata. Future plans may connect Cyc to both DBpedia and Freebase. Much of the current work on Cyc continues to be knowledge engineering, representing facts about the world by hand, and implementing efficient inference mechanisms on that knowledge. Increasingly, however, work at Cycorp involves giving the Cyc system the ability to communicate with end users in natural language, and to assist with the ongoing knowledge formation process via machine learning and natural language understanding. Another large effort at Cycorp is building a suite of Cyc-powered ontological engineering tools to lower the bar to entry for individuals to contribute to, edit, browse, and query Cyc. Like many companies, Cycorp has ambitions to use Cyc's natural language processing to parse the entire internet to extract structured data; unlike all others, it is able to call on the Cyc system itself to act as an inductive bias and as an adjudicator of ambiguity, metaphor, and ellipsis. 
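The five default rules behind the smiling-adult answer can be mimicked with naive forward chaining (an illustrative sketch only; Cyc's actual inference engine and default logic are far richer, and the fact and rule names here are invented):

```python
# Starting facts read off the caption "Someone watching his daughter take her first step".
facts = {"parent_present", "child_takes_first_step"}

# Each rule: (set of premises, conclusion), paraphrasing the five defaults in the text.
rules = [
    ({"child_takes_first_step"}, "child_has_big_accomplishment"),          # first step is a big accomplishment
    ({"parent_present"}, "parent_loves_child"),                            # parents love their children (default)
    ({"parent_loves_child", "child_has_big_accomplishment"}, "parent_is_happy"),
    ({"parent_is_happy"}, "parent_smiles"),                                # when made happy, you smile
    ({"parent_present"}, "parent_is_adult"),                               # only adults have children
]

# Naive forward chaining to a fixed point: fire any rule whose premises all hold.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("parent_smiles" in facts and "parent_is_adult" in facts)  # True
```

The derivation trace, i.e. which rules fired in which order, is the toy analogue of Cyc "showing its work".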
There are few, if any, systematic benchmark studies of Cyc's performance. Knowledge base The concept names in Cyc are CycL terms or constants. Constants start with an optional "#$" and are case-sensitive. There are constants | Cyc; listed here are a few mutually dissimilar instances: Pharmaceutical Term Thesaurus Manager/Integrator For over a decade, Glaxo has used Cyc to semi-automatically integrate all the large (hundreds of thousands of terms) thesauri of pharmaceutical-industry terms that reflect differing usage across companies, countries, years, and sub-industries. This ontology integration task requires domain knowledge, shallow semantic knowledge, but also arbitrarily deep common sense knowledge and reasoning. Pharma vocabulary varies across countries, (sub-) industries, companies, departments, and decades of time. E.g., what’s a gel pak? What’s the “street name” for ranitidine hydrochloride? Each of these n controlled vocabularies is an ontology with approximately 300k terms. Glaxo researchers need to issue a query in their current vocabulary, have it translated into a neutral “true meaning”, and then have that transformed in the opposite direction to find potential matches against documents each of which was written to comply with a particular known vocabulary. They had been using a large staff to do that manually. Cyc is used as the universal interlingua capable of representing the union of all the terms’ “true meanings”, and capable of representing the 300k transformations between each of those controlled vocabularies and Cyc, thereby converting an n² problem into a linear one without introducing the usual sort of “telephone game” attenuation of meaning. Furthermore, creating each of those 300k mappings for each thesaurus is done in a largely automated fashion, by Cyc. 
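The n²-to-linear reduction described above is hub-and-spoke translation: each vocabulary maps only to and from the interlingua, never directly to another vocabulary. A minimal sketch (the vocabulary names, terms, and hub concept are invented; Cyc's actual mappings are far richer than a dictionary lookup):

```python
# Each of n controlled vocabularies gets ONE mapping into the hub concept,
# so n vocabularies need n maps instead of n^2 pairwise translators.
TO_HUB = {
    "vocabA": {"gel pak": "GelPack-Concept"},
    "vocabB": {"gel pack": "GelPack-Concept"},
    "vocabC": {"unit-dose gel": "GelPack-Concept"},
}
# Invert each vocabulary's map to translate back out of the hub.
FROM_HUB = {voc: {hub: term for term, hub in m.items()} for voc, m in TO_HUB.items()}

def translate(term, src, dst):
    """Route a query term through the hub 'true meaning' into the target vocabulary."""
    return FROM_HUB[dst][TO_HUB[src][term]]

print(translate("gel pak", "vocabA", "vocabB"))  # gel pack
```

Adding a fourth vocabulary requires one new mapping, not three new pairwise translators, and routing through a single neutral meaning avoids the "telephone game" drift of chained pairwise translations.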
Terrorism Knowledge Base The comprehensive Terrorism Knowledge Base was an application of Cyc in development that tried to ultimately contain all relevant knowledge about "terrorist" groups, their members, leaders, ideology, founders, sponsors, affiliations, facilities, locations, finances, capabilities, intentions, behaviors, tactics, and full descriptions of specific terrorist events. The knowledge is stored as statements in mathematical logic, suitable for computer understanding and reasoning. Cleveland Clinic Foundation The Cleveland Clinic has used Cyc to develop a natural language query interface to biomedical information spanning decades of data on cardiothoracic surgeries. A query is parsed into a set of CycL (higher-order logic) fragments with open variables (e.g., "this question is talking about a person who developed an endocarditis infection", "this question is talking about a subset of Cleveland Clinic patients who underwent surgery there in 2009", etc.); then various constraints are applied (medical domain knowledge, common sense, discourse pragmatics, syntax) to see how those fragments could possibly fit together into one semantically meaningful formal query; significantly, in most cases, there is exactly one such way of incorporating and integrating those fragments. Integrating the fragments involves (i) deciding which open variables in which fragments actually represent the same variable, and (ii) deciding, for each of the final variables, the order and scope of its quantification, and its type (universal or existential). That logical (CycL) query is then converted into a SPARQL query that is passed to the CCF SemanticDB, which serves as its data lake. MathCraft One Cyc application aims to help students doing math at a 6th grade level, helping them much more deeply understand that subject matter. 
It is based on the experience that we often think we understand something, but only really understand it after we have had to explain or teach it to someone else. Unlike almost all other educational software, where the computer plays the role of the teacher, this application of Cyc, called MathCraft, has Cyc play the role of a fellow student who is always slightly more confused than you, the user, are about the subject. The user's role is to observe the Cyc avatar and give it advice, correct its errors, mentor it, get it to see what it's doing wrong, etc. As the user gives good advice, Cyc allows the avatar to make fewer mistakes of that type; hence, from the user's point of view, it seems as though the user has just successfully taught it something. This is a variation of learning by teaching. Criticisms The Cyc project has been described as "one of the most controversial endeavors of the artificial intelligence history". Catherine Havasi, CEO of Luminoso, says that Cyc is the predecessor project to IBM's Watson. Machine-learning scientist Pedro Domingos refers to the project as a "catastrophic failure" for several reasons, including the unending amount of data required to produce any viable results and the inability of Cyc to evolve on its own. Robin Hanson, a professor of economics at George Mason University, gives a more balanced analysis: A similar sentiment was expressed by Marvin Minsky: "Unfortunately, the strategies most popular among AI researchers in the 1980s have come to a dead end," said Minsky. So-called "expert systems," which emulated human expertise within tightly defined subject areas like law and medicine, could match users' queries to relevant diagnoses, papers and abstracts, yet they could not learn concepts that most children know by the time they are 3 years old. 
"For each different kind of problem," said Minsky, "the construction of expert systems had to start all over again, because they didn't accumulate common-sense knowledge." Only one researcher has committed himself to the colossal task of building a comprehensive common-sense reasoning system, according to Minsky. Douglas Lenat, through his Cyc project, has directed the line-by-line entry of more than 1 million rules into a commonsense knowledge base." Gary Marcus, a professor of psychology and neural science at New York University and the cofounder of an AI company called Geometric Intelligence, says "it represents an approach that is very different from all the deep-learning stuff that has been in the news." This is consistent with Doug Lenat's position that "Sometimes the veneer of intelligence is not enough". Stephen Wolfram writes: Marcus writes: Every few years since Wired began publishing in 1993, it has run a new article about Cyc, some positive and some negative (including one issue which contained one of each). Notable employees This is a list of some of the notable people who work or have worked on Cyc either while it was a project at MCC (where Cyc was first started) or at Cycorp. Douglas Lenat Michael Witbrock Pat Hayes Ramanathan V. Guha Stuart J. Russell Srinija Srinivasan Jared Friedman John McCarthy See also BabelNet Categorical logic Chinese room DARPA Agent Markup Language DBpedia Fifth generation computer Freebase Large Scale Concept Ontology for Multimedia List of notable artificial intelligence projects Mindpixel Never-Ending Language Learning Open Mind Common Sense Semantic Web Suggested Upper Merged Ontology SHRDLU True Knowledge UMBEL Wolfram Alpha YAGO References Further reading Alan Belasco et al. (2004). "Representing Knowledge Gaps Effectively". In: D. Karagiannis, U. Reimer (Eds.): Practical Aspects of Knowledge Management, Proceedings of PAKM 2004, Vienna, Austria, December 2–3, 2004. Springer-Verlag, Berlin/Heidelberg. 
Elisa Bertino, Barbara Catania & Gian Piero Zarri (2001). Intelligent Database Systems. Addison-Wesley Professional. John Cabral et al. (2005). "Converting Semantic Meta-Knowledge into Inductive Bias". In: Proceedings of the 15th International Conference on Inductive Logic Programming. Bonn, Germany, August 2005. Jon Curtis et al. (2005). "On the Effective Use of Cyc in a Question Answering System". In: Papers from the IJCAI Workshop on Knowledge and Reasoning for Answering Questions. Edinburgh, Scotland: 2005. Chris Deaton et al. (2005). "The Comprehensive Terrorism Knowledge Base in Cyc". In: Proceedings of the 2005 International Conference on Intelligence Analysis, McLean, Virginia, May 2005. Kenneth Forbus et al. (2005). "Combining analogy, intelligent information retrieval, and knowledge integration for analysis: A preliminary report". In: Proceedings of the 2005 International Conference on Intelligence Analysis, McLean, Virginia, May 2005. Douglas Foxvog (2010). "Cyc". In: Theory and Applications of Ontology: Computer Applications. Springer. Fritz Lehmann and D. Foxvog (1998). "Putting Flesh on the Bones: Issues that Arise in Creating Anatomical Knowledge Bases with Rich Relational Structures". In: Knowledge Sharing across Biological and Medical Knowledge Based Systems, AAAI. Douglas Lenat and R. V. Guha (1990). Building Large Knowledge-Based Systems: Representation and Inference in the Cyc Project. Addison-Wesley. James Masters (2002). "Structured Knowledge Source Integration and its applications to information fusion". In: Proceedings of the Fifth International Conference on Information Fusion. Annapolis, MD, July 2002. James Masters and Z. Güngördü (2003). "Structured Knowledge Source Integration: A Progress Report". In: Integration of Knowledge Intensive Multiagent Systems. Cambridge, Massachusetts, USA, 2003. Cynthia Matuszek et al. (2006). "An Introduction to the Syntax and Content of Cyc". In: Proc. of the 2006 AAAI Spring Symposium on Formalizing and Compiling Background Knowledge and Its Applications to Knowledge Representation and Question Answering. Stanford, 2006. Cynthia Matuszek et al. (2005). "Searching for Common Sense: Populating Cyc from the Web". In: Proceedings of the Twentieth National Conference on Artificial Intelligence. Pittsburgh, Pennsylvania, July 2005. Tom O'Hara et al. (2003). "Inducing criteria for mass noun lexical mappings using the Cyc Knowledge Base and its Extension to WordNet". In: Proceedings of the Fifth International Workshop on Computational Semantics. Tilburg, 2003. Fabrizio Morbini and Lenhart Schubert (2009). "Evaluation of EPILOG: a Reasoner for Episodic Logic". University of Rochester, Commonsense '09 Conference (describes Cyc's 
Navy), a Seabee occupational rating in the U.S. Navy Languages Canadian English Chechen language (ISO 639-1 language code: ce) Organizations Church of England, the state church of the U.K. and mother church of the Anglican Communion, also referred to as the C of E Command element (United States Marine Corps), headquarters component of U.S. Marine Corps Marine Air-Ground Task Force (MAGTF) European Community, (, , , , ) Places Cé (Pictish territory), an early medieval Pictish territory in modern-day Scotland Province of Caserta (ISO 3166-2:IT code CE), a province of Italy County Clare, Ireland (vehicle registration plate code CE) Ceará (ISO 3166-2:BR code CE), a state in Brazil Lough Key, known in Irish as Loch Cé Sri Lanka (FIPS Pub 10-4 and obsolete NATO country code CE) Science and technology Computing Central European, an alternate name for Windows-1250 Cheat Engine, a system debugger and cheating tool Clear Entry, a button on a standard electronic calculator that clears the last number entered c.e., a common abbreviation for Computably enumerable, a property of some sets in computability | College English, an official publication of the American National Council of Teachers of English Common Entrance Examination, tests used by independent schools in the UK Conductive education, an educational system developed for people with motor disorders Continuing education, a broad spectrum of post-secondary learning activities and programs Hong Kong Certificate of Education Examination, a standardized examination from 1974 to 2011 Entertainment cê, a 2006 music album by Caetano Veloso Chaotic Evil, an alignment in the tabletop game Dungeons and Dragons Collector's edition, describing some special editions of software, movies, and books Cash Explosion, Ohio Lottery's scratch off game and weekly game show Halo: Combat Evolved, sometimes abbreviated as Halo: CE Job titles Chief Executive, administrative head of some regions County executive, the head of the executive branch 
of county government, common in the U.S. Construction Electrician (US Navy), a |
time. Four years later, Valderrama led his nation to qualify for the 1998 World Cup in France, scoring three goals during the qualifying stages. His impact in the final tournament at the advancing age of 37, however, was less decisive, and, despite defeating Tunisia, Colombia once again suffered a first round exit, following a 2–0 defeat against England, which was Valderrama's final international appearance. Playing style Although Valderrama is often defined as a 'classic number 10 playmaker', due to his creativity and offensive contribution, in reality he was not a classic playmaker in the traditional sense. Although he often wore the number 10 shirt throughout his career and was deployed as an attacking midfielder at times, he played mostly in deeper positions in the centre of the pitch – often operating in a free role as a deep-lying playmaker, rather than in more advanced midfield positions behind the forwards – in order to have a greater influence on the game. A team-player, Valderrama was also known to be an extremely selfless midfielder, who preferred assisting his teammates over going for goal himself; his tactical intelligence, positioning, reading of the game, efficient movement, and versatile range of passing enabled him to find space for himself to distribute and receive the ball, which allowed him both to set the tempo of his team in midfield with short, first-time exchanges, and to create chances with long lobbed passes or through balls. Valderrama's most instantly recognisable physical features were his big afro-blonde hairstyle, jewelry, and moustache, but he was best known for his grace and elegance on the ball, as well as his agility and quick feet as a footballer. 
His control, dribbling ability and footwork were similar to those of smaller players, which for a player of Valderrama's size and physical build was fairly uncommon. Throughout his career he frequently stood out for his ability to use his strength, balance, composure, and flamboyant technique to shield the ball from opponents when put under pressure, and to retain possession in difficult situations, often with elaborate skills, which made him an extremely popular figure with the fans. Valderrama's mix of physical strength, two-footed ability, unpredictability and flair enabled him to produce key and incisive performances against top-tier teams, while his world-class vision and exceptional passing and crossing ability with his right foot made him one of the best assist providers of his time; his height, physique and elevation also made him effective in the air, and he was an accurate free kick taker and striker of the ball, despite not being a particularly prolific goalscorer. Despite his natural talent and ability as a footballer, Valderrama earned a reputation for having a "languid" playing style, lacking notable pace, being unfit, and having a poor defensive work-rate, in particular after succumbing to the physical effects of ageing in his later career in the MLS. In his first season in France, he also initially struggled to adapt to the faster-paced, more physical and tactically rigorous European brand of football, which saw him play in an unfamiliar position and gave him less space and time on the ball to dictate attacking passing moves; he was criticised at times for his lack of match fitness and his low defensive contribution, which limited his appearances with the club, although he later became a key creative player in his team's starting line-up due to his discipline, skill, and precise and efficient passing. 
Despite these claims, earlier in his career Valderrama had demonstrated substantial pace, stamina, and defensive competence. Former French defender Laurent Blanc, who played with Valderrama at Montpellier, offered one of the most apt descriptions of Valderrama: "In the fast and furious European game he wasn't always at his ease. He was a natural exponent of 'toque', keeping the ball moving. But he was so gifted that we could give him the ball when we didn't know what else to do with it knowing he wouldn't lose it... and often he would do things that most of us only dream about." Retirement and legacy In February 2004, Valderrama ended his 22-year career in a tribute match at the Metropolitan stadium of Barranquilla, with some of the most important football players of South America, such as Diego Maradona, Enzo Francescoli, Iván Zamorano, and José Luis Chilavert. In 2006, a 22-foot bronze statue of Valderrama, created by Colombian artist Amilkar Ariza, was erected outside Estadio Eduardo Santos in Valderrama's birthplace of Santa Marta. Valderrama was the only Colombian to feature in FIFA's 125 Top Living Football Players list in March 2004. Media Valderrama appeared on the cover of Konami's International Superstar Soccer Pro 98. In the Nintendo 64 version of the game, he is referred to by his nickname, El Pibe. Valderrama has also appeared in EA Sports' FIFA football video game series; he was named one of the Ultimate Team Legend cards in FIFA 15. Coaching career Since retiring from professional football, Valderrama has served as assistant manager of Atlético Junior. On 1 November 2007, Valderrama accused referee Oscar Julian Ruiz of corruption, waving cash in the official's face after he awarded a penalty to América de Cali. Junior lost the match 4–1, which ended the club's hopes of playoff qualification. He later also served as a coach for a | Most Valuable Player, finishing the season with 4 goals and 17 assists. 
He remained with the club for the 1997 season, and also spent a spell on loan back at Deportivo Cali in Colombia, before moving to another MLS side, Miami Fusion, in 1998, where he also remained for two seasons. He returned to Tampa Bay in 2000, spending two more seasons with the club; while he was a member of the Mutiny, the team sold Carlos Valderrama wigs at Tampa Stadium. In the 2000 MLS season, Valderrama recorded the only 20+ assist season in MLS history, ending the season with 26, a single-season assist record that remains intact to this day, and which MLS itself suggested was an "unbreakable" record in a 2012 article. In 2001, Valderrama joined the Colorado Rapids, and remained with the team until 2002, when he retired; his American soccer league career spanned a total of eight years, during which he made 175 appearances. In MLS, Valderrama scored relatively few goals (16) for a midfielder, but is the league's fourth all-time leader in assists (114), after Brad Davis (123), Steve Ralston (135), a former teammate, and Landon Donovan (145). In 2005, he was named to the MLS All-Time Best XI.

International career

Valderrama was a member of the Colombia national football team from 1985 until 1998; he made 111 international appearances, scoring 11 goals, making him the most capped outfield player in the country's history. He represented and captained his national side at the 1990, 1994, and 1998 FIFA World Cups, and also took part in the 1987, 1989, 1991, 1993, and 1995 Copa América tournaments. Valderrama made his international debut on 27 October 1985, in a 3–0 defeat to Paraguay in a 1986 World Cup qualifying match, at the age of 24. In his first major international tournament, he captained Colombia to a third-place finish at the 1987 Copa América in Argentina and was named the tournament's best player; during the tournament he scored the opening goal in Colombia's 2–0 win over Bolivia on 1 July, their first match of the group stage.
Some of Valderrama's most impressive international performances came during the 1990 FIFA World Cup in Italy, during which he served as Colombia's captain. He helped his team to a 2–0 win against the UAE in Colombia's opening match of the group stage, scoring the second goal of the match with a strike from 20 yards. Colombia lost their second match against Yugoslavia, however, and needed at least a draw against the eventual champions West Germany in their final group match in order to advance to the next round of the competition. In the decisive game, German striker Pierre Littbarski scored what appeared to be the winning goal in the 88th minute; however, in the last minute of injury time, Valderrama beat several opposing players and played a crucial left-footed pass to Freddy Rincón, who equalised, sealing a place for Colombia in the second round of the tournament with a 1–1 draw. Colombia were eliminated in the round of 16, following a 2–1 extra-time loss to Cameroon. On 5 September 1993, Valderrama contributed to Colombia's historic 5–0 victory over South American rivals Argentina at the Monumental in Buenos Aires, which allowed them to qualify for the 1994 World Cup. Although much was expected of Valderrama at the World Cup, an injury during a pre-tournament warm-up game put his place in the squad in jeopardy; although he was able to regain match fitness in time for the tournament, Colombia disappointed and suffered a first-round elimination following defeats to Romania and the hosts USA, an exit that has since been attributed in part to internal problems and to threats from cartel groups at the time. Four years later, Valderrama led his nation to qualification for the 1998 World Cup in France, scoring three goals during the qualifying stages.
His impact in the final tournament, at the advancing age of 37, was less decisive, however, and, despite defeating Tunisia, Colombia once again suffered a first-round exit, following a 2–0 defeat against England, which was Valderrama's final international appearance.

Playing style

Although Valderrama is often described as a classic number 10 playmaker, due to his creativity and offensive contribution, in reality he was not a playmaker in the traditional sense. Although he often wore the number 10 shirt throughout his career and was deployed as an attacking midfielder at times, he played mostly in deeper positions in the centre of the pitch – often operating in a free role as a deep-lying playmaker, rather than in more advanced midfield positions behind the forwards – in order to have a greater influence on the game. A team player, Valderrama was also known as an extremely selfless midfielder who preferred assisting his teammates over going for goal himself; his tactical intelligence, positioning, reading of the game, efficient movement, and versatile range of passing enabled him to find space in which to distribute and receive the ball, which allowed him both to set the tempo of his team in midfield with short, first-time exchanges and to create chances with long lobbed passes or through balls. Valderrama's most instantly recognisable physical features were his big afro-blonde hairstyle, jewellery, and moustache, but he was best known for his grace and elegance on the ball, as well as his agility and quick feet as a footballer.
the United States. His daughter Rosa recounted that her father invented the salad at his restaurant Caesar's (at the Hotel Caesar in Tijuana, Mexico) when a Fourth of July rush in 1924 depleted the kitchen's supplies. Cardini made do with what he had, adding the dramatic flair of the table-side tossing "by the chef." Cardini was living in San Diego, but he was also working in Tijuana, where he avoided the restrictions of Prohibition. A number of Cardini's staff have said that they invented the dish. Julia Child said that she had eaten a Caesar salad at Cardini's restaurant when she was a child in the 1920s. In 1946, newspaper columnist Dorothy Kilgallen wrote of a Caesar containing anchovies, differing from Cardini's version:

The big food rage in Hollywood—the Caesar salad—will be introduced to New Yorkers by Gilmore's Steak House. It's an intricate concoction that takes ages to prepare and contains (zowie!) lots of garlic, raw or slightly coddled eggs, croutons, romaine, anchovies, parmeasan [sic] cheese, olive oil, vinegar and plenty of black pepper.

According to Rosa Cardini, the original Caesar salad (unlike his brother Alex's Aviator's salad, which was later renamed to Caesar salad) did not contain pieces of anchovy; the slight anchovy flavor comes from the Worcestershire sauce. Cardini was opposed to using anchovies in his salad. In the 1970s, Cardini's daughter said that the original recipe included whole lettuce leaves, which were meant to be lifted by the stem and eaten with the fingers; coddled eggs; and Italian olive oil. Although the original recipe does not contain anchovies,
"Please make no mystery about it—it was only an idea to put the black kitten on her cousin's shoulder. Nothing deeper." Beaux donated Sita and Sarita to the Musée du Luxembourg, but only after making a copy for herself. Another highly regarded portrait from that period is New England Woman (1895), a nearly all-white oil painting which was purchased by the Pennsylvania Academy of the Fine Arts.

In 1895 Beaux became the first woman to have a regular teaching position at the Pennsylvania Academy of the Fine Arts, where she instructed in portrait drawing and painting for the next twenty years. That rare type of achievement by a woman prompted one local newspaper to state, "It is a legitimate source of pride to Philadelphia that one of its most cherished institutions has made this innovation." She was a popular instructor. In 1896, Beaux returned to France to see a group of her paintings presented at the Salon. Influential French critic M. Henri Rochefort commented, "I am compelled to admit, not without some chagrin, that not one of our female artists…is strong enough to compete with the lady who has given us this year the portrait of Dr. Grier. Composition, flesh, texture, sound drawing—everything is there without affectation, and without seeking for effect."

Cecilia Beaux considered herself a "New Woman", a 19th-century woman who explored educational and career opportunities that had generally been denied to women. In the late 19th century Charles Dana Gibson depicted the "New Woman" in his painting The Reason Dinner was Late, which is "a sympathetic portrayal of artistic aspiration on the part of young women" as she paints a visiting policeman. This "New Woman" was successful, highly trained, and often did not marry; other such women included Ellen Day Hale, Mary Cassatt, Elizabeth Nourse and Elizabeth Coffin. Beaux was a member of Philadelphia's The Plastic Club.
Other members included Elenore Abbott, Jessie Willcox Smith, Violet Oakley, Emily Sartain, and Elizabeth Shippen Green. Many of the women who founded the organization had been students of Howard Pyle. It was founded to provide a means for members to encourage one another professionally and to create opportunities to sell their works of art.

New York

By 1900 the demand for Beaux's work brought clients from Washington, D.C., to Boston, prompting the artist to move to New York City; it was there she spent the winters, while summering at Green Alley, the home and studio she had built in Gloucester, Massachusetts. Beaux's friendship with Richard Gilder, editor-in-chief of the literary magazine The Century, helped promote her career, and he introduced her to the elite of society. Among the portraits which followed from that association are those of Georges Clemenceau; First Lady Edith Roosevelt and her daughter; and Admiral Sir David Beatty. She also sketched President Theodore Roosevelt during her White House visits in 1902, during which "He sat for two hours, talking most of the time, reciting Kipling, and reading scraps of Browning." Her portraits Fanny Travis Cochran, Dorothea and Francesca, and Ernesta and her Little Brother are fine examples of her skill in painting children; Ernesta with Nurse, one of a series of essays in luminous white, was a highly original composition, seemingly without precedent. She became a member of the National Academy of Design in 1902 and won the Logan Medal of the Arts at the Art Institute of Chicago in 1921.

Green Alley

By 1906, Beaux began to live year-round at Green Alley, in a comfortable colony of "cottages" belonging to her wealthy friends and neighbors. All three aunts had died and she needed an emotional break from Philadelphia and New York. She managed to find new subjects for portraiture, working in the mornings and enjoying a leisurely life the rest of the time.
She carefully regulated her energy and her activities to maintain a productive output, and considered that a key to her success. On why so few women succeeded in art as she did, she stated, "Strength is the stumbling block. They (women) are sometimes unable to stand the hard work of it day in and day out. They become tired and cannot reenergize themselves." While Beaux stuck to her portraits of the elite, American art was advancing into urban and social subject matter, led by artists such as Robert Henri, who espoused a totally different aesthetic: "Work with great speed… Have your energies alert, up and active. Do it all in one sitting if you can. In one minute if you can. There is no use delaying… Stop studying water pitchers and bananas and paint everyday life." He advised his students, among them Edward Hopper and Rockwell Kent, to live with the common man and paint the common man, in total opposition to Cecilia Beaux's artistic methods and subjects. The clash between Henri and William Merritt Chase (representing Beaux and the traditional art establishment) resulted in 1907 in the independent exhibition by the urban realists known as "The Eight" or the Ashcan School. Beaux and her art friends defended the old order, and many thought (and hoped) the new movement to be a passing fad, but it turned out to be a revolutionary turn in American art. In 1910, her beloved Uncle Willie died. Though devastated by the loss, at fifty-five years of age Beaux remained highly productive. In the next five years she painted almost 25 percent of her lifetime output and received a steady stream of honors. She had a major exhibition of 35 paintings at the Corcoran Gallery of Art in Washington, D.C., in 1912. Despite her continuing production and accolades, however, Beaux was working against the current of tastes and trends in art. The famed "Armory Show" of 1913 in New York City was a landmark presentation of 1,200 paintings showcasing Modernism.
Beaux believed that the public, initially of mixed opinion about the "new" art, would ultimately reject it and return its favor to the Pre-Impressionists. Beaux was crippled after breaking her hip while walking in Paris in 1924. With her health impaired, her work output dwindled for the remainder of her life. That same year Beaux was asked to produce a self-portrait for the Medici collection in the Uffizi Gallery in Florence. In 1930 she published an autobiography, Background with Figures. Her later life was filled with honors. In 1930 she was elected a member of the National Institute of Arts and Letters; in 1933 came membership in the American Academy of Arts and Letters, which two years later organized the first major retrospective of her work. Also in 1933 Eleanor Roosevelt honored Beaux as "the American woman who had made the greatest contribution to the culture of the world". In 1942 the National Institute of Arts and Letters awarded her a gold medal for lifetime achievement.

Death and critical regard

Cecilia Beaux died at the age of 87 on September 17, 1942, in Gloucester, Massachusetts. She was buried at West Laurel Hill Cemetery in Bala Cynwyd, Pennsylvania. In her will she left a Duncan Phyfe rosewood secretaire made for her father to her cherished nephew Cecil Kent Drinker, a Harvard physician whom she had painted as a young boy. Beaux was included in the 2018 exhibit Women in Paris 1850–1900 at the Clark Art Institute. Though Beaux was an individualist, comparisons to Sargent would prove inevitable, and often favorable. Her strong technique, her perceptive reading of her subjects, and her ability to flatter without falsifying were traits similar to his. "The critics are very enthusiastic. (Bernard) Berenson, Mrs. Coates tells me, stood in front of the portraits – Miss Beaux's three – and wagged his head. 'Ah, yes, I see!' Some Sargents.
The ordinary ones are signed John Sargent, the best are signed Cecilia Beaux, which is, of course, nonsense in more ways than one, but it is part of the generous chorus of praise." Though overshadowed by Mary Cassatt and relatively unknown to museum-goers today, Beaux's craftsmanship and extraordinary output were highly regarded in her time. While presenting the Carnegie Institute's Gold Medal to Beaux in 1899, William Merritt Chase stated, "Miss Beaux is not only the greatest living woman painter, but the best that has ever lived. Miss Beaux has done away entirely with sex [gender] in art." During her long productive life as an artist, she maintained her personal aesthetic and high standards against all distractions and countervailing forces. She constantly struggled for perfection. "A perfect technique in anything," she stated in an interview, "means that there has been no break in continuity between the conception and the act of performance." She summed up her driving work ethic: "I can say this: When I attempt anything, I have a passionate determination to overcome every obstacle… And I do my own work with a refusal to accept defeat that might almost be called painful."
control. The 1932 Chryslers introduced the Floating Power rubber engine mounts, which eliminated further vibrations from the chassis. A vacuum-controlled automatic clutch, Oilite bearings, and the first universal joints with roller bearings were also added. In 1933 Chrysler models received a host of new improvements, including a new three-speed manual transmission that used helical gears for silent operation. Chrysler engines received new alloy valve seats for better reliability, along with new spring shackles which improved lubrication. In 1934 the Chrysler 6 introduced an independent front coil-spring suspension and received vent windows that rolled down with the side glass. Chrysler also introduced its revolutionary Chrysler Airflow, which included a welded unibody and a wind-tunnel-designed aerodynamic body for a better power-to-weight ratio and better handling. In 1935 Chrysler introduced the Plymouth-based Chrysler Airstream Six, which gave customers an economical, modern alternative to the radically styled Airflows. The Airflow received an updated front hood and grille for 1935. For 1936, the Chrysler Airflow received an enlarged luggage compartment, a new roof, and a new adjustable front seat. The Airstream Six and Eight of the previous year were renamed the Chrysler Six and Deluxe Eight. Automatic overdrive was optional on both cars. For 1937 the Airflow cars were discontinued except for the C-17 Airflow, which received a final facelift. Only 4,600 C-17 Airflows were built for 1937. The Chrysler Six and Chrysler Eight were renamed the Royal and Imperial, respectively, and gained isolated rubber body mounts to reduce road vibrations. In 1938 the Chrysler Royal received the new 95 hp Gold Seal inline six. For 1939 Chrysler unveiled Superfinish, a process in which all major chassis components subject to wear were finished to a mirror-like surface. Other features new to Chrysler were push-button door locks and rotary-type door latches.
1940s

For 1940 Chrysler introduced sealed-beam headlights on its cars, which improved night visibility by 50%. Mid-year in 1940 Chrysler introduced the Highlander, a special edition featuring popular options and a Scottish plaid interior. The luxury sport model, called the Saratoga, was also added to the New Yorker range as the Imperial became the exclusive limousine model. In 1941 Chrysler introduced the Fluid Drive semi-automatic transmission. 1942 Chryslers were redesigned with a wrap-around chrome grille and concealed running boards for this abbreviated model year; civilian production stopped by February 1942. For 1946 Chrysler redesigned the 1942 cars and reintroduced the Town & Country. For 1949 Chrysler came out with its first all-new redesign in almost a decade; the ignition moved to key only instead of a key and push-button, and the nine-passenger station wagon body style returned to the line.

1950s

For 1950 Chrysler updated the overly conservative 1949 models by lowering the cars slightly, simplifying the grille, replacing the chrome-fin tail lamps with flush units, and removing the third brake light from the trunk lid. Also in 1950, Chrysler introduced disc brakes on the Imperial, the new Chrysler Newport hardtop, power windows, and the padded safety dash. Chrysler introduced its first overhead-valve, high-compression V8 engine in 1951. Displacing 331 cubic inches, it was rated at 180 bhp, 20 more horsepower than the new-for-1949 Cadillac V8, and it was unique as the only American V8 engine designed with hemispherical combustion chambers. After success in the Mexican Road Races, the engine was upgraded to 250 bhp by 1955. Although Chrysler didn't build a small sporty car (such as the Chevrolet Corvette and the Ford Thunderbird), they decided to build a unique sporting car based on the New Yorker hardtop coupe, featuring a 300-bhp "Hemi" V8.
To add to the car's uniqueness, it was given a grille from the Imperial and side trim from the less-adorned Windsor. A PowerFlite two-speed automatic transmission was the only available gearbox. It was marketed as the Chrysler 300, emphasizing the engine's horsepower and continuing the luxury sport approach introduced earlier with the Chrysler Saratoga. A 1955 restyle by newly hired Virgil Exner saw a dramatic rise in Chrysler sales, which rose even more in 1957, when the entire line was dramatically restyled a second time with a sloping front end and high-flying tailfins at the rear. Although the cars were well received at first, it soon became apparent that quality control had been compromised to get them to market on an accelerated schedule. In 1957 all Chrysler products received Torsion-Aire front suspension, a torsion-bar arrangement for the front wheels only, introduced two years after Packard installed its Torsion-Level suspension, which acted on both the front and rear wheels. Sales of all Chrysler models plummeted in 1958 and 1959 despite improvements in quality. Throughout the mid- and late 1950s, Chryslers were available in the top-line New Yorker, mid-line Saratoga, and base Windsor series. Exner's designs for the Chrysler brand in the early 1960s were overblown versions of the late-1950s cars, and they did not help sales. Exner left his post by 1962, leaving Elwood Engel, a recent transfer from Ford Motor Co., in charge of Chrysler styling.

1960s

Although early-1960s Chrysler cars reflected Virgil Exner's exaggerated styling, Elwood Engel's influence was evident as early as 1963, when a restyled, trimmer, boxier Chrysler was introduced. The DeSoto line, along with the Windsor and Saratoga series, was replaced by the Newport, while the New Yorker continued as the luxury model and Imperial remained the top-of-the-line brand.
The Chrysler 300, officially part of the New Yorker product line, continued in production as a high-performance coupe through 1965, adding a different letter of the alphabet for each year of production, starting with the 300-B of 1956 and ending with the 300-L of 1965. 1962 saw a lower-priced "non-letter" 300 with downgraded standard equipment. The 1965 Chryslers were again dramatically restyled, with a thoroughly modern unit body and larger engines of up to 440 cubic inches. They were squared off and slab-sided, with optional glass-covered headlamps that retracted when the headlights were turned on and a swept-back roofline on two-door hardtop models. Chryslers through the 1960s were well-built, quality cars with innovative features such as unit bodies and front torsion-bar suspension. In 1963, Bob Hope was a spokesperson for The Chrysler Theatre, the same year the Chrysler Turbine Car was introduced. 1970s The Cordoba was introduced by Chrysler for the 1975 model year as an upscale personal luxury car that replaced the 300, competing with the Oldsmobile Cutlass, Buick Regal, and Mercury Cougar. The Cordoba was originally intended to be a Plymouth; the names Mirada, Premier, Sebring, and Grand Era were associated with the project, and all except Grand Era would be used on later Chrysler, Dodge, and Eagle vehicles, though only the Dodge Mirada would be related to the Cordoba. However, losses from the newly introduced full-size C-body models during the 1973 oil crisis, along with the investment in the Turbine Car that never yielded a product to sell, encouraged Chrysler executives to seek higher profits by marketing the model under the more upscale Chrysler brand. The car was a success, with over 150,000 examples sold in 1975, a sales year that was otherwise dismal for the company. For the 1976 model year, sales increased slightly to 165,000. The mildly revised 1977 version also sold well, at just under 140,000 cars.
The success of the Chrysler nameplate strategy contrasts with the sales of its similar and somewhat cheaper corporate cousin, the Dodge Charger SE. Cordoba interiors were more luxurious than the Charger SE's, and much more so than those of the top-line standard intermediates (Plymouth Fury, Dodge Coronet), with a velour cloth notchback bench seat and folding armrest standard. Optionally available were bucket seats upholstered in Corinthian leather with a center armrest and cushion or, at extra cost, a center console with floor shifter and storage compartment. In 1977, Chrysler brought out a new mid-size line of cars called LeBaron (a name previously used for an Imperial model), which included a coupe, sedan, and station wagon. 1980s For 1982, the LeBaron moved to the front-wheel-drive Chrysler K platform, where it was the upscale brand's lowest-priced offering. It was initially available only in sedan and coupe versions. In early 1982, a convertible version was released, bringing to market the first factory-built open-topped domestic vehicle since the 1976 Cadillac Eldorado. A station wagon version called the Town and Country was added as well. A special Town and Country convertible was also made from 1983 to 1986 in limited quantities (1,105 total), which, like the wagon, featured simulated wood paneling that made it resemble the original 1940s Town and Country. This model was part of the well-equipped Mark Cross option package in its later years. In 1982, the R-body line was discontinued and the New Yorker nameplate transferred to the smaller M-body line. Up to this point, the Chrysler M-body entry had been sold as the LeBaron, but that name was moved to a new K-car-based FWD line (refer to the Chrysler LeBaron article for information on the 1977-81 M-bodies). Following the nameplate swap, the M-body line was consolidated and simplified: the 360 V8 engine was gone, as were the coupes and station wagons (the K-car LeBaron's coupe and wagon replaced them).
The Fifth Avenue option was still available as a $1,244 option package. It was adapted from the earlier LeBaron's package, with a distinctive vinyl roof, electro-luminescent opera lamps, and a rear fascia adapted from the Dodge Diplomat. Interiors featured button-tufted, pillow-soft seats covered in either "Kimberley velvet" or "Corinthian leather", choices that would continue unchanged throughout the car's run. In addition, the carpet was thicker than that offered in the base New Yorker, Diplomat, and Gran Fury/Caravelle Salon, and the interior had more chrome trim. 1983 was the last year for Chrysler's Cordoba coupe. Also in 1983, Chrysler introduced a new front-wheel-drive New Yorker model based on a stretched K-car platform. Additionally, a less expensive, less equipped version of the new New Yorker was sold as the Chrysler E-Class in 1983 and 1984. More upscale stretched K-car models were also sold as Chrysler Executive sedans and limousines. For 1984, the New Yorker Fifth Avenue was renamed simply Fifth Avenue, setting the name that would continue for six successful years. All Fifth Avenues from 1984 to 1989 were powered by a 5.2 L (318 in³) V8 engine, with either a two-barrel carburetor (in all states except California) or a four-barrel (in California), mated to Chrysler's well-known TorqueFlite three-speed automatic transmission. Fifth Avenue production was moved from Windsor, Ontario, to St. Louis, Missouri. From late 1986 through the 1989 model year, the cars were manufactured at the American Motors plant in Kenosha, Wisconsin (purchased by Chrysler in 1987). The Fifth Avenue also far outsold its Dodge Diplomat and Plymouth Gran Fury siblings, with a much greater proportion of sales going to private customers, despite its higher price tag. Production peaked at 118,000 cars for 1986, and the Fifth Avenue stood out in a by-now K-car-dominated lineup as Chrysler's lone concession to traditional rear-wheel-drive American sedans.
Chrysler introduced a new mid-size four-door hatchback model for 1985 under the LeBaron GTS nameplate. It was sold alongside the mid-size LeBaron sedan, coupe, convertible, and station wagon. The LeBaron coupe and convertible were redesigned for 1987. Unlike previous LeBarons, the new coupe and convertible had unique styling instead of being two-door versions of the sedan. The new design featured hidden headlamps (through 1992) and full-width taillights. The New Yorker was redesigned for the 1988 model year and now included a standard V6 engine. This generation of the New Yorker also saw the return of hidden headlamps, which had not been available on the New Yorker since the 1981 R-body version. In 1989, Chrysler brought out the TC by Maserati, a luxury roadster developed as a joint venture between Chrysler and Maserati and positioned as a more affordable alternative to Cadillac's Allante. 1990s Chrysler reintroduced the Town & Country nameplate in calendar year 1989 as a luxury rebadged variant of the Dodge Grand Caravan/Plymouth Grand Voyager minivan for the 1990 model year. This incarnation of the Town & Country was sold through the end of the 2016 model year, when Chrysler revived the Pacifica nameplate for its minivan for the 2017 model year. 1990 saw the previous relationship between the New Yorker and Fifth Avenue return, as the Fifth Avenue became a model of the New Yorker. There was some substantive difference, however, as the New Yorker Fifth Avenue used a slightly longer chassis than the standard car. The New Yorker Fifth Avenue's larger interior volume classified it as a full-size model this time, despite its smaller exterior dimensions than the first generation's. For 1990, Chrysler's new 3.3 L V6 engine was the standard and only choice, teamed with the company's A-604 four-speed electronic automatic transaxle. Beginning in 1991, a larger 3.8 L V6 became optional.
It delivered the same 147 horsepower as the 3.3 but had more torque. The New Yorker Fifth Avenue's famous seats, long noted for their button-tufted appearance and sofa-like comfort, continued to be offered with the customer's choice of velour or leather, with the former "Corinthian leather" replaced by leather from the Mark Cross company. Leather-equipped cars bore the Mark Cross logo on the seats and, externally, on an emblem attached to the brushed aluminum band ahead of the rear-door opera windows. In this form, the New Yorker Fifth Avenue resembled the newly revived Chrysler Imperial, although some much-needed distinction was provided between the cars when the New Yorker Fifth Avenue (along with its New Yorker Salon linemate) received restyled, rounded-off front and rear ends for the 1992 model year, while the Imperial continued in its original crisply lined form. The early 1990s saw a revival of the Imperial as a high-end sedan in Chrysler's lineup. Unlike the 1955-1983 Imperial, this car was a model of Chrysler, not its own marque. Based on the Y platform, it represented the top full-size model in Chrysler's lineup; below it was the similar New Yorker Fifth Avenue, and below that the shorter-wheelbase New Yorker. The reintroduction of the Imperial came two years after the Lincoln Continental was changed to a front-wheel-drive sedan with a V6 engine. Other domestic competitors in this segment included the Cadillac Sedan de Ville/Fleetwood, Oldsmobile 98, and Buick Electra/Park Avenue. Though closely related, the Imperial differed from the New Yorker Fifth Avenue in many ways. The Imperial's nose was more wedge-shaped, while the New Yorker Fifth Avenue's had a sharper, more angular profile (the New Yorker Fifth Avenue was later restyled with a more rounded front end). The rears of the two cars also differed: like its front, the New Yorker Fifth Avenue's rear came to sharper angles, while the Imperial's rear end had more rounded edges.
Also found on the Imperial were full-width taillights, similar to those of the Chrysler TC and the early-1980s Imperial coupe, while the New Yorker Fifth Avenue came with smaller vertical taillights. Initially, the 1990 Imperial was powered by the 3.3 L EGA V6 engine. For 1991, the 3.3 L V6 was replaced by the larger 3.8 L EGH V6; although horsepower increased only slightly, the new engine delivered more torque, peaking at 2750 rpm. A four-speed automatic transmission was standard with both engines. Also new for 1990 was a redesigned LeBaron sedan, which offered a standard V6 engine; later models would also be available with four-cylinder engines. The Town & Country minivan was restyled for 1991 in conjunction with the restyling of the Dodge and Plymouth minivan models. 1991 would also be the last year for the TC by Maserati, leaving the LeBaron as the brand's sole coupe and convertible. The first generation of the Chrysler Concorde debuted at the 1992 North American International Auto Show in Detroit as a 1993 model. It debuted as a single, well-equipped model with a base price of US$18,341. Of all the LH sedans, the first-generation Concorde was most closely related to the Eagle Vision, though the Concorde was given a more traditional image than the Vision. The two shared nearly all sheetmetal, with the main differences limited to their grilles, rear fascias, body-side moldings, and wheel choices. The Concorde featured a modern take on Chrysler's signature waterfall grille, split into six sections divided by body-colored strips with the Chrysler Pentastar logo on the center strip. The Concorde's rear fascia was highlighted by a full-width, full-height lightbar between the taillights, giving the appearance that the taillights stretched across the entire trunk.
In keeping with its upscale position, the Concorde's body-side moldings incorporated bright chrome (later golden-colored) work not found on its Dodge or Eagle siblings. On Concordes with gray lower-body paint, the gray came all the way up to the chrome beltline; on Visions, the gray lower-body paint area was smaller and much more subtle. Wheel styles, which included available aluminum wheels with a Spiralcast design, were also unique to the Chrysler LH sedans (Concorde, LHS, New Yorker); Dodge and Eagle had their own wheel styles. Introduced in May 1993 for the 1994 model year, the Chrysler LHS was the top-of-the-line model for the division, as well as the most expensive of the Chrysler LH platform cars. All the LH-series models shared a wheelbase and were developed using Chrysler's new computer drafting system. The car was differentiated from the division's New Yorker sedan by its leather bucket seats (the New Yorker had a bench seat) and standard features, such as alloy wheels, that were options on the New Yorker. Further differences between the Chrysler LHS and its New Yorker counterpart were a floor console and shifter, five-passenger seating, a lack of chrome trim, an upgraded interior, and a sportier image. The New Yorker was dropped after the 1996 model year in favor of a six-passenger option on the LHS. The LHS received a minor front-end change in 1995, when the corporate-wide Pentastar emblem was replaced with the revived Chrysler brand emblem. Standard features of the LHS included a 3.5 L EGE 24-valve V6 engine, body-colored grille, side mirrors and trim, traction control, aluminum wheels, integrated fog lights, eight-way power-adjustable front seats, premium sound systems with amplifiers, and automatic temperature control. Unlike the New Yorker, leather seats were standard.
The final generation of the New Yorker continued with front-wheel drive on an elongated version of the new Chrysler LH platform and was released in May 1993, along with the nearly identical Chrysler LHS, as an early 1994 model, eight months after the original LH cars (the Chrysler Concorde, Dodge Intrepid, and Eagle Vision) were introduced. The New Yorker came standard with the 3.5 L EGE V6. Chrysler gave the New Yorker a more "traditional American" luxury image and the LHS a more European performance image (as was done with the Eagle Vision). Little separated the New Yorker from the LHS in appearance: the New Yorker's chrome hood trim, body-color cladding, standard chrome wheel covers and 15-inch wheels, column shifter, and front bench seat were the only noticeable differences. An option provided 16-inch wheels and a firmer "touring" suspension, which eliminated the technical differences between the New Yorker and LHS. The LHS came with almost all of the New Yorker's optional features as standard equipment and featured the firmer-tuned suspension, in keeping with its more European image. During the 1994 model run, various changes were made to the New Yorker. On the outside, the New Yorker switched to accent-color body cladding, whereas the LHS received body-color cladding; this change aligned the New Yorker with the Chrysler Concorde, which also had accent-color cladding. For the sake of enhanced stability, 16-inch wheels became standard and the 15-inch versions were dropped. Likewise, the touring-suspension option available on early 1994 New Yorker models was discontinued, leaving only the "ride-tuned" suspension. In 1995, the Chrysler Sebring was introduced as a coupe, replacing the LeBaron coupe, and the new JA-platform Chrysler Cirrus replaced the outgoing LeBaron sedan. A year later, a convertible version of the Sebring went on the market and replaced the LeBaron convertible.
In 1999, Chrysler introduced the new LH-platform 300M sedan alongside a redesigned LHS. The 300M was originally designed to be the next-generation Eagle Vision, but since the Eagle brand had been discontinued in 1998, it instead became a Chrysler sedan. 2000s In 2000, the Voyager and Grand Voyager minivans were repositioned as Chrysler models due to the phasing out of the Plymouth brand. In 2001, a sedan was added to the Sebring model line and served as a replacement for the discontinued Cirrus. That same year, the Chrysler brand added the retro-styled PT Cruiser as well as the Prowler roadster, which had previously been a Plymouth model. By 2004, all Chrysler brand minivans were sold under the Town & Country nameplate. The 2000s also saw the Chrysler brand move into the fast-growing crossover/SUV segment with the introduction of the Chrysler Pacifica crossover in 2004 and the Chrysler Aspen SUV in 2007. The Pacifica was discontinued in 2008 (the nameplate would return on a new minivan model in 2017) and the Aspen in 2009. Between 2004 and 2008, Chrysler offered a two-seat coupe and convertible model called the Crossfire. This was in addition to Chrysler's five-seat Sebring coupe (through 2005) and four-seat convertible being sold at the time. In 2005, Chrysler introduced the LX-platform Chrysler 300 sedan, which replaced both the 300M and the Concorde. It was the brand's first rear-wheel-drive sedan since the discontinuation of the Chrysler Fifth Avenue in 1989, and the first time since 1989 that a Chrysler sedan was available with a V8 engine. Owners of EFI-equipped Chryslers were so dissatisfied that all but one of the cars were retrofitted with carburetors (that one has since been completely restored, with the original EFI electronic problems resolved).
Imperial would see new body styles introduced every two to three years, all with V8 engines and automatic transmissions, as well as technologies that would filter down to Chrysler Corporation's other models. Imperial was folded back into the Chrysler brand in 1971. The Valiant was also introduced for 1960 as a distinct brand. In the U.S. market, Valiant was made a model in the Plymouth line for 1961, and the DeSoto make was discontinued in 1961. With those exceptions per applicable year and market, Chrysler's range from lowest to highest price from the 1940s through the 1970s was Valiant, Plymouth, Dodge, DeSoto, Chrysler, and Imperial. From 1963 through 1969, Chrysler increased its existing stakes to take full control of the French Simca, British Rootes, and Spanish Barreiros companies, merging them into Chrysler Europe in 1967. In the 1970s, an engineering partnership was established with Mitsubishi Motors, and Chrysler began selling Mitsubishi vehicles branded as Dodge and Plymouth in North America. Chrysler struggled to adapt to the changing environment of the 1970s. When consumer tastes shifted to smaller cars in the early 1970s, particularly after the 1973 oil crisis, Chrysler could not meet the demand, although its compact models on the "A" body platform, the Dodge Dart and Plymouth Valiant, had proven economy and reliability and sold very well. Additional burdens came from increased U.S. import competition and tougher government regulation of car safety, fuel economy, and emissions. As the smallest of the Big Three U.S. automakers, Chrysler lacked the financial resources to meet all of these challenges. In 1976, with the demise of the reliable Dart/Valiant, quality control declined. Their replacements, the Dodge Aspen and Plymouth Volare, were comfortable and had good roadability, but owners soon experienced major reliability problems that crept into other models as well: engines failed or did not run well, and premature rust plagued bodies.
In 1978, Lee Iacocca was brought in to turn the company around, and in 1979 he sought U.S. government help. Congress later passed the Loan Guarantee Act, providing $1.5 billion in loan guarantees. The act required that Chrysler also obtain $2 billion in concessions or aid from sources outside the federal government, which included interest-rate reductions worth $650 million of the savings, asset sales of $300 million, local and state tax concessions of $250 million, and wage reductions of about $590 million, along with a $50 million stock offering; $180 million was to come from concessions from dealers and suppliers. After a period of plant closures and salary cuts agreed to by both management and the auto unions, the loans were repaid with interest in 1983. In November 1983, the Dodge Caravan/Plymouth Voyager was introduced, establishing the minivan as a major category and initiating Chrysler's return to stability. In 1985, Diamond-Star Motors was created, further expanding the Chrysler-Mitsubishi relationship. That year, Chrysler also entered an agreement with American Motors Corporation (AMC) to produce Chrysler's rear-drive M-platform cars, as well as the front-wheel-drive Dodge Omni, in AMC's Kenosha, Wisconsin plant. In 1987, Chrysler acquired AMC, which brought the profitable Jeep brand under the Chrysler umbrella. Chrysler first acquired the 47% ownership of AMC held by Renault; the remaining outstanding shares were bought on the NYSE by August 5, 1987, making the deal worth somewhere between US$1.7 billion and US$2 billion, depending on how costs were counted. Chrysler CEO Lee Iacocca wanted the Jeep brand, particularly the Jeep Grand Cherokee (ZJ) then under development, the new world-class manufacturing plant in Bramalea, Ontario, and AMC's engineering and management talent, which became critical to Chrysler's future success.
Chrysler established the Jeep/Eagle division as a "specialty" arm to market products distinctly different from the K-car-based products, with the Eagle cars targeting import buyers. Former AMC dealers sold Jeep vehicles and various new Eagle models, as well as Chrysler products, strengthening the automaker's retail distribution system. Eurostar, a joint venture between Chrysler and Steyr-Daimler-Puch, began producing the Chrysler Voyager in Austria for European markets in 1992. 1998–2007: DaimlerChrysler In 1998, Chrysler and its subsidiaries entered into a partnership dubbed a "merger of equals" with German-based Daimler-Benz AG, creating the combined entity DaimlerChrysler AG. To the surprise of many stockholders, Daimler acquired Chrysler in a stock swap before Chrysler CEO Bob Eaton retired. It is widely accepted that the merger was needed because of Eaton's failure in the 1990s to position Chrysler to become a global automotive company on its own. Under DaimlerChrysler, the company was named DaimlerChrysler Motors Company LLC, with its U.S. operations generally called "DCX". The Eagle brand was retired soon after Chrysler's merger with Daimler-Benz in 1998; Jeep became a stand-alone division, and efforts were made to merge the Chrysler and Jeep brands as one sales unit. In 2001, the Plymouth brand was also discontinued. Eurostar also built the Chrysler PT Cruiser in 2001 and 2002. The Austrian venture was sold to Magna International in 2002 and became Magna Steyr. The Voyager continued in production there until 2007, and the Chrysler 300C, Jeep Grand Cherokee, and Jeep Commander were also built at the plant from 2005 to 2010. On May 14, 2007, DaimlerChrysler announced the sale of 80.1% of Chrysler Group to the American private equity firm Cerberus Capital Management, L.P., thereafter known as Chrysler LLC, although Daimler (renamed Daimler AG) continued to hold a 19.9% stake.
2007–2014: Effects of the Great Recession The economic collapse of 2007 to 2009 pushed the fragile company to the brink. On April 30, 2009, the automaker filed for Chapter 11 bankruptcy protection to be able to operate as a going concern while renegotiating its debt structure and other obligations, which resulted in the corporation defaulting on over $4 billion in secured debts. The U.S. government described the company's action as a "prepackaged surgical bankruptcy". On June 10, 2009, substantially all of Chrysler's assets were sold to "New Chrysler", organized as Chrysler Group LLC. The federal government supported the deal with US$8 billion in financing at nearly 21% interest. Under CEO Sergio Marchionne, "World Class Manufacturing" (WCM), a system of thorough manufacturing quality, was introduced, and several products were relaunched with improved quality and luxury. The 2010 Jeep Grand Cherokee very soon became the most awarded SUV ever. The Ram, Jeep, Dodge, SRT, and Chrysler divisions were separated to focus on their own identities and brands, and 11 major model refreshes occurred in 21 months. The PT Cruiser, Nitro, Liberty, and Caliber models (created during the DCX era) were discontinued. On May 24, 2011, Chrysler repaid its $7.6 billion in loans to the United States and Canadian governments. The U.S. Treasury, through the Troubled Asset Relief Program (TARP), invested $12.5 billion in Chrysler and recovered $11.2 billion when the company shares were sold in May 2011, resulting in a $1.3 billion loss. On July 21, 2011, Fiat bought the Chrysler shares held by the U.S. Treasury. The purchase made Chrysler foreign-owned again, this time as the luxury division. The Chrysler 300 was badged as the Lancia Thema in some European markets (with additional engine options), giving Lancia a much-needed replacement for its flagship. 2014–2021: Fiat Chrysler Automobiles On January 21, 2014, Fiat bought the remaining shares of Chrysler owned by the VEBA, worth $3.65 billion.
Several days later, the intended reorganization of Fiat and Chrysler under a new holding company, Fiat Chrysler Automobiles, together with a new FCA logo, was announced. The most challenging launch for the new company came immediately, in January 2014, with a completely redesigned Chrysler 200, the first vehicle executed by the fully integrated company on a global compact-wide platform. On December 16, 2014, Chrysler Group LLC announced a name change to FCA US LLC. On January 12, 2017, FCA shares traded on the New York Stock Exchange lost value after the EPA accused FCA US of using emissions-cheating software to evade diesel-emissions tests; the company countered the accusations, and chairman and CEO Sergio Marchionne sternly rejected them. The following day, shares rose as investors played down the effect of the accusations. Analysts gave estimates of potential fines ranging from several hundred million dollars to $4 billion, although the likelihood of a hefty fine was considered low. Senior United States Senator Bill Nelson urged the FTC to look into possible deceptive marketing of the company's diesel-powered SUVs; shares dropped 2.2% after that announcement. On July 21, 2018, Sergio Marchionne stepped down as chairman and CEO for health reasons and was replaced by John Elkann and Michael Manley, respectively. As a result of ending domestic production of more fuel-efficient passenger automobiles such as the Dodge Dart and Chrysler 200 sedans, FCA US elected to pay $77 million in fines for violating the anti-backsliding provision of fuel-economy standards set under the Energy Independence and Security Act of 2007 for its model-year 2016 fleet. It was fined again for the 2017 model year for not meeting the minimum domestic passenger-car standard; FCA described that $79 million civil penalty as "not expected to have a material impact on its business."
As part of a January 2019 settlement, Fiat Chrysler agreed to recall and repair approximately 100,000 automobiles equipped with a 3.0-liter V6 EcoDiesel engine having a prohibited defeat device, pay $311 million in total civil penalties to U.S. regulators and CARB, pay $72.5 million in state civil penalties, implement corporate governance reforms, and pay $33.5 million to mitigate excess pollution. The company also agreed to pay affected consumers up to $280 million and offer extended warranties on such vehicles worth $105 million. The total value of the settlement is about $800 million, though FCA did not admit liability, and the settlement did not resolve an ongoing criminal investigation. Corporate governance Management positions of Stellantis North America include: Board of directors: Mark Stewart, COO; Michael J. Keegan, chief audit, sustainability and compliance officer; Richard Palmer, CFO. Management team: Jeffrey Kommor, head of US sales; Lottie Holland, head of diversity, inclusion and engagement, FCA - North America; Bruno Cattori, president and CEO, FCA Mexico, S.A. de C.V.; Mark Champine, head of quality, FCA - North America; Mark Chernoby, chief technical compliance officer, Stellantis N.V.; Martin Horneck, head of purchasing and supply chain management, FCA - North America; Mamatha Chamarthi, chief information officer, FCA - North America and Asia Pacific; Marissa Hunter, head of marketing; Philip Langley, head of network development, FCA - North America; Ralph Gilles, head of design; Michael Resha, head of manufacturing, FCA - North America; Roger "Shane" Karr, head of external affairs, FCA - North America; Michael J. Keegan, chief audit, sustainability and compliance officer; Michael Koval Jr., brand chief executive officer, Ram Trucks; Timothy Kuniskis, brand chief executive officer, Chrysler (interim) and Dodge; Jim Morisson, head of Jeep brand, FCA - North America; João Laranjo, chief financial officer, FCA - North America; Michael Bly, head of global propulsion systems, Stellantis N.V.; Jeffrey P. Lux, head of transmission powertrain, FCA - North America; Chris Pardi, general counsel and corporate secretary, FCA - North America; Barbara J. Pilarski, head of business development, FCA - North America; Mark Stewart, chief operating officer; Scott Thiele, head of portfolio planning, FCA - North America, and head of global long-range plan coordination; Joseph Veltri, head of investor relations; Rob Wichman, ad interim head of product development, FCA - North America; Larry Dominique, senior vice president, Alfa Romeo - North America; Christopher G. Fields, vice president, U.S. employee relations. Sales and marketing United States sales Chrysler is the smallest of the "Big Three" U.S. automakers (Stellantis North America, Ford Motor Company, and General Motors). In 2020, FCA US sold just over 1.8 million vehicles. Global sales Chrysler was the world's 11th-largest vehicle manufacturer as ranked by OICA in 2012. Total Chrysler vehicle production was about 2.37 million that year. Marketing Lifetime powertrain warranty In 2007, Chrysler began to offer a lifetime powertrain warranty for the first registered owner or retail lessee. The deal covered the owner or lessee in the U.S., Puerto Rico, and the Virgin Islands for 2009 model-year vehicles, and for 2006, 2007, and 2008 model-year vehicles purchased on or after July 26, 2007. Covered vehicles excluded SRT models, diesel vehicles, Sprinter models, the Ram Chassis Cab, hybrid-system components (including the transmission), and certain fleet vehicles. The warranty was non-transferable.
After Chrysler's restructuring, the warranty program was replaced by a five-year/100,000-mile transferable warranty for 2010 and later vehicles. "Let's Refuel America" In 2008, in response to customer feedback citing the prospect of rising gas prices as a top concern, Chrysler launched the "Let's Refuel America" incentive campaign, which guaranteed new-car buyers a gasoline price of $2.99 per gallon for three years. With the U.S. purchase of eligible Chrysler, Jeep, and Dodge vehicles, customers could enroll in the program and receive a gas card that immediately lowered their gas price to $2.99 a gallon and kept it there for three years. Lancia co-branding Chrysler planned for Lancia to co-develop products, with some vehicles being shared. Olivier Francois, Lancia's CEO, was appointed to the Chrysler division in October 2009. Francois planned to reestablish Chrysler as an upscale brand. Ram trucks In October 2009, Dodge's car and truck lines were separated, with the name "Dodge" used for cars, minivans, and crossovers and "Ram" for light- and medium-duty trucks and other commercial-use vehicles. "Imported From Detroit" In 2011, Chrysler unveiled its "Imported From Detroit" campaign with ads featuring Detroit rapper Eminem, one of which aired during the Super Bowl. The campaign highlighted the rejuvenation of the entire product lineup, which included the new, redesigned, and repackaged 2011 200 sedan and 200 convertible, the Chrysler 300 sedan, and the Chrysler Town & Country minivan. As part of the campaign, Chrysler sold a line of clothing items featuring the Monument to Joe Louis, with proceeds funneled to Detroit-area charities, including the Boys and Girls Clubs of Southeast Michigan, Habitat for Humanity Detroit, and the Marshall Mathers Foundation.
Following the Eminem ad, there was also an ad featuring Detroit Lions defensive tackle Ndamukong Suh driving a Chrysler 300 to Portland, Oregon, to visit his mother, and an ad featuring Detroit-born fashion designer John Varvatos cruising through a shadowy Gotham while Kevin Yon's familiar baritone traces the designer's genesis. In March 2011, Chrysler Group LLC filed a lawsuit against Moda Group LLC (owner of the Pure Detroit clothing retailer) for copying and selling merchandise with the "Imported from Detroit" slogan. Chrysler claimed it had notified the defendant of its pending trademark application on February 14, but the defendant argued Chrysler had not secured a trademark for the "Imported From Detroit" phrase. On June 18, 2011, U.S. District Judge Arthur Tarnow ruled that Chrysler's request did not show that it would suffer irreparable harm or that it had a strong likelihood of winning its case; therefore, Pure Detroit's owner, Detroit retailer Moda Group LLC, could continue selling its "Imported from Detroit" products. Tarnow also noted that Chrysler does not have a trademark on "Imported from Detroit" and rejected the automaker's argument that trademark law is not applicable to the case. In March 2012, Chrysler Group LLC and Pure Detroit agreed to a March 27 mediation to try to settle the lawsuit over the clothing company's use of the "Imported from Detroit" slogan. Pure Detroit stated that Chrysler had made false claims about the origins of three vehicles - the Chrysler 200, Chrysler 300 and Chrysler Town & Country - none of which is built in Detroit. Pure Detroit also said that Chrysler's Imported From Detroit merchandise was not being made in Detroit. In 2012, Chrysler and Pure Detroit reached an undisclosed settlement. Chrysler's Jefferson North Assembly, which makes the Jeep Grand Cherokee and Dodge Durango, is the only car manufacturing plant of any company remaining entirely in Detroit (General Motors operates a plant which is partly in Detroit and partly in Hamtramck).
In 2011, Eminem settled a lawsuit against Audi alleging that the Audi A6 Avant ad had copied the Chrysler 300 Super Bowl commercial.

"Halftime in America"
Again in 2012, Chrysler advertised during the Super Bowl. Its two-minute February 5, 2012 Super Bowl XLVI advertisement was titled "Halftime in America". The ad drew criticism from several leading U.S. conservatives, who suggested that its messaging implied that President Barack Obama deserved a second term and, as such, was political payback for Obama's support for the federal bailout of the company. Asked about the criticism in a 60 Minutes interview with Steve Kroft, Sergio Marchionne responded, "just to rectify the record I paid back the loans at 19.7% interest. I don't think I committed to do a commercial on top of that", and characterized the Republican reaction as "unnecessary and out of place".

America's Import
In 2014, Chrysler began using a new slogan, "America's Import", in ads introducing the all-new 2015 Chrysler 200, targeting foreign automakers from Germany to Japan (German performance, Japanese quality); selected ads ended with the line "We Built This", emphasizing that the car is built in America rather than overseas.

Slogans
Engineered to the Power of Cars (1998–2001)
Drive = Love (2002–2004)
Inspiration comes standard (2004–2007)
Engineered Beautifully (2007–mid 2010)
Imported From Detroit (2011–2014)
America's Import (2014–2016)

Product line

Mopar
Mopar: replacement parts for Chrysler-built vehicles, as well as a brand for dealer service and customer service operations.
Mopar Performance: a subdivision providing performance aftermarket parts for Chrysler-built vehicles.

Chrysler Uconnect
First introduced as MyGig, Chrysler Uconnect is a system that brings interactive ability to the in-car radio and telemetric-like controls to car settings. As of mid-2015, it is installed in hundreds of thousands of Fiat Chrysler vehicles.
It connects to the Internet via the mobile network of AT&T, providing the car with its own IP address. Internet connectivity using any Chrysler, Dodge, Jeep or Ram vehicle, via a Wi-Fi "hot-spot", is also available via Uconnect Web. According to Chrysler LLC, the hotspot range extends approximately from the vehicle in all directions, and combines both Wi-Fi and Sprint's 3G cellular connectivity. Uconnect is available on several current Chrysler models and was available on several discontinued ones, including the Dodge Dart, Chrysler 300, Aspen, Sebring, Town and Country, Dodge Avenger, Caliber, Grand Caravan, Challenger, Charger, Journey, Nitro, and Ram. In July 2015, IT security researchers announced a severe security flaw assumed to affect every Chrysler vehicle with Uconnect produced from late 2013 to early 2015. The flaw allows hackers to gain access to the car over the Internet, and in the case of a Jeep Cherokee was demonstrated to enable an attacker to take control not just of
of the metropolis of London, though it remains a notable part of central London. Administratively, it forms one of the 33 local authority districts of London; however, the City of London is not a London borough, a status reserved for the other 32 districts (including London's only other city, the City of Westminster). It is also a separate ceremonial county, being an enclave surrounded by Greater London, and is the smallest ceremonial county in the United Kingdom. The City of London is widely referred to simply as the City (differentiated from the phrase "the city of London" by capitalising City) and is also colloquially known as the Square Mile, as it is in area. Both of these terms are also often used as metonyms for the United Kingdom's trading and financial services industries, which continue a notable history of being largely based in the city. The name London is now ordinarily used for a far wider area than just the city. London most often denotes the sprawling London metropolis, or the 32 London boroughs, in addition to the City of London itself. This wider usage of London is documented as far back as 1888, when the County of London was created. The local authority for the city, namely the City of London Corporation, is unique in the UK and has some unusual responsibilities for a local council, such as being the police authority. It is also unusual in having responsibilities and ownerships beyond its boundaries. The corporation is headed by the Lord Mayor of the City of London (an office separate from, and much older than, the Mayor of London). The Lord Mayor, as of November 2019, is Vincent Keaveny. The city is made up of 25 wards, with administration at the historic Guildhall. Other historic sites include St Paul's Cathedral, Royal Exchange, Mansion House, Old Bailey, and Smithfield Market. Although not within the city, the adjacent Tower of London is part of its old defensive perimeter. 
Bridges under the jurisdiction of the City include London Bridge and Blackfriars Bridge. The city is a major business and financial centre, and the Bank of England is headquartered in the city. Throughout the 19th century, the city was the world's primary business centre, and it continues to be a major meeting point for businesses. London came top in the Worldwide Centres of Commerce Index, published in 2008. The insurance industry is located in the eastern side of the city, around Lloyd's building. A secondary financial district exists outside the city, at Canary Wharf, to the east. The city has a resident population of 9,401 (ONS estimate, mid-2016) but over 500,000 are employed there, and some estimates put the number of workers in the city at over 1 million. About three-quarters of the jobs in the City of London are in the financial, professional, and associated business services sectors. The legal profession forms a major component of the northern and western sides of the city, especially in the Temple and Chancery Lane areas where the Inns of Court are located, of which two—Inner Temple and Middle Temple—fall within the City of London boundary.

History

Origins
The Roman legions established a settlement known as "Londinium" on the current site of the City of London around AD 43. Its bridge over the River Thames turned the city into a road nexus and major port, serving as a major commercial centre in Roman Britain until its abandonment during the 5th century. Archaeologist Leslie Wallace notes that, because extensive archaeological excavation has not revealed any signs of a significant pre-Roman presence, "arguments for a purely Roman foundation of London are now common and uncontroversial." At its height, the Roman city had a population of approximately 45,000–60,000 inhabitants. Londinium was an ethnically diverse city, with inhabitants from across the Roman Empire, including natives of Britannia, continental Europe, the Middle East, and North Africa.
The Romans built the London Wall some time between AD 190 and 225. The boundaries of the Roman city were similar to those of the City of London today, though the City extends further west than Londinium's Ludgate, and the Thames was undredged and thus wider than it is today, with Londinium's shoreline slightly north of the city's present shoreline. The Romans built a bridge across the river, as early as AD 50, near to today's London Bridge.

Decline
By the time the London Wall was constructed, the city's fortunes were in decline, and it faced problems of plague and fire. The Roman Empire entered a long period of instability and decline, including the Carausian Revolt in Britain. In the 3rd and 4th centuries, the city was under attack from Picts, Scots, and Saxon raiders. The decline continued, both for Londinium and the Empire, and in AD 410 the Romans withdrew entirely from Britain. Many of the Roman public buildings in Londinium had by this time fallen into decay and disuse, and gradually after the formal withdrawal the city became almost (if not, at times, entirely) uninhabited. The centre of trade and population moved away from the walled Londinium to Lundenwic ("London market"), a settlement to the west, roughly in the modern-day Strand/Aldwych/Covent Garden area.

Anglo-Saxon restoration
During the Anglo-Saxon Heptarchy, the London area came in turn under the Kingdoms of Essex, Mercia, and later Wessex, though from the mid-8th century it was frequently under the control of or threat from the Vikings. Bede records that in AD 604 St Augustine consecrated Mellitus as the first bishop to the Anglo-Saxon kingdom of the East Saxons and their king, Sæberht. Sæberht's uncle and overlord, Æthelberht, king of Kent, built a church dedicated to St Paul in London, as the seat of the new bishop. It is assumed, although unproven, that this first Anglo-Saxon cathedral stood on the same site as the later medieval and the present cathedrals.
Alfred the Great, King of Wessex, occupied and began the resettlement of the old Roman walled area in 886, and appointed his son-in-law Earl Æthelred of Mercia over it as part of their reconquest of the Viking-occupied parts of England. The refortified Anglo-Saxon settlement was known as Lundenburh ("London Fort", a borough). The historian Asser said that "Alfred, king of the Anglo-Saxons, restored the city of London splendidly ... and made it habitable once more." Alfred's "restoration" entailed reoccupying and refurbishing the nearly deserted Roman walled city, building quays along the Thames, and laying a new city street plan. Alfred's taking of London and the rebuilding of the old Roman city was a turning point in history, not only as the permanent establishment of the City of London, but also as part of a unifying moment in early England, with Wessex becoming the dominant English kingdom and the repelling (to some degree) of the Viking occupation and raids. While London, and indeed England, were afterwards subjected to further periods of Viking and Danish raids and occupation, the establishment of the City of London and the Kingdom of England prevailed. In the 10th century, Athelstan permitted eight mints to be established in the city, compared with six in his capital, Winchester, indicating the wealth of the city. London Bridge, which had fallen into ruin following the Roman evacuation and abandonment of Londinium, was rebuilt by the Saxons, but was periodically destroyed by Viking raids and storms. As the focus of trade and population moved back within the old Roman walls, the older Saxon settlement of Lundenwic was largely abandoned and gained the name of Ealdwic ("old settlement"). The name survives today as Aldwych (the "old market-place"), a name of a street and an area of the City of Westminster between Westminster and the City of London.
Medieval era
Following the Battle of Hastings, William the Conqueror marched on London, reaching as far as Southwark, but failed to get across London Bridge or to defeat the Londoners. He eventually crossed the River Thames at Wallingford, pillaging the land as he went. Rather than continuing the war, Edgar the Ætheling, Edwin of Mercia and Morcar of Northumbria surrendered at Berkhamsted. William granted the citizens of London a charter in 1075; the city was one of a few examples of the English retaining some authority. The city was not covered by the Domesday Book. William built three castles around the city to keep Londoners subdued:
the Tower of London, which is still a major establishment;
Baynard's Castle, which no longer exists but gave its name to a city ward;
Montfichet's Tower or Castle on Ludgate Hill, which was dismantled and sold off in the 13th century.
About 1130, Henry I granted a sheriff to the people of London, along with control of the county of Middlesex: this meant that the two entities were regarded as one administratively (not that the county was a dependency of the city) until the Local Government Act 1888. By 1141 the whole body of the citizenry was considered to constitute a single community. This 'commune' was the origin of the City of London Corporation, and the citizens gained the right to appoint, with the king's consent, a mayor in 1189—and to directly elect the mayor from 1215. From medieval times, the city has been composed of 25 ancient wards, each headed by an alderman, who chairs Wardmotes, which still take place at least annually. A Folkmoot for the whole of the City was formerly also held at the outdoor cross of St Paul's Cathedral. Many of the medieval offices and traditions continue to the present day, demonstrating the unique nature of the City and its Corporation. In 1381, the Peasants' Revolt affected London.
The rebels took the City and the Tower of London, but the rebellion ended after its leader, Wat Tyler, was killed during a confrontation that included Lord Mayor William Walworth. The city was burnt severely on a number of occasions, the worst being in 1123 and in the Great Fire of London in 1666; both of these fires were referred to as the Great Fire. After the fire of 1666, a number of plans were drawn up to remodel the city and its street pattern into a renaissance-style city with planned urban blocks, squares and boulevards. These plans were almost entirely abandoned, and the medieval street pattern re-emerged almost intact.

Early modern period
In the 1630s the Crown sought to have the Corporation of the City of London extend its jurisdiction to surrounding areas. In what is sometimes called the "great refusal", the Corporation said no to the King, which in part accounts for its unique government structure to the present day. By the late 16th century, London was increasingly becoming a major centre for banking, international trade and commerce. The Royal Exchange was founded in 1565 by Sir Thomas Gresham as a centre of commerce for London's merchants, and gained royal patronage in 1571. Although no longer used for its original purpose, its location at the corner of Cornhill and Threadneedle Street continues to be the geographical centre of the city's core of banking and financial services, with the Bank of England moving to its present site in 1734, opposite the Royal Exchange on Threadneedle Street. Immediately to the south of Cornhill, Lombard Street was from 1691 the location of Lloyd's Coffee House, which became the world's leading insurance market. London's insurance sector continues to be based in the area, particularly in Lime Street. In 1708, Christopher Wren's masterpiece, St Paul's Cathedral, was completed on his birthday. The first service had been held on 2 December 1697, more than 10 years earlier.
It replaced the original St Paul's, which had been completely destroyed in the Great Fire of London, and is considered one of the finest cathedrals in Britain and a fine example of Baroque architecture.

Growth of London
The 18th century was a period of rapid growth for London, reflecting an increasing national population, the early stirrings of the Industrial Revolution, and London's role at the centre of the evolving British Empire. The urban area expanded beyond the borders of the City of London, most notably during this period towards the West End and Westminster. Expansion continued and became more rapid by the beginning of the 19th century, with London growing in all directions. To the east, the Port of London grew rapidly during the century, with the construction of many docks, needed as the Thames at the city could not cope with the volume of trade. The arrival of the railways and the Tube meant that London could expand over a much greater area. By the mid-19th century, with London still rapidly expanding in population and area, the city had already become only a small part of the wider metropolis.

19th and 20th centuries
An attempt was made in 1894, with the Royal Commission on the Amalgamation of the City and County of London, to end the distinction between the city and the surrounding County of London, but a change of government at Westminster meant the option was not taken up. The city as a distinct polity survived despite its position within the London conurbation and numerous local government reforms. Supporting this status, the city was a special parliamentary borough that elected four members to the unreformed House of Commons, who were retained after the Reform Act 1832; reduced to two under the Redistribution of Seats Act 1885; and ceased to be a separate constituency under the Representation of the People Act 1948. Since then the city has formed a minority (in terms of population and area) of the Cities of London and Westminster constituency.
The city's population fell rapidly in the 19th century and through most of the 20th century, as people moved outwards in all directions to London's vast suburbs, and many residential buildings were demolished to make way for office blocks. Like many areas of London and other British cities, the City fell victim to large-scale and highly destructive aerial bombing during World War II, especially in the Blitz. Whilst St Paul's Cathedral survived the onslaught, large swathes of the area did not, and the particularly heavy raids of late December 1940 led to a firestorm called the Second Great Fire of London. There was a major rebuilding programme in the decades following the war, in some parts (such as at the Barbican) dramatically altering the urban landscape. The destruction of the older historic fabric allowed the construction of modern and larger-scale developments, whereas in those parts not so badly affected by bomb damage the City retains its older character of smaller buildings. The street pattern, which is still largely medieval, was altered slightly in places, although there is a more recent trend of reversing some of the post-war modernist changes, such as at Paternoster Square. The City suffered terrorist attacks including the 1993 Bishopsgate bombing (IRA) and the 7 July 2005 London bombings (Islamist). In response to the 1993 bombing, a system of road barriers, checkpoints and surveillance cameras referred to as the "ring of steel" has been maintained to control entry points to the city. The 1970s saw the construction of tall office buildings including the 600-foot (183 m), 47-storey NatWest Tower, the first skyscraper in the UK. Office space development has intensified especially in the central, northern and eastern parts, with skyscrapers including 30 St.
Mary Axe ("the Gherkin"), Leadenhall Building ("the Cheesegrater"), 20 Fenchurch Street ("the Walkie-Talkie"), the Broadgate Tower, the Heron Tower and 22 Bishopsgate, which is the tallest building in the city. The main residential section of the City today is the Barbican Estate, constructed between 1965 and 1976. The Museum of London is based there, as are a number of other services provided by the corporation.

Governance
The city has a unique political status, a legacy of its uninterrupted integrity as a corporate city since the Anglo-Saxon period and its singular relationship with the Crown. Historically its system of government was not unusual, but it was not reformed by the Municipal Reform Act 1835 and was little changed by later reforms, so that it is the only local government in the UK where elections are not run on the basis of one vote for every adult citizen. It is administered by the City of London Corporation, headed by the Lord Mayor of London (not to be confused with the separate Mayor of London, an office created only in the year 2000), which is responsible for a number of functions and has interests in land beyond the city's boundaries. Unlike other English local authorities, the corporation has two council bodies: the (now largely ceremonial) Court of Aldermen and the Court of Common Council. The Court of Aldermen represents the wards, with each ward (irrespective of size) returning one alderman. The chief executive of the Corporation holds the ancient office of Town Clerk of London. The city is a ceremonial county which has a Commission of Lieutenancy headed by the Lord Mayor instead of a Lord-Lieutenant, and has two Sheriffs instead of a High Sheriff (see list of Sheriffs of London), quasi-judicial offices appointed by the livery companies, an ancient political system based on the representation and protection of trades (guilds).
Senior members of the livery companies are known as liverymen and form the Common Hall, which chooses the lord mayor, the sheriffs and certain other officers.

Wards
The city is made up of 25 wards. They are survivors of the medieval government system that allowed a very local area to exist as a self-governing unit within the wider city. They can be described as electoral/political divisions; ceremonial, geographic and administrative entities; and sub-divisions of the city. Each ward has an Alderman, who until the mid-1960s held office for life but has since been required to stand for re-election at least every six years. Wards continue to have a Beadle, an ancient position which is now largely ceremonial and whose main remaining function is the running of an annual Wardmote of electors, representatives and officials. At the Wardmote the ward's Alderman appoints at least one Deputy for the year ahead. Each ward also has a Ward Club, which is similar
Mary Axe ("the Gherkin"), Leadenhall Building ("the Cheesegrater"), 20 Fenchurch Street ("the Walkie-Talkie"), the Broadgate Tower, the Heron Tower and 22 Bishopsgate, which is the tallest building in the city. The main residential section of the City today is the Barbican Estate, constructed between 1965 and 1976. The Museum of London is based there, as are a number of other services provided by the corporation. Governance The city has a unique political status, a legacy of its uninterrupted integrity as a corporate city since the Anglo-Saxon period and its singular relationship with the Crown. Historically its system of government was not unusual, but it was not reformed by the Municipal Reform Act 1835 and little changed by later reforms, so that it is the only local government in the UK where elections are not run on the basis of one vote for every adult citizen. It is administered by the City of London Corporation, headed by the Lord Mayor of London (not to be confused with the separate Mayor of London, an office created only in the year 2000), which is responsible for a number of functions and has interests in land beyond the city's boundaries. Unlike other English local authorities, the corporation has two council bodies: the (now largely ceremonial) Court of Aldermen and the Court of Common Council. The Court of Aldermen represents the wards, with each ward (irrespective of size) returning one alderman. The chief executive of the Corporation holds the ancient office of Town Clerk of London. The city is a ceremonial county which has a Commission of Lieutenancy headed by the Lord Mayor instead of a Lord-Lieutenant and has two Sheriffs instead of a High Sheriff (see list of Sheriffs of London), quasi-judicial offices appointed by the livery companies, an ancient political system based on the representation and protection of trades (guilds). 
Senior members of the livery companies are known as liverymen and form the Common Hall, which chooses the lord mayor, the sheriffs and certain other officers. Wards The city is made up of 25 wards. They are survivors of the medieval government system that allowed a very local area to exist as a self-governing unit within the wider city. They can be described as electoral/political divisions; ceremonial, geographic and administrative entities; and sub-divisions of the city. Each ward has an Alderman, who until the mid-1960s held office for life but must now stand for re-election at least every six years. Wards continue to have a Beadle, an ancient position, now largely ceremonial, whose main remaining function is the running of an annual Wardmote of electors, representatives and officials. At the Wardmote the ward's Alderman appoints at least one Deputy for the year ahead. Each ward also has a Ward Club, which is similar to a residents' association. The wards are ancient and their number has changed three times since time immemorial: in 1394 Farringdon was divided into Farringdon Within and Farringdon Without; in 1550 the ward of Bridge Without, south of the river, was created, the ward of Bridge becoming Bridge Within; and in 1978 these Bridge wards were merged as Bridge ward. Following boundary changes in 1994, and later reform of the business vote in the city, there was a major boundary and electoral representation revision of the wards in 2003, and they were reviewed again in 2010 for change in 2013, though not to such a dramatic extent. The review was conducted by senior officers of the corporation and senior judges of the Old Bailey; the wards are reviewed by this process to avoid malapportionment. The procedure of review is unique in the United Kingdom as it is not conducted by the Electoral Commission or a local government boundary commission every 8 to 12 years, which is the case for all other wards in Great Britain. 
Particular churches, livery company halls and other historic buildings and structures are associated with a ward, such as St Paul's Cathedral with Castle Baynard, and London Bridge with Bridge; boundary changes in 2003 removed some of these historic connections. Each ward elects an alderman to the Court of Aldermen, and commoners (the City equivalent of a councillor) to the Court of Common Council of the corporation. Only electors who are Freemen of the City of London are eligible to stand. The number of commoners a ward sends to the Common Council varies from two to ten, depending on the number of electors in each ward. Since the 2003 review it has been agreed that the four more residential wards (Portsoken, Queenhithe, Aldersgate and Cripplegate) together elect 20 of the 100 commoners, whereas the business-dominated remainder elect the remaining 80 commoners. The 2003 and 2013 boundary changes have increased the residential emphasis of these four wards. Census data provides eight nominal rather than 25 real wards, all of varying size and population. Although subject to renaming and redefinition at any time, these census 'wards' are notable in that four of the eight accounted for 67% of the 'square mile' and held 86% of the population, and these were in fact similar to, and named after, four City of London wards. Elections The city has a unique electoral system. Most of its voters are representatives of businesses and other bodies that occupy premises in the city. Its ancient wards have very unequal numbers of voters. In elections, both the businesses based in the city and the residents of the City vote. The City of London Corporation was not reformed by the Municipal Corporations Act 1835, because it had a more extensive electoral franchise than any other borough or city; in fact, it widened this further with its own equivalent legislation allowing one to become a freeman without being a liveryman. 
In 1801, the city had a population of about 130,000, but increasing development of the city as a central business district led to this falling to below 5,000 after the Second World War. It has risen slightly to around 9,000 since, largely due to the development of the Barbican Estate. In 2009, the business vote was about 24,000, greatly exceeding residential voters. As the City of London Corporation has not been affected by other municipal legislation since then, its electoral practice has become increasingly anomalous. Uniquely for city or borough elections, its elections remain independent-dominated. The business or "non-residential vote" was abolished in other UK local council elections by the Representation of the People Act 1969, but was preserved in the City of London. The principal reason given by successive UK governments for retaining this mechanism for giving businesses representation is that the city is "primarily a place for doing business". About 330,000 non-residents constitute the day-time population and use most of its services, far outnumbering residents, who number around 7,000 (2011). By contrast, opponents of the retention of the business vote argue that it is a cause of institutional inertia. The City of London (Ward Elections) Act 2002, a private Act of Parliament, reformed the voting system and greatly increased the business franchise, allowing many more businesses to be represented. Under the new system, the number of non-resident voters has doubled from 16,000 to 32,000. Previously disenfranchised firms (and other organisations) are entitled to nominate voters, in addition to those already represented, and all such bodies are now required to choose their voters in a representative fashion. 
Bodies employing fewer than 10 people may appoint 1 voter; those employing 10 to 50 people, 1 voter for every 5 employees; those employing more than 50 people, 10 voters and 1 additional voter for each 50 employees beyond the first 50. The Act also removed other anomalies which had been unchanged since the 1850s. The Temple: Inner Temple and Middle Temple (which neighbour each other)
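The apportionment scale quoted above is effectively a small arithmetic rule. As an illustrative sketch of one reading of it (the function name is hypothetical, and rounding down partial blocks of employees is an assumption not spelled out in the text; the Act itself governs the details):

```python
def city_voters(employees: int) -> int:
    """Illustrative reading of the 2002 Act's voter-apportionment scale."""
    if employees < 10:
        return 1                         # fewer than 10 employees: 1 voter
    if employees <= 50:
        return employees // 5            # 10-50 employees: 1 voter per 5
    return 10 + (employees - 50) // 50   # over 50: 10 voters, +1 per further 50
```

On this reading a firm of 25 staff could appoint 5 voters, and one of 150 staff could appoint 12.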
understanding up to and throughout the Renaissance (i.e. for almost two thousand years), and the various terms used to describe the clitoris seem to have further confused the issue of its structure. In addition to Avicenna's naming it the albaratha or virga ("rod") and Colombo's calling it sweetness of Venus, Hippocrates used the term columella ("little pillar"), and Albucasis, an Arabic medical authority, named it tentigo ("tension"). The names indicate that each description referred to the body and glans of the clitoris, but usually the glans. It was additionally known to the Romans, who named it (vulgar slang) landica. However, Albertus Magnus, one of the most prolific writers of the Middle Ages, felt that it was important to highlight "homologies between male and female structures and function" by adding "a psychology of sexual arousal" that Aristotle had not used to detail the clitoris. While in Constantine's treatise Liber de coitu, the clitoris is referred to a few times, Magnus gave an equal amount of attention to male and female organs. Like Avicenna, Magnus also used the word virga for the clitoris, but employed it for both male and female genitals; despite his efforts to give equal ground to the clitoris, the cycle of suppression and rediscovery of the organ continued, and a 16th-century justification for clitoridectomy appears to have been confused by hermaphroditism and the imprecision created by the word nymphae substituted for the word clitoris. Nymphotomia was a medical operation to excise an unusually large clitoris, but what was considered "unusually large" was often a matter of perception. 
The procedure was routinely performed on Egyptian women, due to physicians such as Jacques Daléchamps who believed that this version of the clitoris was "an unusual feature that occurred in almost all Egyptian women [and] some of ours, so that when they find themselves in the company of other women, or their clothes rub them while they walk or their husbands wish to approach them, it erects like a male penis and indeed they use it to play with other women, as their husbands would do ... Thus the parts are cut". 17th century–present day knowledge and vernacular Caspar Bartholin, a 17th-century Danish anatomist, dismissed Colombo's and Falloppio's claims that they discovered the clitoris, arguing that the clitoris had been widely known to medical science since the second century. Although 17th-century midwives recommended to men and women that women should aspire to achieve orgasms to help them get pregnant for general health and well-being and to keep their relationships healthy, debate about the importance of the clitoris persisted, notably in the work of Regnier de Graaf in the 17th century and Georg Ludwig Kobelt in the 19th. Like Falloppio and Bartholin, De Graaf criticized Colombo's claim of having discovered the clitoris; his work appears to have provided the first comprehensive account of clitoral anatomy. "We are extremely surprised that some anatomists make no more mention of this part than if it did not exist at all in the universe of nature," he stated. "In every cadaver we have so far dissected we have found it quite perceptible to sight and touch." De Graaf stressed the need to distinguish nympha from clitoris, choosing to "always give [the clitoris] the name clitoris" to avoid confusion; this resulted in frequent use of the correct name for the organ among anatomists, but considering that nympha was also varied in its use and eventually became the term specific to the labia minora, more confusion ensued. 
Debate about whether orgasm was even necessary for women began in the Victorian era, and Freud's 1905 theory about the immaturity of clitoral orgasms (see above) negatively affected women's sexuality throughout most of the 20th century. Toward the end of World War I, a maverick British MP named Noel Pemberton Billing published an article entitled "The Cult of the Clitoris", furthering his conspiracy theories and attacking the actress Maud Allan and Margot Asquith, wife of the prime minister. The accusations led to a sensational libel trial, which Billing eventually won; Philip Hoare reports that Billing argued that "as a medical term, 'clitoris' would only be known to the 'initiated', and was incapable of corrupting moral minds". Jodie Medd argues in regard to "The Cult of the Clitoris" that "the female nonreproductive but desiring body [...] simultaneously demands and refuses interpretative attention, inciting scandal through its very resistance to representation." From the 18th to the 20th century, especially during the 20th, details of the clitoris from various genital diagrams presented in earlier centuries were omitted from later texts. The full extent of the clitoris was alluded to by Masters and Johnson in 1966, but in such a muddled fashion that the significance of their description became obscured; in 1981, the Federation of Feminist Women's Health Clinics (FFWHC) continued this process with anatomically precise illustrations identifying 18 structures of the clitoris. Despite the FFWHC's illustrations, Josephine Lowndes Sevely, in 1987, described the vagina as more of the counterpart of the penis. Concerning other beliefs about the clitoris, Hite (1976 and 1981) found that, during sexual intimacy with a partner, clitoral stimulation was more often described by women as foreplay than as a primary method of sexual activity, including orgasm. 
Further, although the FFWHC's work significantly propelled feminist reformation of anatomical texts, it did not have a general impact. Helen O'Connell's late 1990s research motivated the medical community to start changing the way the clitoris is anatomically defined. O'Connell describes typical textbook descriptions of the clitoris as lacking detail and including inaccuracies, such as older and modern anatomical descriptions of the female human urethral and genital anatomy having been based on dissections performed on elderly cadavers whose erectile (clitoral) tissue had shrunk. She instead credits the work of Georg Ludwig Kobelt as the most comprehensive and accurate description of clitoral anatomy. MRI measurements, which provide a live and multi-planar method of examination, now complement the FFWHC's, as well as O'Connell's, research efforts concerning the clitoris, showing that the volume of clitoral erectile tissue is ten times that which is shown in doctors' offices and in anatomy text books. In Bruce Bagemihl's survey of The Zoological Record (1978–1997) – which contains over a million documents from over 6,000 scientific journals – 539 articles focusing on the penis were found, while 7 were found focusing on the clitoris. In 2000, researchers Shirley Ogletree and Harvey Ginsberg concluded that there is a general neglect of the word clitoris in common vernacular. They looked at the terms used to describe genitalia in the PsycINFO database from 1887 to 2000 and found that penis was used in 1,482 sources, vagina in 409, while clitoris was only mentioned in 83. They additionally analyzed 57 books listed in a computer database for sex instruction. In the majority of the books, penis was the most commonly discussed body part – mentioned more than clitoris, vagina, and uterus put together. 
Lastly, they investigated terminology used by college students (Euro-American, 76%/76%; Hispanic, 18%/14%; and African American, 4%/7%) regarding the students' beliefs about sexuality and knowledge on the subject. The students were overwhelmingly educated to believe that the vagina is the female counterpart of the penis. The authors found that the students' belief that the inner portion of the vagina is the most sexually sensitive part of the female body correlated with negative attitudes toward masturbation and strong support for sexual myths. A 2005 study reported that, among a sample of undergraduate students, the most frequently cited sources for knowledge about the clitoris were school and friends, and that this was associated with the least tested knowledge. Knowledge of the clitoris by self-exploration was the least cited, but "respondents correctly answered, on average, three of the five clitoral knowledge measures". The authors stated that "[k]nowledge correlated significantly with the frequency of women's orgasm in masturbation but not partnered sex" and that their "results are discussed in light of gender inequality and a social construction of sexuality, endorsed by both men and women, that privileges men's sexual pleasure over women's, such that orgasm for women is pleasing, but ultimately incidental." They concluded that part of the solution to remedying "this problem" requires that males and females be taught more about the clitoris than is currently practiced. In May 2013, humanitarian group Clitoraid launched the first annual International Clitoris Awareness Week, from 6 to 12 May. Clitoraid spokesperson Nadine Gary stated that the group's mission is to raise public awareness about the clitoris because it has "been ignored, vilified, made taboo, and considered sinful and shameful for centuries". 
In 2016, Odile Fillod created a 3D printable, open source, full-size model of the clitoris, for use in a set of anti-sexist videos she had been commissioned to produce. Fillod was interviewed by Stephanie Theobald, whose article in The Guardian stated that the 3D model would be used for sex education in French schools, from primary to secondary level, from September 2016 onwards; this was not the case, but the story went viral across the world. In a 2019 study, a questionnaire was administered to a sample of educational sciences postgraduate students to trace the level of their knowledge concerning the organs of the female and male reproductive system. The authors reported that about two-thirds of the students failed to name external female genitals, such as the clitoris and labia, even after detailed pictures were provided to them. Contemporary art In 2012, New York artist Sophia Wallace started work on a multimedia project to challenge misconceptions about the clitoris. Based on O'Connell's 1998 research, Wallace's work emphasizes the sheer scope and size of the human clitoris. She says that ignorance of this still seems to be pervasive in modern society. "It is a curious dilemma to observe the paradox that on the one hand the female body is the primary metaphor for sexuality, its use saturates advertising, art and the mainstream erotic imaginary," she said. "Yet, the clitoris, the true female sexual organ, is virtually invisible." The project is called Cliteracy and it includes a "clit rodeo", which is an interactive, climb-on model of a giant golden clitoris, including its inner parts, produced with the help of sculptor Kenneth Thomas. "It's been a showstopper wherever it's been shown. People are hungry to be able to talk about this," Wallace said. "I love seeing men standing up for the clit [...] Cliteracy is about not having one's body controlled or legislated [...] Not having access to the pleasure that is your birthright is a deeply political act." 
In 2016, another project started in New York, street art that has since spread to almost 100 cities: Clitorosity, a "community-driven effort to celebrate the full structure of the clitoris", combining chalk drawings and words to spark interaction and conversation with passers-by, which the team documents on social media. In 2016, Lori Malépart-Traversy made an animated documentary about the unrecognized anatomy of the clitoris. In 2017, Alli Sebastian Wolf created a golden 100:1 scale anatomical model of a clitoris, called the Glitoris, and said she hopes knowledge of the clitoris will soon become so uncontroversial that making art about it would be as irrelevant as making art about penises. Other projects listed by the BBC include Clito Clito, body-positive jewellery made in Berlin; Clitorissima, a documentary intended to normalize mother-daughter conversations about the clitoris; and a ClitArt festival in London, encompassing spoken word performances as well as visual art. French art collective Les Infemmes (a pun on "infamous" and "women") published a fanzine whose title can be translated as "The Clit Cheatsheet". Influence on female genital mutilation Significant controversy surrounds female genital mutilation (FGM), with the World Health Organization (WHO) being one of many health organizations that have campaigned against the procedures on behalf of human rights, stating that "FGM has no health benefits" and that it is "a violation of the human rights of girls and women" and "reflects deep-rooted inequality between the sexes". The practice has existed at one point or another in almost all human civilizations, most commonly to exert control over the sexual behavior, including masturbation, of girls and women, but also to change the clitoris's appearance. 
Custom and tradition are the most frequently cited reasons for FGM, with some cultures believing that not performing it has the possibility of disrupting the cohesiveness of their social and political systems, such as FGM also being a part of a girl's initiation into adulthood. Often, a girl is not considered an adult in an FGM-practicing society unless she has undergone FGM, and the "removal of the clitoris and labia – viewed by some as the male parts of a woman's body – is thought to enhance the girl's femininity, often synonymous with docility and obedience". Female genital mutilation is carried out in several societies, especially in Africa, with 85 percent of genital mutilations performed in Africa consisting of clitoridectomy or excision, and to a lesser extent in other parts of the Middle East and Southeast Asia, on girls from a few days old to mid-adolescent, often to reduce sexual desire in an effort to preserve vaginal virginity. The practice of FGM has spread globally, as immigrants from Asia, Africa, and the Middle East bring the custom with them. In the United States, it is sometimes practiced on girls born with a clitoris that is larger than usual. Comfort Momoh, who specializes in the topic of FGM, states that FGM might have been "practiced in ancient Egypt as a sign of distinction among the aristocracy"; there are reports that traces of infibulation are on Egyptian mummies. FGM is still routinely practiced in Egypt. Greenberg et al. report that "one study found that 97% of married women in Egypt had had some form of genital mutilation performed." Amnesty International estimated in 1997 that more than two million FGM procedures are performed every year. Other animals General Although the clitoris exists in all mammal species, few detailed studies of the anatomy of the clitoris in non-humans exist. The clitoris is especially developed in fossas, apes, lemurs, moles, and, like the penis in many non-human placental mammals, often contains a small bone. 
In females, this bone is known as the os clitoridis. The clitoris exists in turtles, ostriches, crocodiles, and in species of birds in which the male counterpart has a penis. Some intersex female bears mate and give birth through the tip of the clitoris; these species are grizzly bears, brown bears, American black bears and polar bears. Although the bears have been described as having "a birth canal that runs through the clitoris rather than forming a separate vagina" (a feature that is estimated to make up 10 to 20 percent of the bears' population), scientists state that female spotted hyenas are the only non-hermaphroditic female mammals devoid of an external vaginal opening, and whose sexual anatomy is distinct from usual intersex cases. Non-human primates In spider monkeys, the clitoris is especially developed and has an interior passage, or urethra, that makes it almost identical to the penis, and it retains and distributes urine droplets as the female spider monkey moves around. Scholar Alan F. Dixson stated that this urine "is voided at the bases of the clitoris, flows down the shallow groove on its perineal surface, and is held by the skin folds on each side of the groove". Because spider monkeys of South America have pendulous and erectile clitorises long enough to be mistaken for a penis, researchers and observers of the species look for a scrotum to determine the animal's sex; a similar approach is to identify scent-marking glands that may also be present on the clitoris. The clitoris erects in squirrel monkeys during dominance displays, which indirectly influences the squirrel monkeys' reproductive success. The clitoris of bonobos is larger and more externalized than in most mammals; Natalie Angier said that a young adolescent "female bonobo is maybe half the weight of a human teenager, but her clitoris is three times bigger than the human equivalent, and visible enough to waggle unmistakably as she walks". 
Female bonobos often engage in the practice of genital-genital (GG) rubbing, which is the non-human form of tribadism that human females engage in. Ethologist Jonathan Balcombe stated that female bonobos rub their clitorises together rapidly for ten to twenty seconds, and this behavior, "which may be repeated in rapid succession, is usually accompanied by grinding, shrieking, and clitoral engorgement"; he added that, on average, they engage in this practice "about once every two hours", and as bonobos sometimes mate face-to-face, "evolutionary biologist Marlene Zuk has suggested that the position of the clitoris in bonobos and some other primates has evolved to maximize stimulation during sexual intercourse". Many strepsirrhine species exhibit elongated clitorises that are either fully or partially tunneled by the urethra, including mouse lemurs, dwarf lemurs, all Eulemur species, lorises and galagos. Some of these species also exhibit a membrane seal across the vagina that closes the vaginal opening during the non-mating seasons, most notably mouse and dwarf lemurs. The clitoral morphology of the ring-tailed lemur is the most well-studied. They are described as having "elongated, pendulous clitorises that are [fully] tunneled by a urethra". The urethra is surrounded by erectile tissue, which allows for significant swelling during breeding seasons, but this erectile tissue differs from the typical male corpus spongiosum. Non-pregnant adult ring-tailed females do not show higher testosterone levels than males, but they do exhibit higher A4 and estrogen levels during seasonal aggression. During pregnancy, estrogen, A4, and testosterone levels are raised, but female fetuses are still "protected" from excess testosterone. These "masculinized" genitalia are often found alongside other traits, such as female-dominated social groups, reduced sexual dimorphism that makes females the same size as males, and even ratios of sexes in adult populations. 
This phenomenon has been dubbed the "lemur syndrome". A 2014 study of Eulemur masculinization proposed that behavioral and morphological masculinization in female lemuriformes is an ancestral trait that likely emerged after their split from lorisiformes. Spotted hyenas While female spotted hyenas are sometimes referred to as hermaphrodites or as intersex, and scientists of ancient and later historical times believed that they were hermaphrodites, modern scientists do not refer to them as such. That designation is typically reserved for those who simultaneously exhibit features of both sexes; the genetic makeup of female spotted hyenas "are clearly distinct" from that of male spotted hyenas. Female spotted hyenas have a clitoris 90 percent as long and the same diameter as a male penis (171 millimeters long and 22 millimeters in diameter), and this pseudo-penis's formation seems largely androgen-independent because it appears in the female fetus before differentiation of the fetal ovary and adrenal gland. The spotted hyenas have a highly erectile clitoris, complete with a false scrotum; author John C. Wingfield stated that "the resemblance to male genitalia is so close that sex can be determined with confidence only by palpation of the scrotum". The pseudo-penis can also be distinguished from the males' genitalia by its greater thickness and more rounded glans. The female possesses no external vagina, as the labia are fused to form a pseudo-scrotum. In the females, this scrotum consists of soft adipose tissue. Like male spotted hyenas with regard to their penises, the female spotted hyenas have small penile spines on the head of their clitorises, which scholar Catherine Blackledge said makes "the clitoris tip feel like soft sandpaper". She added that the clitoris "extends away from the body in a sleek and slender arc, measuring, on average, over 17 cm from root to tip. 
Just like a penis, [it] is fully erectile, raising its head in hyena greeting ceremonies, social displays, games of rough and tumble or when sniffing out peers". Due to their higher levels of androgen exposure during fetal development, the female hyenas are significantly more muscular and aggressive than their male counterparts; socially, they are of higher rank than the males, being the dominant and alpha animals, and females that have been exposed to higher-than-average levels of androgen become higher-ranking than their female peers. Subordinate females lick the clitorises of higher-ranked females as a sign of submission and obedience, but females also lick each other's clitorises as a greeting or to strengthen social bonds; in contrast, while all males lick the clitorises of dominant females, the females will not lick the penises of males because males are considered to be of the lowest rank. The urethra and vagina of the female spotted hyena exit through the clitoris, allowing the females to urinate, copulate and give birth through this organ. This trait makes mating more laborious for the male than in other mammals, and also makes attempts to sexually coerce (physically force sexual activity on) females futile. Joan Roughgarden, an ecologist and evolutionary biologist, said that because the hyena's clitoris is higher on the belly than the vagina in most mammals, the male hyena "must slide his rear under the female when mating so that his penis lines up with [her clitoris]". In an action similar to pushing up a shirtsleeve, the "female retracts the [pseudo-penis] on itself, and creates an opening into which the male inserts his own penis". The male must practice this act, which can take a couple of months to successfully perform. Female spotted hyenas exposed to larger doses of androgen have significantly damaged ovaries, making it difficult to conceive.
After giving birth, the pseudo-penis is stretched and loses many of its original features; it becomes a slack-walled and reduced prepuce with an enlarged orifice with split lips. Approximately 15% of the females die during their first time giving birth, and over 60% of their species' firstborn young die. A 2006 Baskin et al. study concluded, "The basic anatomical structures of the corporeal bodies in both sexes of humans and spotted hyenas were similar. As in humans, the dorsal nerve distribution was unique in being devoid of nerves at the 12 o'clock position in the penis and clitoris of the spotted hyena" and that "[d]orsal nerves of the penis/clitoris in humans and male spotted hyenas tracked along both sides of the corporeal body to the corpus spongiosum at the 5 and 7 o'clock positions. The dorsal nerves penetrated the corporeal body and distally the glans in the hyena", and in female hyenas, "the dorsal nerves fanned out laterally on the clitoral body. Glans morphology was different in appearance in both sexes, being wide and blunt in the female and tapered in the male". Moles Many species of talpid moles exhibit peniform clitorises that are tunneled by the urethra and are found to have erectile tissue, most notably species from the Talpa genus found in Europe. Unique to this clade is the presence of ovotestes, wherein the female gonad is mostly made up of sterile testicular tissue that secretes testosterone, with only a small portion containing ovarian tissue. Genetic studies have revealed that females have an XX genotype and do not have any translocated Y-linked genes. Detailed developmental studies of Talpa occidentalis have revealed that the female gonads develop in a "testis-like pattern". DMRT1, a gene that regulates development of Sertoli cells, was found to be expressed in female germ cells before meiosis; however, no Sertoli cells were present in the fully developed ovotestes.
Additionally, the female germ cells only enter meiosis postnatally, a phenomenon that has not been found in any other eutherian mammal. Phylogenetic analyses have suggested that, as in lemuroids, this trait must have evolved in a common ancestor of the clade, and has been "turned off and on" in different talpid lineages. Female European moles are highly territorial and will not allow males into their territory outside of the breeding season, the probable cause of this behavior being the high levels of testosterone secreted by the female ovotestes. During the non-breeding season, their vaginal opening is covered by skin, akin to the condition seen in mouse and dwarf lemurs. Cats, sheep and mice Researchers studying the peripheral and central afferent pathways from the feline clitoris concluded that "afferent neurons projecting to the clitoris of the cat were identified by WGA-HRP tracing in the S1 and S2 dorsal root ganglia. An average of 433 cells were identified on each side of the animal. 85 percent and 15 percent of the labeled cells were located in the S1 and S2 dorsal root ganglia, respectively. The average cross sectional area of clitoral afferent neuron profiles was 1,479±627 μm²." They also stated that light "constant pressure on the clitoris produced an initial burst of single unit firing (maximum frequencies 170–255 Hz) followed by rapid adaptation and a sustained firing (maximum 40 Hz), which was maintained during the stimulation" and that further examination of tonic firing "indicate[s] that the clitoris is innervated by mechano-sensitive myelinated afferent fibers in the pudendal nerve which project centrally to the region of the dorsal commissure in the L7–S1 spinal cord". The external phenotype and reproductive behavior of 21 freemartin sheep and two male pseudohermaphrodite sheep were recorded with the aim of identifying any characteristics that could predict a failure to breed.
The vagina's length and the size and shape of the vulva and clitoris were among the aspects analyzed. While the study reported that "a number of physical and behavioural abnormalities were detected," it also concluded that "the only consistent finding in all 23 animals was a short vagina" which varied in length from 3.1 to 7.0 cm. Other research indicates that the clitoral body can measure in length, while the clitoral body and crura together can be or more in length. Hood The clitoral hood projects at the front of the labia commissure, where the edges of the labia majora (outer lips) meet at the base of the pubic mound; it is partially formed by fusion of the upper part of the external folds of the labia minora (inner lips) and covers the glans and external shaft. There is considerable variation in how much of the glans protrudes from the hood and how much is covered by it, ranging from completely covered to fully exposed, and tissue of the labia minora also encircles the base of the glans. Bulbs The vestibular bulbs are more closely related to the clitoris than the vestibule because of the similarity of the trabecular and erectile tissue within the clitoris and bulbs, and the absence of trabecular tissue in other genital organs, with the erectile tissue's trabecular nature allowing engorgement and expansion during sexual arousal. The vestibular bulbs are typically described as lying close to the crura on either side of the vaginal opening; internally, they are beneath the labia majora. When engorged with blood, they cuff the vaginal opening and cause the vulva to expand outward. Although a number of texts state that they surround the vaginal opening, Ginger et al. state that this does not appear to be the case and that the tunica albuginea does not envelop the erectile tissue of the bulbs.
In Yang et al.'s assessment of the bulbs' anatomy, they conclude that the bulbs "arch over the distal urethra, outlining what might be appropriately called the 'bulbar urethra' in women." Homology The clitoris and penis are generally the same anatomical structure, although the distal portion (or opening) of the urethra is absent in the clitoris of humans and most other animals. The idea that males have clitorises was suggested in 1987 by researcher Josephine Lowndes Sevely, who theorized that the male corpora cavernosa (a pair of sponge-like regions of erectile tissue which contain most of the blood in the penis during penile erection) are the true counterpart of the clitoris. She argued that "the male clitoris" is directly beneath the rim of the glans penis, where the frenulum of the prepuce of the penis (a fold of the prepuce) is located, and proposed that this area be called the "Lownde's crown". Her theory and proposal, though acknowledged in anatomical literature, did not materialize in anatomy books. Modern anatomical texts show that the clitoris displays a hood that is the equivalent of the penis's foreskin, which covers the glans. It also has a shaft that is attached to the glans. The male corpora cavernosa are homologous to the corpus cavernosum clitoridis (the female cavernosa), the bulb of the penis is homologous to the vestibular bulbs beneath the labia minora, the scrotum is homologous to the labia majora, and the penile urethra and part of the skin of the penis are homologous to the labia minora. Upon anatomical study, the penis can be described as a clitoris that has been mostly pulled out of the body and grafted on top of a significantly smaller piece of spongiosum containing the urethra.
With regard to nerve endings, the human clitoris's estimated 8,000 or more (for its glans or clitoral body as a whole) is commonly cited as being twice as many as the nerve endings found in the human penis (for its glans or body as a whole) and as more than any other part of the human body. These reports sometimes conflict with other sources on clitoral anatomy or those concerning the nerve endings in the human penis. For example, while some sources estimate that the human penis has 4,000 nerve endings, other sources state that the glans or the entire penile structure has the same number of nerve endings as the clitoral glans, or discuss whether the uncircumcised penis has thousands more than the circumcised penis or is generally more sensitive. Some sources state that in contrast to the glans penis, the clitoral glans lacks smooth muscle within its fibrovascular cap and is thus differentiated from the erectile tissues of the clitoris and bulbs; additionally, bulb size varies and may be dependent on age and estrogenization. While the bulbs are considered the equivalent of the male spongiosum, they do not completely encircle the urethra. The thin corpus spongiosum of the penis runs along the underside of the penile shaft, enveloping the urethra, and expands at the end to form the glans. It partially contributes to erection, which is primarily caused by the two corpora cavernosa that comprise the bulk of the shaft; like the female cavernosa, the male cavernosa soak up blood and become erect when sexually excited. The male corpora cavernosa taper off internally on reaching the spongiosum head. With regard to the Y-shape of the cavernosa – crown, body, and legs – the body accounts for much more of the structure in men, and the legs are stubbier; typically, the cavernosa are longer and thicker in males than in females.
Function Sexual activity General The clitoris has an abundance of nerve endings, and is the human female's most sensitive erogenous zone and generally the primary anatomical source of human female sexual pleasure. When sexually stimulated, it may incite female sexual arousal. Sexual stimulation, including arousal, may result from mental stimulation, foreplay with a sexual partner, or masturbation, and can lead to orgasm. The most effective sexual stimulation of the organ is usually manual or oral (cunnilingus), which is often referred to as direct clitoral stimulation; in cases involving sexual penetration, these activities may also be referred to as additional or assisted clitoral stimulation. Direct clitoral stimulation involves physical stimulation to the external anatomy of the clitoris – glans, hood, and the external shaft. Stimulation of the labia minora (inner lips), due to their external connection with the glans and hood, may have the same effect as direct clitoral stimulation. Though these areas may also receive indirect physical stimulation during sexual activity, such as when in friction with the labia majora (outer lips), indirect clitoral stimulation is more commonly attributed to penile-vaginal penetration. Penile-anal penetration may also indirectly stimulate the clitoris by the shared sensory nerves (especially the pudendal nerve, which gives off the inferior anal nerves and divides into two terminal branches: the perineal nerve and the dorsal nerve of the clitoris). Due to the glans's high sensitivity, direct stimulation to it is not always pleasurable; instead, direct stimulation to the hood or the areas near the glans is often more pleasurable, with the majority of women preferring to use the hood to stimulate the glans, or to have the glans rolled between the lips of the labia, for indirect touch. It is also common for women to enjoy the shaft of the clitoris being softly caressed in concert with occasional circling of the clitoral glans.
This might be with or without manual penetration of the vagina, while other women enjoy having the entire area of the vulva caressed. As opposed to use of dry fingers, stimulation from fingers that have been well-lubricated, either by vaginal lubrication or a personal lubricant, is usually more pleasurable for the external anatomy of the clitoris. As the clitoris's external location does not allow for direct stimulation by sexual penetration, any external clitoral stimulation while in the missionary position usually results from friction with the pubic bone area as the partners' groins move in contact. As such, some couples may engage in the woman-on-top position or the coital alignment technique, a sex position combining the "riding high" variation of the missionary position with pressure-counterpressure movements performed by each partner in rhythm with sexual penetration, to maximize clitoral stimulation. Lesbian couples may engage in tribadism for ample clitoral stimulation or for mutual clitoral stimulation during whole-body contact. Pressing the penis in a gliding or circular motion against the clitoris (intercrural sex), or stimulating it by movement against another body part, may also be practiced. A vibrator (such as a clitoral vibrator), dildo or other sex toy may be used. Other women stimulate the clitoris by use of a pillow or other inanimate object, by a jet of water from the faucet of a bathtub or shower, or by closing their legs and rocking. During sexual arousal, the clitoris and the whole of the genitalia engorge and change color as the erectile tissues fill with blood (vasocongestion), and the individual experiences vaginal contractions.
The ischiocavernosus and bulbocavernosus muscles, which insert into the corpora cavernosa, contract and compress the dorsal vein of the clitoris (the only vein that drains the blood from the spaces in the corpora cavernosa), while the arterial blood continues a steady flow and, having no way to drain out, fills the venous spaces until they become turgid and engorged with blood. This is what leads to clitoral erection. The clitoral glans doubles in diameter upon arousal and, upon further stimulation, becomes less visible as it is covered by the swelling of tissues of the clitoral hood. The swelling protects the glans from direct contact, as direct contact at this stage can be more irritating than pleasurable. Vasocongestion eventually triggers a muscular reflex, which expels the blood that was trapped in surrounding tissues, and leads to an orgasm. A short time after stimulation has stopped, especially if orgasm has been achieved, the glans becomes visible again and returns to its normal state, taking a few seconds (usually 5–10) to return to its normal position and 5–10 minutes to return to its original size. If orgasm is not achieved, the clitoris may remain engorged for a few hours, which women often find uncomfortable. Additionally, the clitoris is very sensitive after orgasm, making further stimulation initially painful for some women. Clitoral and vaginal orgasmic factors General statistics indicate that 70–80 percent of women require direct clitoral stimulation (consistent manual, oral or other concentrated friction against the external parts of the clitoris) to reach orgasm. Indirect clitoral stimulation (for example, via vaginal penetration) may also be sufficient for female orgasm.
The area near the entrance of the vagina (the lower third) contains nearly 90 percent of the vaginal nerve endings, and there are areas in the anterior vaginal wall and between the top junction of the labia minora and the urethra that are especially sensitive, but intense sexual pleasure, including orgasm, solely from vaginal stimulation is occasional or otherwise absent because the vagina has significantly fewer nerve endings than the clitoris. Prominent debate over the quantity of vaginal nerve endings began with Alfred Kinsey. Although Sigmund Freud's theory that clitoral orgasms are a prepubertal or adolescent phenomenon and that vaginal (or G-spot) orgasms are something that only physically mature females experience had been criticized before, Kinsey was the first researcher to harshly criticize the theory. Through his observations of female masturbation and interviews with thousands of women, Kinsey found that most of the women he observed and surveyed could not have vaginal orgasms, a finding that was also supported by his knowledge of sex organ anatomy. Scholar Janice M. Irvine stated that he "criticized Freud and other theorists for projecting male constructs of sexuality onto women" and "viewed the clitoris as the main center of sexual response". He considered the vagina to be "relatively unimportant" for sexual satisfaction, relaying that "few women inserted fingers or objects into their vaginas when they masturbated". Believing that vaginal orgasms are "a physiological impossibility" because the vagina has insufficient nerve endings for sexual pleasure or climax, he "concluded that satisfaction from penile penetration [is] mainly psychological or perhaps the result of referred sensation". Masters and Johnson's research, as well as Shere Hite's, generally supported Kinsey's findings about the female orgasm. Masters and Johnson were the first researchers to determine that the clitoral structures surround and extend along and within the labia. 
They observed that both clitoral and vaginal orgasms have the same stages of physical response, and found that the majority of their subjects could only achieve clitoral orgasms, while a minority achieved vaginal orgasms. On that basis, they argued that clitoral stimulation is the source of both kinds of orgasms, reasoning that the clitoris is stimulated during penetration by friction against its hood. The research came at the time of the second-wave feminist movement, which inspired feminists to reject the distinction made between clitoral and vaginal orgasms. Feminist Anne Koedt argued that because men "have orgasms essentially by friction with the vagina" and not the clitoral area, this is why women's biology had not been properly analyzed. "Today, with extensive knowledge of anatomy, with [C. Lombard Kelly], Kinsey, and Masters and Johnson, to mention just a few sources, there is no ignorance on the subject [of the female orgasm]," she stated in her 1970 article The Myth of the Vaginal Orgasm. She added, "There are, however, social reasons why this knowledge has not been popularized. We are living in a male society which has not sought change in women's role." Supporting an anatomical relationship between the clitoris and vagina is a study published in 2005, which investigated the size of the clitoris; Australian urologist Helen O'Connell, described as having initiated discourse among mainstream medical professionals to refocus on and redefine the clitoris, noted a direct relationship between the legs or roots of the clitoris and the erectile tissue of the clitoral bulbs and corpora, and the distal urethra and vagina while using magnetic resonance imaging (MRI) technology. 
While some studies, using ultrasound, have found physiological evidence of the G-spot in women who report having orgasms during vaginal intercourse, O'Connell argues that this interconnected relationship is the physiological explanation for the conjectured G-spot and experience of vaginal orgasms, taking into account the stimulation of the internal parts of the clitoris during vaginal penetration. "The vaginal wall is, in fact, the clitoris," she said. "If you lift the skin off the vagina on the side walls, you get the bulbs of the clitoris – triangular, crescental masses of erectile tissue." O'Connell et al., having performed dissections on the female genitals of cadavers and used photography to map the structure of nerves in the clitoris, asserted in 1998 that there is more erectile tissue associated with the clitoris than is generally described in anatomical textbooks, and were thus already aware that the clitoris is more than just its glans. They concluded that some females have more extensive clitoral tissues and nerves than others, especially having observed this in young cadavers compared to elderly ones, and therefore, whereas the majority of females can only achieve orgasm by direct stimulation of the external parts of the clitoris, the stimulation of the more generalized tissues of the clitoris via vaginal intercourse may be sufficient for others. French researchers Odile Buisson and Pierre Foldès reported findings similar to those of O'Connell. In 2008, they published the first complete 3D sonography of the stimulated clitoris and republished it in 2009 with new research, demonstrating the ways in which erectile tissue of the clitoris engorges and surrounds the vagina.
On the basis of their findings, they argued that women may be able to achieve vaginal orgasm via stimulation of the G-spot, because the highly innervated clitoris is pulled closely to the anterior wall of the vagina when the woman is sexually aroused and during vaginal penetration. They assert that since the front wall of the vagina is inextricably linked with the internal parts of the clitoris, stimulating the vagina without activating the clitoris may be next to impossible. In their 2009 published study, the "coronal planes during perineal contraction and finger penetration demonstrated a close relationship between the root of the clitoris and the anterior vaginal wall". Buisson and Foldès suggested "that the special sensitivity of the lower anterior vaginal wall could be explained by pressure and movement of clitoris's root during a vaginal penetration and subsequent perineal contraction". Researcher Vincenzo Puppo, while agreeing that the clitoris is the center of female sexual pleasure and believing that there is no anatomical evidence of the vaginal orgasm, disagrees with O'Connell and other researchers' terminological and anatomical descriptions of the clitoris (such as referring to the vestibular bulbs as the "clitoral bulbs") and states that "the inner clitoris" does not exist because the penis cannot come in contact with the congregation of multiple nerves/veins situated until the angle of the clitoris, detailed by Kobelt, or with the roots of the clitoris, which do not have sensory receptors or erogenous sensitivity, during vaginal intercourse. Puppo's belief contrasts with the general belief among researchers that vaginal orgasms are the result of clitoral stimulation; they reaffirm that clitoral tissue extends, or is at least stimulated by its bulbs, even in the area most commonly reported to be the G-spot.
The G-spot being analogous to the base of the male penis has additionally been theorized, with sentiment from researcher Amichai Kilchevsky that because female fetal development is the "default" state in the absence of substantial exposure to male hormones and therefore the penis is essentially a clitoris enlarged by such hormones, there is no evolutionary reason why females would have an entity in addition to the clitoris that can produce orgasms. The general difficulty of achieving orgasms vaginally, which is a predicament that is likely due to nature easing the process of childbearing by drastically reducing the number of vaginal nerve endings, challenges arguments that vaginal orgasms help encourage sexual intercourse in order to facilitate reproduction. Supporting a distinct G-spot, however, is a study by Rutgers University, published in 2011, which was the first to map the female genitals onto the sensory portion of the brain; the scans indicated that the brain registered distinct feelings between stimulating the clitoris, the cervix and the vaginal wall – where the G-spot is reported to be – when several women stimulated themselves in a functional magnetic resonance imaging (fMRI) machine. Barry Komisaruk, who headed the research, stated that he feels that "the bulk of the evidence shows that the G-spot is not a particular thing" and that it is "a region, it's a convergence of many different structures". Vestigiality, adaptionist and reproductive views Whether the clitoris is vestigial, an adaptation, or serves a reproductive function has also been debated. Geoffrey Miller stated that Helen Fisher, Meredith Small and Sarah Blaffer Hrdy "have viewed the clitoral orgasm as a legitimate adaptation in its own right, with major implications for female sexual behavior and sexual evolution". Like Lynn Margulis and Natalie Angier, Miller believes, "The human clitoris shows no apparent signs of having evolved directly through male mate choice.
It is not especially large, brightly colored, specifically shaped or selectively displayed during courtship." He contrasts this with females of other species, such as spider monkeys and spotted hyenas, that have clitorises as long as their male counterparts' penises. He said the human clitoris "could have evolved to be much more conspicuous if males had preferred sexual partners with larger brighter clitorises" and that "its inconspicuous design combined with its exquisite sensitivity suggests that the clitoris is important not as an object of male mate choice, but as a mechanism of female choice." While Miller stated that male scientists such as Stephen Jay Gould and Donald Symons "have viewed the female clitoral orgasm as an evolutionary side-effect of the male capacity for penile orgasm" and that they "suggested that clitoral orgasm cannot be an adaptation because it is too hard to achieve", Gould acknowledged that "most female orgasms emanate from a clitoral, rather than vaginal (or some other), site" and that his nonadaptive belief "has been widely misunderstood as a denial of either the adaptive value of female orgasm in general, or even as a claim that female orgasms lack significance in some broader sense". He said that although he accepts that "clitoral orgasm plays a pleasurable and central role in female sexuality and its joys," "[a]ll these favorable attributes, however, emerge just as clearly and just as easily, whether the clitoral site of orgasm arose as a spandrel or an adaptation". He added that the "male biologists who fretted over [the adaptionist questions] simply assumed that a deeply vaginal site, nearer the region of fertilization, would offer greater selective benefit" due to their Darwinian, summum bonum beliefs about enhanced reproductive success.
Similar to Gould's beliefs about adaptionist views and that "females grow nipples as adaptations for suckling, and males grow smaller unused nipples as a spandrel based upon the value of single development channels", Elisabeth Lloyd suggested that there is little evidence to support an adaptionist account of female orgasm. Meredith L. Chivers stated that "Lloyd views female orgasm as an ontogenetic leftover; women have orgasms because the urogenital neurophysiology for orgasm is so strongly selected for in males that this developmental blueprint gets expressed in females without affecting fitness" and this is similar to "males hav[ing] nipples that serve no fitness-related function." At the 2002 conference of the Canadian Society for Women in Philosophy, Nancy Tuana argued that the clitoris is unnecessary in reproduction; she stated that it has been ignored because of "a fear of pleasure. It is pleasure separated from reproduction. That's the fear." She reasoned that this fear causes ignorance, which veils female sexuality. O'Connell stated, "It boils down to rivalry between the sexes: the idea that one sex is sexual and the other reproductive. The truth is that both are sexual and both are reproductive." She reiterated that the vestibular bulbs appear to be part of the clitoris and that the distal urethra and vagina are intimately related structures, although they are not erectile in character, forming a tissue cluster with the clitoris that appears to be the location of female sexual function and orgasm. Clinical significance Modification Modifications to the clitoris can be intentional or unintentional. They include female genital mutilation (FGM), sex reassignment surgery (for trans men as part of transitioning, which may also include clitoris enlargement), intersex surgery, and genital piercings.
Use of anabolic steroids by bodybuilders and other athletes can result in significant enlargement of the clitoris in concert with other masculinizing effects on their bodies. Abnormal enlargement of the clitoris may also be referred to as clitoromegaly, but clitoromegaly is more commonly seen as a congenital anomaly of the genitalia. Those taking hormones or other medications as part of a transgender transition usually experience dramatic clitoral growth; individual desires and the difficulties of phalloplasty (construction of a penis) often result in the retention of the original genitalia with the enlarged clitoris as a penis analogue (metoidioplasty). However, the clitoris cannot reach the size of the penis through hormones. A surgery to add function to the clitoris, such as metoidioplasty, is an alternative to phalloplasty that permits retention of sexual sensation in the clitoris. In clitoridectomy, the clitoris may be removed as part of a radical vulvectomy to treat cancer such as vulvar intraepithelial neoplasia; however, modern treatments favor more conservative approaches, as invasive surgery can have psychosexual consequences. Clitoridectomy more often involves parts of the clitoris being partially or completely removed during FGM, which may be additionally known as female circumcision or female genital cutting (FGC). Removing the glans of the clitoris does not mean that the whole structure is lost, since the clitoris reaches deep into the genitals. In reduction clitoroplasty, a common intersex surgery, the glans is preserved and parts of the erectile bodies are excised. Problems with this technique include loss of sensation, loss of sexual function, and sloughing of the glans. One way to preserve the clitoris with its innervations and function is to imbricate and bury the clitoral glans; however, Şenaylı et al. state that "pain during stimulus because of trapped tissue under the scarring is nearly routine. 
In another method, 50 percent of the ventral clitoris is removed through the level base of the clitoral shaft, and it is reported that good sensation and clitoral function are observed in follow up"; additionally, it has "been reported that the complications are the same as those in the older procedures for this method". With regard to females who have the condition congenital adrenal hyperplasia, the largest group requiring surgical genital correction, researcher Atilla Şenaylı stated, "The main expectations for the operations are to create a normal female anatomy, with minimal complications and improvement of life quality." Şenaylı added that "[c]osmesis, structural integrity, and coital capacity of the vagina, and absence of pain during sexual activity are the parameters to be judged by the surgeon." (Cosmesis usually refers to the surgical correction of a disfiguring defect.) He stated that although "expectations can be standardized within these few parameters, operative techniques have not yet become homogeneous. Investigators have preferred different operations for different ages
4–2. The Cubs are the oldest Major League Baseball team never to have changed their city; they have played in Chicago since 1871, and continuously so since 1874, the Great Chicago Fire having interrupted play in 1872 and 1873. They have played more games and have more wins than any other team in Major League Baseball since 1876. They have won three World Series titles, including the 2016 World Series, but had the dubious honor of holding the two longest droughts in American professional sports: they had not won their sport's title since 1908, and had not participated in a World Series since 1945, both records, until they beat the Cleveland Indians in the 2016 World Series. The White Sox have played on the South Side continuously since 1901, with all three of their home fields throughout the years being within blocks of one another. They have won three World Series titles (1906, 1917, 2005) and six American League pennants, including the first in 1901. The Sox are fifth in the American League in all-time wins, and sixth in pennants. The Chicago Bears, one of the last two remaining charter members of the National Football League (NFL), have won nine NFL Championships, including Super Bowl XX following the 1985 season. The other remaining charter franchise, the Chicago Cardinals, also started out in the city, but is now known as the Arizona Cardinals. The Bears have won more games in the history of the NFL than any other team, and only the Green Bay Packers, their longtime rivals, have won more championships. The Bears play their home games at Soldier Field, which re-opened in 2003 after an extensive renovation. The Chicago Bulls of the National Basketball Association (NBA) are one of the most recognized basketball teams in the world. During the 1990s, with Michael Jordan leading them, the Bulls won six NBA championships in eight seasons. They also boast the youngest player to win the NBA Most Valuable Player Award, Derrick Rose, who won it for the 2010–11 season.
The Chicago Blackhawks of the National Hockey League (NHL) began play in 1926, and are one of the "Original Six" teams of the NHL. The Blackhawks have won six Stanley Cups, including in 2010, 2013, and 2015. Both the Bulls and the Blackhawks play at the United Center. Chicago Fire FC is a member of Major League Soccer (MLS). After playing its first eight seasons at Soldier Field, the team moved to suburban Bridgeview to play at SeatGeek Stadium; in 2019, the team announced a move back to Soldier Field. The Fire have won one league title and four U.S. Open Cups since their founding in 1997. In 1994, the United States hosted a successful FIFA World Cup with games played at Soldier Field. The Chicago Sky are a professional basketball team playing in the Women's National Basketball Association (WNBA). They play home games at the Wintrust Arena. The team was founded before the 2006 WNBA season began. The Chicago Marathon has been held each year since 1977 except for 1987, when a half marathon was run in its place. The Chicago Marathon is one of six World Marathon Majors. Five area colleges play in Division I conferences: two from major conferences—the DePaul Blue Demons (Big East Conference) and the Northwestern Wildcats (Big Ten Conference)—and three from other D1 conferences—the Chicago State Cougars (Western Athletic Conference); the Loyola Ramblers (Missouri Valley Conference); and the UIC Flames (Horizon League). Chicago has also entered into eSports with the creation of the Chicago Huntsmen, a professional Call of Duty team that competes in the Call of Duty League (CDL). At the league's Launch Week games in Minneapolis, Minnesota, the Huntsmen beat both the Dallas Empire and OpTic Gaming Los Angeles. Parks and greenspace When Chicago was incorporated in 1837, it chose the motto Urbs in Horto, a Latin phrase which means "City in a Garden".
Today, the Chicago Park District consists of more than 570 parks with over of municipal parkland. There are 31 sand beaches, a plethora of museums, two world-class conservatories, and 50 nature areas. Lincoln Park, the largest of the city's parks, covers and has over 20 million visitors each year, making it third in the number of visitors after Central Park in New York City and the National Mall and Memorial Parks in Washington, D.C. There is a historic boulevard system, a network of wide, tree-lined boulevards which connect a number of Chicago parks. The boulevards and the parks were authorized by the Illinois legislature in 1869. A number of Chicago neighborhoods emerged along these roadways in the 19th century. The building of the boulevard system continued intermittently until 1942. It includes nineteen boulevards, eight parks, and six squares, along twenty-six miles of interconnected streets. The Chicago Park Boulevard System Historic District was listed on the National Register of Historic Places in 2018. With berths for more than 6,000 boats, the Chicago Park District operates the nation's largest municipal harbor system. In addition to ongoing beautification and renewal projects for the existing parks, a number of new parks have been added in recent years, such as the Ping Tom Memorial Park in Chinatown, DuSable Park on the Near North Side, and most notably, Millennium Park, which is in the northwestern corner of one of Chicago's oldest parks, Grant Park in the Chicago Loop. The wealth of greenspace afforded by Chicago's parks is further augmented by the Cook County Forest Preserves, a network of open spaces containing forest, prairie, wetland, streams, and lakes that are set aside as natural areas which lie along the city's outskirts, including both the Chicago Botanic Garden in Glencoe and the Brookfield Zoo in Brookfield. Washington Park is also one of the city's biggest parks, covering nearly .
The park is listed on the National Register of Historic Places. Law and government Government The government of the City of Chicago is divided into executive and legislative branches. The mayor of Chicago is the chief executive, elected by general election for a term of four years, with no term limits. The current mayor is Lori Lightfoot. The mayor appoints commissioners and other officials who oversee the various departments. In addition to the mayor, Chicago's clerk and treasurer are also elected citywide. The City Council is the legislative branch and is made up of 50 aldermen, one elected from each ward in the city. The council takes official action through the passage of ordinances and resolutions and approves the city budget. The Chicago Police Department provides law enforcement and the Chicago Fire Department provides fire suppression and emergency medical services for the city and its residents. Civil and criminal law cases are heard in the Cook County Circuit Court of the State of Illinois court system, or in the Northern District of Illinois, in the federal system. In the state court, the public prosecutor is the Illinois state's attorney; in the federal court, it is the United States attorney. Politics During much of the last half of the 19th century, Chicago's politics were dominated by a growing Democratic Party organization. During the 1880s and 1890s, Chicago had a powerful radical tradition with large and highly organized socialist, anarchist and labor organizations. For much of the 20th century, Chicago was among the largest and most reliable Democratic strongholds in the United States; with Chicago's Democratic vote, the state of Illinois has been "solid blue" in presidential elections since 1992. Even before then, it was not unheard of for Republican presidential candidates to win handily in downstate Illinois, only to lose statewide due to large Democratic margins in Chicago.
The citizens of Chicago have not elected a Republican mayor since 1927, when William Thompson was voted into office. The strength of the party in the city is partly a consequence of Illinois state politics, where the Republicans have come to represent rural and farm concerns while the Democrats support urban issues such as Chicago's public school funding. Chicago contains less than 25% of the state's population, but is split among eight of Illinois' 19 districts in the United States House of Representatives. All eight of the city's representatives are Democrats; only two Republicans have represented a significant portion of the city since 1973, for one term each: Robert P. Hanrahan from 1973 to 1975, and Michael Patrick Flanagan from 1995 to 1997. Machine politics persisted in Chicago after the decline of similar machines in other large U.S. cities. During much of that time, the city administration found opposition mainly from a liberal "independent" faction of the Democratic Party. The independents finally gained control of city government in 1983 with the election of Harold Washington (in office 1983–1987). From 1989 until May 16, 2011, Chicago was under the leadership of its longest-serving mayor, Richard M. Daley, the son of Richard J. Daley. Because of the dominance of the Democratic Party in Chicago, the Democratic primary vote held in the spring is generally more significant than the general elections in November for U.S. House and Illinois State seats. The aldermanic, mayoral, and other city offices are filled through nonpartisan elections with runoffs as needed. The city is home to former United States President Barack Obama and First Lady Michelle Obama; Barack Obama was formerly a state legislator representing Chicago and later a U.S. senator. The Obamas' residence is located near the University of Chicago in Kenwood on the city's south side.
Crime Chicago had a murder rate of 18.5 per 100,000 residents in 2012, ranking 16th among US cities with 100,000 people or more. This was higher than in New York City and Los Angeles, the two largest cities in the United States, which have lower murder rates and lower total homicides. However, it was less than in many smaller American cities, including New Orleans, Newark, and Detroit, which had 53 murders per 100,000 residents in 2012. The 2015 year-end crime statistics showed there were 468 murders in Chicago in 2015 compared with 416 the year before, a 12.5% increase, as well as 2,900 shootings—13% more than the year prior, and up 29% since 2013. Chicago had more total homicides than any other city in 2015, but not on a per capita basis, according to the Chicago Tribune. In its annual crime statistics for 2016, the Chicago Police Department reported that the city experienced a dramatic rise in gun violence, with 4,331 shooting victims. The department also reported 762 murders in Chicago for the year 2016, a total that marked a 62.79% increase in homicides from 2015. In June 2017, the Chicago Police Department and the federal ATF announced a new task force, similar to past task forces, to address the flow of illegal guns and repeat offenses with guns. According to reports in 2013, "most of Chicago's violent crime comes from gangs trying to maintain control of drug-selling territories", and is specifically related to the activities of the Sinaloa Cartel, which is active in several American cities. By 2006, the cartel sought to control most illicit drug sales. Violent crime rates vary significantly by area of the city, with more economically developed areas having low rates, but other sections having much higher rates of crime. In 2013, the violent crime rate was 910 per 100,000 people; the citywide murder rate was 10.4 per 100,000, but high-crime districts saw 38.9 murders per 100,000 while low-crime districts saw 2.5.
The number of murders in Chicago peaked at 970 in 1974, when the city's population was over 3 million people (a murder rate of about 29 per 100,000), and reached 943 murders in 1992 (a murder rate of 34 per 100,000). However, Chicago, like other major U.S. cities, experienced a significant reduction in violent crime rates through the 1990s, falling to 448 homicides in 2004, its lowest total since 1965 and only 15.65 murders per 100,000. Chicago's homicide tally remained low during 2005 (449), 2006 (452), and 2007 (435), but rose to 510 in 2008, breaking 500 for the first time since 2003. In 2009, the murder count fell to 458 (down 10%), and in 2010 Chicago's murder count fell to 435 (16.14 per 100,000), a 5% decrease from 2009 and the lowest level since 1965. In 2011, Chicago's murders fell another 1.2% to 431 (a rate of 15.94 per 100,000), but shot up to 506 in 2012. In 2012, Chicago ranked 21st in the United States in numbers of homicides per person, and in the first half of 2013 there was a significant per-person drop in all categories of violent crime, including homicide (down 26%). Chicago ended 2013 with 415 murders, the lowest number of murders since 1965, and overall crime rates dropped by 16 percent. In 2013, the city's murder rate was only slightly higher than the national average as a whole. According to the FBI, St. Louis, New Orleans, Detroit, and Baltimore had the highest murder rates, along with several other cities. Jens Ludwig, director of the University of Chicago Crime Lab, estimated that shootings cost the city of Chicago $2.5 billion in 2012. As of 2021, Chicago has become the American city with the highest number of carjackings. Chicago began experiencing a massive surge in carjackings after 2019, and at least 1,415 such crimes took place in the city in 2020.
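The per-100,000 figures quoted above follow directly from dividing each homicide count by the city's population; a minimal sketch of that arithmetic (the population figures used here are approximations assumed for illustration, not values given in the text):

```python
def homicide_rate(murders: int, population: int) -> float:
    """Homicides per 100,000 residents: count / population * 100,000."""
    return murders / population * 100_000

# Approximate Chicago populations (assumptions for illustration).
rate_2004 = homicide_rate(448, 2_862_000)  # ~15.65 per 100,000, as quoted
rate_1992 = homicide_rate(943, 2_780_000)  # ~33.9, i.e. about 34 per 100,000
print(round(rate_2004, 2), round(rate_1992, 1))
```

This also explains why a year with fewer murders can still have a higher rate than an earlier year if the population has shrunk in the meantime.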
According to the Chicago Police Department, carjackers are using face masks that are widely worn due to the ongoing COVID-19 pandemic to blend in with the public and conceal their identities. On January 27, 2021, Mayor Lightfoot described the worsening wave of carjackings as being 'top of mind,' and added 40 police officers to the CPD carjacking unit. Employee pensions In September 2016, an Illinois state appellate court found that cities do not have an obligation under the Illinois Constitution to pay certain benefits if those benefits had included an expiration date under whichever negotiated agreement they were covered by. The Illinois Constitution prohibits governments from doing anything that could cause retirement benefits for government workers to be "diminished or impaired." In this particular case, the fact that the workers' agreements had expiration dates let the city of Chicago set an expiration date of 2013 for contributions to health benefits for workers who retired after 1989. Education Schools and libraries Chicago Public Schools (CPS) is the governing body of the school district that contains over 600 public elementary and high schools citywide, including several selective-admission magnet schools. There are eleven selective enrollment high schools in the Chicago Public Schools, designed to meet the needs of Chicago's most academically advanced students. These schools offer a rigorous curriculum with mainly honors and Advanced Placement (AP) courses. Walter Payton College Prep High School is ranked number one in the city of Chicago and the state of Illinois. Northside College Preparatory High School is ranked second, Jones College Prep is third, and the oldest magnet school in the city, Whitney M. Young Magnet High School, which was opened in 1975, is ranked fourth. The magnet school with the largest enrollment is Lane Technical College Prep High School.
Lane is one of the oldest schools in Chicago and in 2012 was designated a National Blue Ribbon School by the U.S. Department of Education. Chicago high school rankings are determined by the average test scores on state achievement tests. The district, with an enrollment exceeding 400,545 students (2013–2014 20th Day Enrollment), is the third-largest in the U.S. On September 10, 2012, teachers for the Chicago Teachers Union went on strike for the first time since 1987 over pay, resources and other issues. According to data compiled in 2014, Chicago's "choice system", in which students who test or apply may attend one of about 130 public high schools, sorts students of different achievement levels into different schools (high-performing, middle-performing, and low-performing schools). Chicago has a network of Lutheran schools, and several private schools are run by other denominations and faiths, such as the Ida Crown Jewish Academy in West Ridge. Several private schools are completely secular, such as the Latin School of Chicago in the Near North Side neighborhood, the University of Chicago Laboratory Schools in Hyde Park, the British School of Chicago and the Francis W. Parker School in Lincoln Park, the Lycée Français de Chicago in Uptown, the Feltre School in River North and the Morgan Park Academy. There are also the private Chicago Academy for the Arts, a high school focused on six different categories of the arts, and the public Chicago High School for the Arts, a high school focused on five categories of the arts (visual arts, theatre, musical theatre, dance, and music). The Roman Catholic Archdiocese of Chicago operates Catholic schools that include Jesuit preparatory schools and others including St. Rita of Cascia High School, De La Salle Institute, Josephinum Academy, DePaul College Prep, Cristo Rey Jesuit High School, Brother Rice High School, St.
Ignatius College Preparatory School, Mount Carmel High School, Queen of Peace High School, Mother McAuley Liberal Arts High School, Marist High School, St. Patrick High School and Resurrection High School. The Chicago Public Library system operates 79 public libraries, including the central library, two regional libraries, and numerous branches distributed throughout the city. Colleges and universities Since the 1850s, Chicago has been a world center of higher education and research with several universities. These institutions consistently rank among the top "National Universities" in the United States, as determined by U.S. News & World Report. Highly regarded universities in Chicago and the surrounding area are: the University of Chicago; Northwestern University; Illinois Institute of Technology; Loyola University Chicago; DePaul University; Columbia College Chicago and the University of Illinois at Chicago. Other notable schools include: Chicago State University; the School of the Art Institute of Chicago; East–West University; National Louis University; North Park University; Northeastern Illinois University; Robert Morris University Illinois; Roosevelt University; Saint Xavier University; Rush University; and Shimer College. William Rainey Harper, the first president of the University of Chicago, was instrumental in the creation of the junior college concept, establishing nearby Joliet Junior College as the first in the nation in 1901. His legacy continues with the multiple community colleges in Chicago proper, including the seven City Colleges of Chicago: Richard J. Daley College, Kennedy–King College, Malcolm X College, Olive–Harvey College, Truman College, Harold Washington College and Wilbur Wright College, in addition to the privately held MacCormac College.
Chicago also has a high concentration of post-baccalaureate institutions, graduate schools, seminaries, and theological schools, such as the Adler School of Professional Psychology, The Chicago School of Professional Psychology, the Erikson Institute, The Institute for Clinical Social Work, the Lutheran School of Theology at Chicago, the Catholic Theological Union, the Moody Bible Institute, the John Marshall Law School and the University of Chicago Divinity School. Media Television The Chicago metropolitan area is the third-largest media market in North America, after New York City and Los Angeles, and a major media hub. Each of the big four U.S. television networks, CBS, ABC, NBC and Fox, directly owns and operates a high-definition television station in Chicago (WBBM 2, WLS 7, WMAQ 5 and WFLD 32, respectively). Former CW affiliate WGN-TV 9, which is owned by Tribune Media, is carried, with some programming differences, as "WGN America" on cable and satellite TV nationwide and in parts of the Caribbean. Chicago has also been the home of several prominent talk shows, including The Oprah Winfrey Show, Steve Harvey Show, The Rosie Show, The Jerry Springer Show, The Phil Donahue Show, The Jenny Jones Show, and more. The city also has one PBS member station (its second, WYCC 20, dropped its PBS affiliation in 2017): WTTW 11, producer of shows such as Sneak Previews, The Frugal Gourmet, Lamb Chop's Play-Along and The McLaughlin Group. Windy City Live is Chicago's only daytime talk show, hosted by Val Warner and Ryan Chiaverini at ABC7 Studios with a live weekday audience. Since 1999, Judge Mathis has also filmed his syndicated arbitration-based reality court show at the NBC Tower. Beginning in January 2019, Newsy began producing 12 of its 14 hours of live news programming per day from its new facility in Chicago.
Newspapers Two major daily newspapers are published in Chicago: the Chicago Tribune and the Chicago Sun-Times, with the Tribune having the larger circulation. There are also several regional and special-interest newspapers and magazines, such as Chicago, the Dziennik Związkowy (Polish Daily News), Draugas (the Lithuanian daily newspaper), the Chicago Reader, the SouthtownStar, the Chicago Defender, the Daily Herald, Newcity, StreetWise and the Windy City Times. The entertainment and cultural magazine Time Out Chicago and GRAB magazine are also published in the city, as well as local music magazine Chicago Innerview. In addition, Chicago is the home of the satirical national news outlet The Onion, as well as its sister pop-culture publication, The A.V. Club. Movies and filming Since the 1980s, many motion pictures have been filmed or set in the city, such as The Untouchables, The Blues Brothers, The Matrix, Brewster's Millions, Ferris Bueller's Day Off, Sixteen Candles, Home Alone, The Fugitive, I, Robot, Mean Girls, Wanted, Batman Begins, The Dark Knight, Dhoom 3, Transformers: Dark of the Moon, Transformers: Age of Extinction, Transformers: The Last Knight, Divergent, Man of Steel, Batman v Superman: Dawn of Justice, Sinister 2, Suicide Squad, Justice League, Rampage and The Batman. In The Dark Knight Trilogy and the DC Extended Universe, Chicago was used as the inspiration and filming site for Gotham City and Metropolis, respectively. Chicago has also been the setting of a number of television shows, including the situation comedies Perfect Strangers and its spinoff Family Matters, Married... with Children, Punky Brewster, Kenan & Kel, Still Standing, The League, The Bob Newhart Show, and Shake It Up. The city served as the venue for the medical dramas ER and Chicago Hope, as well as the fantasy drama series Early Edition and the 2005–2009 drama Prison Break. Discovery Channel films two shows in Chicago: Cook County Jail and the Chicago version of Cash Cab.
Other notable shows include CBS's The Good Wife and Mike and Molly. Chicago is currently the setting for Showtime's Shameless, and NBC's Chicago Fire, Chicago P.D. and Chicago Med. All three Chicago franchise shows are filmed locally throughout Chicago and maintain strong national viewership, averaging 7 million viewers per show. Radio Chicago has five 50,000-watt AM radio stations: the CBS Radio-owned WBBM and WSCR; the Tribune Broadcasting-owned WGN; the Cumulus Media-owned WLS; and the ESPN Radio-owned WMVP. Chicago is also home to a number of national radio shows, including Beyond the Beltway with Bruce DuMont on Sunday evenings. Chicago Public Radio produces nationally aired programs such as PRI's This American Life and NPR's Wait Wait...Don't Tell Me!. Music In 2005, indie rock artist Sufjan Stevens created a concept album about Illinois titled Illinois; many of its songs were about Chicago and its history. Industrial genre The city was particularly important for the development of the harsh, electronic-based music genre known as industrial. Many of its themes are transgressive and derived from the works of authors such as William S. Burroughs. Although the genre was pioneered by Throbbing Gristle and largely started in the United Kingdom in the late 1970s, the Chicago-based record label Wax Trax! later established itself as America's home for the genre. The label first found success with Ministry, with the release of the "Cold Life" single, which entered the US Dance charts in 1982. The record label later signed many prominent industrial acts, most notably My Life with the Thrill Kill Kult, KMFDM, Front Line Assembly and Front 242. Richard Giraldi of the Chicago Sun-Times remarked on the significance of the label and wrote, "As important as Chess Records was to blues and soul music, Chicago's Wax Trax imprint was just as significant to the punk rock, new wave and industrial genres."
Video games Chicago is also featured in a few video games, including Watch Dogs and Midtown Madness, a real-life, car-driving simulation game. Chicago is home to NetherRealm Studios, the developers of the Mortal Kombat series. Infrastructure Transportation Chicago is a major transportation hub in the United States. It is an important component in global distribution, as it is the third-largest intermodal port in the world after Hong Kong and Singapore. The city of Chicago has a higher-than-average percentage of households without a car. In 2015, 26.5 percent of Chicago households were without a car, a figure that increased slightly to 27.5 percent in 2016. The national average was 8.7 percent in 2016. Chicago averaged 1.12 cars per household in 2016, compared to a national average of 1.8. Expressways Seven mainline and four auxiliary interstate highways (55, 57, 65 (only in Indiana), 80 (also in Indiana), 88, 90 (also in Indiana), 94 (also in Indiana), 190, 290, 294, and 355) run through Chicago and its suburbs. Segments that link to the city center are named after influential politicians, with three of them named after former U.S. Presidents (Eisenhower, Kennedy, and Reagan) and one named after two-time Democratic candidate Adlai Stevenson. The Kennedy and Dan Ryan Expressways are the busiest state-maintained routes in the entire state of Illinois. Transit systems The Regional Transportation Authority (RTA) coordinates the operation of the three service boards: CTA, Metra, and Pace. The Chicago Transit Authority (CTA) handles public transportation in the City of Chicago and a few adjacent suburbs outside of the Chicago city limits. The CTA operates an extensive network of buses and a rapid transit elevated and subway system known as the 'L' (for "elevated"), with lines designated by colors. These rapid transit lines also serve both Midway and O'Hare Airports. The CTA's rail lines consist of the Red, Blue, Green, Orange, Brown, Purple, Pink, and Yellow lines.
Both the Red and Blue lines offer 24-hour service, which makes Chicago one of a handful of cities around the world (and one of two in the United States, the other being New York City) to offer rail service 24 hours a day, every day of the year, within the city's limits. Metra, the nation's second-most used passenger regional rail network, operates an 11-line commuter rail service in Chicago and throughout the Chicago suburbs. The Metra Electric Line shares its trackage with Northern Indiana Commuter Transportation District's South Shore Line, which provides commuter service between South Bend and Chicago. Pace provides bus and paratransit service in over 200 surrounding suburbs with some extensions into the city of Chicago.

In the final decades of the 19th century, Chicago was the destination of waves of immigrants from Ireland, Southern, Central and Eastern Europe, including Italians, Jews, Russians, Poles, Greeks, Lithuanians, Bulgarians, Albanians, Romanians, Turks, Croatians, Serbs, Bosnians, Montenegrins and Czechs. To these ethnic groups, the basis of the city's industrial working class, was added an additional influx of African Americans from the American South—with Chicago's black population doubling between 1910 and 1920 and doubling again between 1920 and 1930. In the 1920s and 1930s, the great majority of African Americans moving to Chicago settled in a so-called "Black Belt" on the city's South Side. A large number of blacks also settled on the West Side. By 1930, two-thirds of Chicago's black population lived in sections of the city which were 90% black in racial composition. Chicago's South Side emerged as the United States' second-largest urban black concentration, following New York's Harlem. Today, Chicago's South Side and the adjoining south suburbs constitute the largest black-majority region in the entire United States. Chicago's population declined in the latter half of the 20th century, from over 3.6 million in 1950 down to under 2.7 million by 2010.
By the time of the official census count in 1990, it was overtaken by Los Angeles as the United States' second-largest city. The city's population rose for the 2000 census, fell for the 2010 census, and rose again for the 2020 census. Per U.S. Census estimates, Chicago's largest racial or ethnic group is non-Hispanic Whites at 32.8% of the population, followed by Blacks at 30.1% and Hispanics at 29.0%. Chicago has the third-largest LGBT population in the United States. In 2018, the Chicago Department of Health estimated that 7.5% of the adult population, approximately 146,000 Chicagoans, were LGBTQ. In 2015, roughly 4% of the population identified as LGBT. Since the 2013 legalization of same-sex marriage in Illinois, over 10,000 same-sex couples have wed in Cook County, a majority of them in Chicago. Chicago became a "de jure" sanctuary city in 2012 when Mayor Rahm Emanuel and the City Council passed the Welcoming City Ordinance. According to the U.S. Census Bureau's American Community Survey data estimates for 2008–2012, the median income for a household in the city was $47,408, and the median income for a family was $54,188. Male full-time workers had a median income of $47,074, versus $42,063 for females. About 18.3% of families and 22.1% of the population lived below the poverty line. In 2018, Chicago ranked 7th globally for the highest number of ultra-high-net-worth residents, with roughly 3,300 residents worth more than $30 million.
According to the 2008–2012 American Community Survey, the ancestral groups having 10,000 or more persons in Chicago were: Ireland (137,799); Poland (134,032); Germany (120,328); Italy (77,967); China (66,978); American (37,118); UK (36,145); recent African (32,727); India (25,000); Russia (19,771); Arab (17,598); European (15,753); Sweden (15,151); Japan (15,142); Greece (15,129); France (except Basque) (11,410); Ukraine (11,104); and West Indian (except Hispanic groups) (10,349). Persons identifying themselves in "Other groups" were classified at 1.72 million, and unclassified or not reported were approximately 153,000. Religion Most people in Chicago are Christian, with the city being the 4th-most religious metropolis in the United States after Dallas, Atlanta and Houston. Roman Catholicism and Protestantism are the largest branches (34% and 35% respectively), followed by Eastern Orthodoxy and Jehovah's Witnesses with 1% each. Chicago also has a sizable non-Christian population. Non-Christian groups include the irreligious (22%), Judaism (3%), Islam (2%), Buddhism (1%) and Hinduism (1%). Chicago is the headquarters of several religious denominations, including the Evangelical Covenant Church and the Evangelical Lutheran Church in America. It is the seat of several dioceses. The Fourth Presbyterian Church is one of the largest Presbyterian congregations in the United States based on memberships. Since the 20th century, Chicago has also been the headquarters of the Assyrian Church of the East. In 2014, the Catholic Church was the largest individual Christian denomination (34%), with the Roman Catholic Archdiocese of Chicago being the largest Catholic jurisdiction. Evangelical Protestantism forms the largest theological Protestant branch (16%), followed by Mainline Protestants (11%), and historically Black churches (8%). Among denominational Protestant branches, Baptists formed the largest group in Chicago (10%); followed by Nondenominational (5%); Lutherans (4%); and Pentecostals (3%).
Non-Christian faiths accounted for 7% of the religious population in 2014. Judaism has at least 261,000 adherents, 3% of the population, making it the second-largest religion. A 2020 study estimated the total Jewish population of the Chicago metropolitan area, both religious and irreligious, at 319,600. The first two Parliaments of the World's Religions, in 1893 and 1993, were held in Chicago. Many international religious leaders have visited Chicago, including Mother Teresa, the Dalai Lama, and Pope John Paul II in 1979. Economy Chicago has the third-largest gross metropolitan product in the United States, about $670.5 billion according to September 2017 estimates. The city has also been rated as having the most balanced economy in the United States, due to its high level of diversification. In 2007, Chicago was named the fourth-most important business center in the world in the MasterCard Worldwide Centers of Commerce Index. Additionally, the Chicago metropolitan area recorded the greatest number of new or expanded corporate facilities in the United States for calendar year 2014. The Chicago metropolitan area has the third-largest science and engineering workforce of any metropolitan area in the nation. In 2009 Chicago placed ninth on the UBS list of the world's richest cities. Chicago was the base of commercial operations for industrialists John Crerar, John Whitfield Bunn, Richard Teller Crane, Marshall Field, John Farwell, Julius Rosenwald and many other commercial visionaries who laid the foundation for Midwestern and global industry. Chicago is a major world financial center, with the second-largest central business district in the United States. The city is the seat of the Federal Reserve Bank of Chicago, the Bank of the Seventh District.
The city has major financial and futures exchanges, including the Chicago Stock Exchange, the Chicago Board Options Exchange (CBOE), and the Chicago Mercantile Exchange (the "Merc"), which is owned, along with the Chicago Board of Trade (CBOT), by Chicago's CME Group. In 2017, Chicago exchanges traded 4.7 billion derivative contracts with a face value of over one quadrillion dollars. Chase Bank has its commercial and retail banking headquarters in Chicago's Chase Tower. Academically, Chicago has been influential through the Chicago school of economics, which has fielded some 12 Nobel Prize winners. The city and its surrounding metropolitan area contain the third-largest labor pool in the United States, with about 4.63 million workers. Illinois is home to 66 Fortune 1000 companies, including those in Chicago. The city of Chicago also hosts 12 Fortune Global 500 companies and 17 Financial Times 500 companies. The city claims three Dow 30 companies: aerospace giant Boeing, which moved its headquarters from Seattle to the Chicago Loop in 2001; McDonald's; and Walgreens Boots Alliance. For six consecutive years beginning in 2013, Chicago was ranked the nation's top metropolitan area for corporate relocations. Manufacturing, printing, publishing and food processing also play major roles in the city's economy. Several medical products and services companies are headquartered in the Chicago area, including Baxter International, Boeing, Abbott Laboratories, and the Healthcare division of General Electric. In addition to Boeing, which located its headquarters in Chicago in 2001, and United Airlines in 2011, GE Transportation moved its offices to the city in 2013 and GE Healthcare moved its headquarters there in 2016, as did ThyssenKrupp North America and agriculture giant Archer Daniels Midland.
Moreover, the construction of the Illinois and Michigan Canal, which helped move goods from the Great Lakes south on the Mississippi River, and of the railroads in the 19th century made the city a major transportation center in the United States. In the 1840s, Chicago became a major grain port, and in the 1850s and 1860s Chicago's pork and beef industry expanded. As the major meat companies grew in Chicago many, such as Armour and Company, created global enterprises. Although the meatpacking industry currently plays a lesser role in the city's economy, Chicago continues to be a major transportation and distribution center. Lured by a combination of large business customers, federal research dollars, and a large hiring pool fed by the area's universities, Chicago is also the site of a growing number of web startup companies like CareerBuilder, Orbitz, Basecamp, Groupon, Feedburner, Grubhub and NowSecure. Prominent food companies based in Chicago include the world headquarters of Conagra, Ferrara Candy Company, Kraft Heinz, McDonald's, Mondelez International, Quaker Oats, and US Foods. Chicago has been a hub of the retail sector since its early development, with Montgomery Ward, Sears, and Marshall Field's. Today the Chicago metropolitan area is the headquarters of several retailers, including Walgreens, Sears, Ace Hardware, Claire's, ULTA Beauty and Crate & Barrel. Late in the 19th century, Chicago was part of the bicycle craze, with the Western Wheel Company, which introduced stamping to the production process and significantly reduced costs, while early in the 20th century, the city was part of the automobile revolution, hosting the Brass Era car builder Bugmobile, which was founded there in 1907. Chicago was also the site of the Schwinn Bicycle Company. Chicago is a major world convention destination. The city's main convention center is McCormick Place. 
With its four interconnected buildings, it is the largest convention center in the nation and the third-largest in the world. Chicago also ranks third in the U.S. (behind Las Vegas and Orlando) in the number of conventions hosted annually. Chicago's minimum wage for non-tipped employees is one of the highest in the nation, reaching $15 in 2021. Culture and contemporary life The city's waterfront location and nightlife have attracted residents and tourists alike. Over a third of the city's population is concentrated in the lakefront neighborhoods, from Rogers Park in the north to South Shore in the south. The city has many upscale dining establishments as well as many ethnic restaurant districts. These districts include the Mexican American neighborhoods of Pilsen along 18th Street and La Villita along 26th Street; the Puerto Rican enclave of Paseo Boricua in the Humboldt Park neighborhood; Greektown, along South Halsted Street, immediately west of downtown; Little Italy, along Taylor Street; Chinatown in Armour Square; the Polish Patches in West Town; Little Seoul in Albany Park around Lawrence Avenue; Little Vietnam near Broadway in Uptown; and the Desi area along Devon Avenue in West Ridge. Downtown is the center of Chicago's financial, cultural, governmental and commercial institutions and the site of Grant Park and many of the city's skyscrapers. Many of the city's financial institutions, such as the CBOT and the Federal Reserve Bank of Chicago, are located within a section of downtown called "The Loop", an eight-block by five-block area of city streets encircled by elevated rail tracks. Locals also use the term "The Loop" to refer to the entire downtown area. The central area includes the Near North Side, the Near South Side, and the Near West Side, as well as the Loop. These areas contribute famous skyscrapers, abundant restaurants, shopping, museums, a stadium for the Chicago Bears, convention facilities, parkland, and beaches.
Lincoln Park contains the Lincoln Park Zoo and the Lincoln Park Conservatory. The River North Gallery District features the nation's largest concentration of contemporary art galleries outside of New York City. Lakeview is home to Boystown, the city's large LGBT nightlife and culture center. The Chicago Pride Parade, held the last Sunday in June, is one of the world's largest with over a million people in attendance. North Halsted Street is the main thoroughfare of Boystown. The South Side neighborhood of Hyde Park is the home of former US President Barack Obama. It also contains the University of Chicago, ranked one of the world's top ten universities, and the Museum of Science and Industry. The long Burnham Park stretches along the waterfront of the South Side. Two of the city's largest parks are also located on this side of the city: Jackson Park, bordering the waterfront, hosted the World's Columbian Exposition in 1893, and is the site of the aforementioned museum; and slightly west sits Washington Park. The two parks themselves are connected by a wide strip of parkland called the Midway Plaisance, running adjacent to the University of Chicago. The South Side hosts one of the city's largest parades, the annual African American Bud Billiken Parade and Picnic, which travels through Bronzeville to Washington Park. Ford Motor Company has an automobile assembly plant on the South Side in Hegewisch, and most of the facilities of the Port of Chicago are also on the South Side. The West Side holds the Garfield Park Conservatory, one of the largest collections of tropical plants in any U.S. city. Prominent Latino cultural attractions found here include Humboldt Park's Institute of Puerto Rican Arts and Culture and the annual Puerto Rican People's Parade, as well as the National Museum of Mexican Art and St. Adalbert's Church in Pilsen. 
The Near West Side holds the University of Illinois at Chicago and was once home to Oprah Winfrey's Harpo Studios, the site of which has been rebuilt as the global headquarters of McDonald's. The city's distinctive accent, made famous by its use in classic films like The Blues Brothers and television programs like the Saturday Night Live skit "Bill Swerski's Superfans", is an advanced form of Inland Northern American English. The dialect can also be found in other cities bordering the Great Lakes, such as Cleveland, Milwaukee, Detroit, and Rochester, New York, and most prominently features a rearrangement of certain vowel sounds, such as the short 'a' sound as in "cat", which can sound more like "kyet" to outsiders. The accent remains closely associated with the city. Entertainment and the arts Renowned Chicago theater companies include the Goodman Theatre in the Loop; the Steppenwolf Theatre Company and Victory Gardens Theater in Lincoln Park; and the Chicago Shakespeare Theater at Navy Pier. Broadway In Chicago offers Broadway-style entertainment at five theaters: the Nederlander Theatre, CIBC Theatre, Cadillac Palace Theatre, Auditorium Building of Roosevelt University, and Broadway Playhouse at Water Tower Place. Polish-language productions for Chicago's large Polish-speaking population can be seen at the historic Gateway Theatre in Jefferson Park. Since 1968, the Joseph Jefferson Awards have been given annually to acknowledge excellence in theater in the Chicago area. Chicago's theater community spawned modern improvisational theater, and includes the prominent groups The Second City and I.O. (formerly ImprovOlympic). The Chicago Symphony Orchestra (CSO) performs at Symphony Center and is recognized as one of the world's best orchestras. Also performing regularly at Symphony Center is the Chicago Sinfonietta, a more diverse and multicultural counterpart to the CSO. In the summer, many outdoor concerts are given in Grant Park and Millennium Park.
Ravinia Festival, located north of Chicago, is the summer home of the CSO and a favorite destination for many Chicagoans. The Civic Opera House is home to the Lyric Opera of Chicago. The Lithuanian Opera Company of Chicago was founded by Lithuanian Chicagoans in 1956 and presents operas in Lithuanian. The Joffrey Ballet and Chicago Festival Ballet perform in various venues, including the Harris Theater in Millennium Park. Chicago has several other contemporary and jazz dance troupes, such as Hubbard Street Dance Chicago and Chicago Dance Crash. Other live-music genres that are part of the city's cultural heritage include Chicago blues, Chicago soul, jazz, and gospel. The city is the birthplace of house music (a popular form of electronic dance music) and industrial music, and is the site of an influential hip hop scene. In the 1980s and 1990s, the city was the global center for house and industrial music, two forms of music created in Chicago, as well as being popular for alternative rock, punk, and new wave. The city has been a center for rave culture since the 1980s. A flourishing independent rock music culture brought forth Chicago indie. Annual festivals feature various acts, such as Lollapalooza and the Pitchfork Music Festival. A 2007 report on the Chicago music industry by the University of Chicago Cultural Policy Center ranked Chicago third among metropolitan U.S. areas in "size of music industry" and fourth among all U.S. cities in "number of concerts and performances". Chicago has a distinctive fine art tradition. For much of the twentieth century, it nurtured a strong style of figurative surrealism, as in the works of Ivan Albright and Ed Paschke. In 1968 and 1969, members of the Chicago Imagists, such as Roger Brown, Leon Golub, Robert Lostutter, Jim Nutt, and Barbara Rossi, produced bizarre representational paintings. Henry Darger is one of the most celebrated figures of outsider art.
Chicago contains a number of large outdoor works by well-known artists. These include the Chicago Picasso, Miró's Chicago, Flamingo and Flying Dragon by Alexander Calder, Agora by Magdalena Abakanowicz, Monument with Standing Beast by Jean Dubuffet, Batcolumn by Claes Oldenburg, Cloud Gate by Anish Kapoor, Crown Fountain by Jaume Plensa, and the Four Seasons mosaic by Marc Chagall. Chicago also hosts an annual, nationally televised Thanksgiving parade. The Chicago Thanksgiving Parade is broadcast live nationally on WGN-TV and WGN America, features a variety of acts from the community and marching bands from across the country, and is the only parade in the city to feature inflatable balloons every year. Tourism Chicago attracted 50.17 million domestic leisure travelers, 11.09 million domestic business travelers and 1.308 million overseas visitors. These visitors contributed more than billion to Chicago's economy. Upscale shopping along the Magnificent Mile and State Street, thousands of restaurants, and Chicago's eminent architecture continue to draw tourists. The city is the United States' third-largest convention destination. A 2017 study by Walk Score ranked Chicago the sixth-most walkable of the fifty largest cities in the United States. Most conventions are held at McCormick Place, just south of Soldier Field. The historic Chicago Cultural Center (1897), originally serving as the Chicago Public Library, now houses the city's Visitor Information Center, galleries and exhibit halls. The ceiling of its Preston Bradley Hall includes a Tiffany glass dome. Grant Park holds Millennium Park, Buckingham Fountain (1927), and the Art Institute of Chicago. The park also hosts the annual Taste of Chicago festival. In Millennium Park, the reflective Cloud Gate public sculpture by artist Anish Kapoor is the centerpiece of the AT&T Plaza. Nearby, an outdoor restaurant transforms into an ice rink in the winter season.
Two tall glass sculptures make up the Crown Fountain. The fountain's two towers display visual effects from LED images of Chicagoans' faces, along with water spouting from their lips. Frank Gehry's detailed stainless steel band shell, the Jay Pritzker Pavilion, hosts the classical Grant Park Music Festival concert series. Behind the pavilion's stage is the Harris Theater for Music and Dance, an indoor venue for mid-sized performing arts companies, including the Chicago Opera Theater and Music of the Baroque. Navy Pier, located just east of Streeterville, is long and houses retail stores, restaurants, museums, exhibition halls and auditoriums. In the summer of 2016, Navy Pier constructed a DW60 Ferris wheel. Dutch Wheels, a world-renowned manufacturer of Ferris wheels, was selected to design the new wheel. It features 42 navy blue gondolas, each of which can hold up to eight adults and two children, and includes entertainment systems inside the gondolas as well as a climate-controlled environment. The DW60 stands at approximately , which is taller than the previous wheel. The DW60 is the first of its kind in the United States and the sixth-tallest Ferris wheel in the country. Chicago was the first city in the world to erect a Ferris wheel. On June 4, 1998, the city officially opened the Museum Campus, a lakefront park surrounding three of the city's main museums, each of national importance: the Adler Planetarium & Astronomy Museum, the Field Museum of Natural History, and the Shedd Aquarium. The Museum Campus joins the southern section of Grant Park, which includes the renowned Art Institute of Chicago. Buckingham Fountain anchors the downtown park along the lakefront. The University of Chicago Oriental Institute has an extensive collection of ancient Egyptian and Near Eastern archaeological artifacts.
Other museums and galleries in Chicago include the Chicago History Museum, the Driehaus Museum, the DuSable Museum of African American History, the Museum of Contemporary Art, the Peggy Notebaert Nature Museum, the Polish Museum of America, the Museum of Broadcast Communications, the Pritzker Military Library, the Chicago Architecture Foundation, and the Museum of Science and Industry. With an estimated completion date of 2020, the Barack Obama Presidential Center will be housed at the University of Chicago in Hyde Park and include both the Obama presidential library and offices of the Obama Foundation. The Willis Tower (formerly named Sears Tower) is a popular destination for tourists. It has an observation deck, open to tourists year-round, with commanding views of Chicago and Lake Michigan. The observation deck includes an enclosed glass balcony that extends out from the side of the building, allowing visitors to look straight down. In 2013, Chicago was chosen as one of the "Top Ten Cities in the United States" to visit for its restaurants, skyscrapers, museums, and waterfront by the readers of Condé Nast Traveler, and in 2020, for the fourth year in a row, Chicago was named the top U.S. city tourism destination. Cuisine Chicago lays claim to a large number of regional specialties that reflect the city's ethnic and working-class roots. Included among these is its nationally renowned deep-dish pizza; this style is said to have originated at Pizzeria Uno. The Chicago-style thin crust is also popular in the city. Chicago pizza favorites include Lou Malnati's and Giordano's. The Chicago-style hot dog, typically an all-beef hot dog, is loaded with an array of toppings that often includes pickle relish, yellow mustard, pickled sport peppers, tomato wedges and a dill pickle spear, topped off with celery salt, on a poppy seed bun. Enthusiasts of the Chicago-style hot dog frown upon the use of ketchup as a garnish, but may prefer to add giardiniera.
A distinctly Chicago sandwich, the Italian beef sandwich is thinly sliced beef simmered in au jus and served on an Italian roll with sweet peppers or spicy giardiniera. A popular modification is the Combo, an Italian beef sandwich with the addition of an Italian sausage. The Maxwell Street Polish is a grilled or deep-fried kielbasa on a hot dog roll, topped with grilled onions, yellow mustard, and hot sport peppers. Chicken Vesuvio is roasted bone-in chicken cooked in oil and garlic next to garlicky oven-roasted potato wedges and a sprinkling of green peas. The Puerto Rican-influenced jibarito is a sandwich made with flattened, fried green plantains instead of bread. The mother-in-law is a tamale topped with chili and served on a hot dog bun. The tradition of serving the Greek dish saganaki while aflame has its origins in Chicago's Greek community. The appetizer, which consists of a square of fried cheese, is doused with Metaxa and flambéed tableside. Annual festivals feature various Chicago signature dishes, such as Taste of Chicago and the Chicago Food Truck Festival. One of the world's most decorated restaurants and a recipient of three Michelin stars, Alinea is located in Chicago. Well-known chefs who have had restaurants in Chicago include Charlie Trotter, Rick Tramonto, Grant Achatz, and Rick Bayless. In 2003, Robb Report named Chicago the country's "most exceptional dining destination". Literature Chicago literature finds its roots in the city's tradition of lucid, direct journalism, lending to a strong tradition of social realism. In the Encyclopedia of Chicago, Northwestern University professor Bill Savage describes Chicago fiction as prose which tries to "capture the essence of the city, its spaces and its people". The challenge for early writers was that Chicago was a frontier outpost that transformed into a global metropolis in the span of two generations.
Narrative fiction of that time, much of it in the style of "high-flown romance" and "genteel realism", needed a new approach to describe the urban social, political, and economic conditions of Chicago. Nonetheless, Chicagoans worked hard to create a literary tradition that would stand the test of time, and to create a "city of feeling" out of concrete, steel, vast lake, and open prairie. Much notable Chicago fiction focuses on the city itself, with social criticism keeping exultation in check. At least three short periods in the history of Chicago have had a lasting influence on American literature: the time from the Great Chicago Fire to about 1900, what became known as the Chicago Literary Renaissance in the 1910s and early 1920s, and the period of the Great Depression through the 1940s. What would become the influential Poetry magazine was founded in 1912 by Harriet Monroe, who was working as an art critic for the Chicago Tribune. The magazine discovered such poets as Gwendolyn Brooks, James Merrill, and John Ashbery. T. S. Eliot's first professionally published poem, "The Love Song of J. Alfred Prufrock", appeared first in Poetry. Contributors have included Ezra Pound, William Butler Yeats, William Carlos Williams, Langston Hughes, and Carl Sandburg, among others. The magazine was instrumental in launching the Imagist and Objectivist poetic movements. From the 1950s through the 1970s, American poetry continued to evolve in Chicago. In the 1980s, a modern form of performance poetry, the poetry slam, began in Chicago. Sports Sporting News named Chicago the "Best Sports City" in the United States in 1993, 2006, and 2010. Along with Boston, Chicago is one of only two cities to have hosted major professional sports continuously since 1871, taking only 1872 and 1873 off due to the Great Chicago Fire.
Additionally, Chicago is one of eight cities in the United States to have won championships in all four major professional leagues and, along with Los Angeles, New York, Philadelphia and Washington, is one of five cities to have won soccer championships as well. All of its major franchises have won championships in recent decades: the Bears (1985), the Bulls (1991, 1992, 1993, 1996, 1997, and 1998), the White Sox (2005), the Cubs (2016), the Blackhawks (2010, 2013, 2015), and the Fire (1998). Chicago has the third-most franchises in the four major North American sports leagues with five, behind the New York and Los Angeles metropolitan areas, and has six top-level professional sports clubs when including Chicago Fire FC of Major League Soccer (MLS). The city has two Major League Baseball (MLB) teams: the Chicago Cubs of the National League play at Wrigley Field on the North Side, and the Chicago White Sox of the American League play at Guaranteed Rate Field on the South Side. Chicago is the only city that has had more than one MLB franchise every year since the AL began in 1901 (New York hosted only one between 1958 and early 1962). The two teams have faced each other in a World Series only once: in 1906, when the White Sox, known as the "Hitless Wonders", defeated the Cubs, 4–2. The Cubs are the oldest Major League Baseball team never to have changed their city; they have played in Chicago since 1871, and continuously since 1874 after a two-season interruption caused by the Great Chicago Fire. They have played more games and have more wins than any other team in Major League Baseball since 1876. They have won three World Series titles, including the 2016 World Series, but long held the dubious honor of the two longest droughts in American professional sports: they had not won their sport's title since 1908, and had not participated in a World Series since 1945, both records, until they beat the Cleveland Indians in the 2016 World Series.
The White Sox have played on the South Side continuously since 1901, with all three of their home fields over the years located within blocks of one another. They have won three World Series titles (1906, 1917, 2005) and six American League pennants, including the first in 1901. The Sox are fifth in the American League in all-time wins and sixth in pennants. The Chicago Bears, one of the last two remaining charter members of the National Football League (NFL), have won nine NFL championships, including Super Bowl XX following the 1985 season. The other remaining charter franchise, the Chicago Cardinals, also started out in the city but is now known as the Arizona Cardinals. The Bears have won more games in the history of the NFL than any other team, and only the Green Bay Packers, their longtime rivals, have won more championships. The Bears play their home games at Soldier Field, which re-opened in 2003 after an extensive renovation. The Chicago Bulls of the National Basketball Association (NBA) are one of the most recognized basketball teams in the world. During the 1990s, with Michael Jordan leading them, the Bulls won six NBA championships in eight seasons. They also boast the youngest player to win the NBA Most Valuable Player Award, Derrick Rose, who won it for the 2010–11 season. The Chicago Blackhawks of the National Hockey League (NHL) began play in 1926 and are one of the "Original Six" teams of the NHL. The Blackhawks have won six Stanley Cups, including in 2010, 2013, and 2015. Both the Bulls and the Blackhawks play at the United Center. Chicago Fire FC is a member of Major League Soccer (MLS) and plays at Soldier Field. After playing its first eight seasons at Soldier Field, the team moved to suburban Bridgeview to play at SeatGeek Stadium; in 2019, the team announced a move back to Soldier Field. The Fire have won one league title and four U.S. Open Cups since their founding in 1997.
In 1994, the United States hosted a successful FIFA World Cup, with games played at Soldier Field. The Chicago Sky is a professional basketball team playing in the Women's National Basketball Association (WNBA). They play home games at Wintrust Arena. The team was founded before the 2006 WNBA season began. The Chicago Marathon has been held each year since 1977 except for 1987, when a half marathon was run in its place. The Chicago Marathon is one of six World Marathon Majors. Five area colleges play in Division I conferences: two from major conferences, the DePaul Blue Demons (Big East Conference) and the Northwestern Wildcats (Big Ten Conference), and three from other D1 conferences, the Chicago State Cougars (Western Athletic Conference), the Loyola Ramblers (Missouri Valley Conference), and the UIC Flames (Horizon League). Chicago has also entered into esports with the creation of the Chicago Huntsmen, a professional Call of Duty team that competes in the Call of Duty League (CDL). At the league's Launch Week games in Minneapolis, Minnesota, the Chicago Huntsmen beat both the Dallas Empire and OpTic Gaming Los Angeles. Parks and greenspace When Chicago was incorporated in 1837, it chose the motto Urbs in Horto, a Latin phrase meaning "City in a Garden". Today, the Chicago Park District consists of more than 570 parks with over of municipal parkland. There are 31 sand beaches, a plethora of museums, two world-class conservatories, and 50 nature areas. Lincoln Park, the largest of the city's parks, covers and has over 20 million visitors each year, making it third in the number of visitors after Central Park in New York City and the National Mall and Memorial Parks in Washington, D.C. There is a historic boulevard system, a network of wide, tree-lined boulevards that connect a number of Chicago parks. The boulevards and the parks were authorized by the Illinois legislature in 1869.
A number of Chicago neighborhoods emerged along these roadways in the 19th century. The building of the boulevard system continued intermittently until 1942. It includes nineteen boulevards, eight parks, and six squares along twenty-six miles of interconnected streets. The Chicago Park Boulevard System Historic District was listed on the National Register of Historic Places in 2018. With berths for more than 6,000 boats, the Chicago Park District operates the nation's largest municipal harbor system. In addition to ongoing beautification and renewal projects for the existing parks, a number of new parks have been added in recent years, such as the Ping Tom Memorial Park in Chinatown, DuSable Park on the Near North Side, and most notably, Millennium Park, which sits in the northwestern corner of one of Chicago's oldest parks, Grant Park, in the Chicago Loop. The wealth of greenspace afforded by Chicago's parks is further augmented by the Cook County Forest Preserves, a network of open spaces containing forest, prairie, wetland, streams, and lakes that are set aside as natural areas along the city's outskirts, including both the Chicago Botanic Garden in Glencoe and the Brookfield Zoo in Brookfield. Washington Park is also one of the city's biggest parks, covering nearly , and is listed on the National Register of Historic Places. Law and government Government The government of the City of Chicago is divided into executive and legislative branches. The mayor of Chicago is the chief executive, elected by general election for a term of four years, with no term limits. The current mayor is Lori Lightfoot. The mayor appoints commissioners and other officials who oversee the various departments. As well as the mayor, Chicago's clerk and treasurer are also elected citywide. The City Council is the legislative branch and is made up of 50 aldermen, one elected from each ward in the city.
The council takes official action through the passage of ordinances and resolutions and approves the city budget. The Chicago Police Department provides law enforcement, and the Chicago Fire Department provides fire suppression and emergency medical services for the city and its residents. Civil and criminal law cases are heard in the Cook County Circuit Court of the State of Illinois court system, or in the Northern District of Illinois in the federal system. In the state court, the public prosecutor is the Illinois state's attorney; in the federal court, it is the United States attorney. Politics During much of the last half of the 19th century, Chicago's politics were dominated by a growing Democratic Party organization. During the 1880s and 1890s, Chicago had a powerful radical tradition, with large and highly organized socialist, anarchist and labor organizations. For much of the 20th century, Chicago was among the largest and most reliable Democratic strongholds in the United States; with Chicago's Democratic vote, the state of Illinois has been "solid blue" in presidential elections since 1992. Even before then, it was not unheard of for Republican presidential candidates to win handily in downstate Illinois, only to lose statewide due to large Democratic margins in Chicago. The citizens of Chicago have not elected a Republican mayor since 1927, when William Thompson was voted into office. The strength of the party in the city is partly a consequence of Illinois state politics, where the Republicans have come to represent rural and farm concerns while the Democrats support urban issues such as Chicago's public school funding. Chicago contains less than 25% of the state's population, but it is split between eight of Illinois' 19 districts in the United States House of Representatives. All eight of the city's representatives are Democrats; only two Republicans have represented a significant portion of the city since 1973, for one term each: Robert P.
Hanrahan from 1973 to 1975, and Michael Patrick Flanagan from 1995 to 1997. Machine politics persisted in Chicago after the decline of similar machines in other large U.S. cities. During much of that time, the city administration found opposition mainly from a liberal "independent" faction of the Democratic Party. The independents finally gained control of city government in 1983 with the election of Harold Washington (in office 1983–1987). From 1989 until May 16, 2011, Chicago was under the leadership of its longest-serving mayor, Richard M. Daley, the son of Richard J. Daley. Because of the dominance of the Democratic Party in Chicago, the Democratic primary vote held in the spring is generally more significant than the general elections in November for U.S. House and Illinois State seats. The aldermanic, mayoral, and other city offices are filled through nonpartisan elections with runoffs as needed. The city is home of former United States President Barack Obama and First Lady Michelle Obama; Barack Obama was formerly a state legislator representing Chicago and later a US senator. The Obamas' residence is located near the University of Chicago in Kenwood on the city's south side.

Crime

Chicago had a murder rate of 18.5 per 100,000 residents in 2012, ranking 16th among US cities with 100,000 people or more. This was higher than in New York City and Los Angeles, the two largest cities in the United States, which have lower murder rates and lower total homicides. However, it was less than in many smaller American cities, including New Orleans.
as being a P5 Pentium 166's equal. However, the PR rating was not an entirely truthful representation of the 6x86's performance. While the 6x86's integer performance was significantly higher than the P5 Pentium's, its floating point performance was more mediocre—between 2 and 4 times the performance of the 486 FPU per clock cycle (depending on the operation and precision). The FPU in the 6x86 was largely the same circuitry that was developed for Cyrix's earlier high performance 8087/80287/80387-compatible coprocessors, which was very fast for its time—the Cyrix FPU was much faster than the 80387, and even the 80486 FPU. However, it was still considerably slower than the new and completely redesigned P5 Pentium and P6 Pentium Pro-Pentium III FPUs. During the 6x86's development, the majority of applications (office software as well as games) performed almost entirely integer operations. The designers foresaw that future applications would most likely maintain this instruction focus. So, to optimize the chip's performance for what they believed to be the most likely application of the CPU, the integer execution resources received most of the transistor budget. This would later prove to be a strategic mistake, as the popularity of the P5 Pentium caused many software developers to hand-optimize code in assembly language to take advantage of the P5 Pentium's tightly pipelined and lower latency FPU. For example, the highly anticipated first-person shooter Quake used highly optimized assembly code designed almost entirely around the P5 Pentium's FPU. As a result, the P5 Pentium significantly outperformed other CPUs in the game. Therefore, despite being very fast clock for clock, the 6x86 and MII were forced to compete at the low end of the market, as the AMD K6 and Intel P6 Pentium II were always ahead on clock speed. The 6x86's and MII's old generation "486 class" floating point unit, combined with an integer section that was at best on par with the newer P6 and K6 chips, meant that Cyrix could no longer compete in performance.

Models

6x86

The 6x86 (codename M1) was released by Cyrix in 1996. The first generation of 6x86 had heat problems. This was primarily caused by their higher heat output than other x86 CPUs of the day; as such, computer builders sometimes did not equip them with adequate cooling. The CPUs topped out at around 25 W heat output (like the AMD K6), whereas the P5 Pentium produced around 15 W of waste heat at its peak. However, both numbers would be a fraction of the heat generated by many high performance processors some years later.

6x86L

The 6x86L (codename M1L) was later released by Cyrix to address heat issues; the L stands for low-power. Improved manufacturing technologies permitted usage of a lower Vcore. Just like the Pentium MMX, the 6x86L required a split powerplane voltage regulator with separate voltages for I/O and CPU core.

6x86MX / MII

Another release of the 6x86, the 6x86MX, added MMX compatibility along with the EMMI instruction set, improved compatibility with the Pentium and Pentium Pro by adding a Time Stamp Counter and CMOVcc instructions respectively, and quadrupled the primary cache size to 64 KB. The 256-byte instruction line cache can be turned into a scratchpad cache to provide support for multimedia operations. Later revisions of this chip were renamed MII, to better compete with the Pentium II processor. Unfortunately, the 6x86MX / MII was late to market and could not scale well in clock speed with the manufacturing processes used at the time.
associated with every item in a library, and so form a reasonably universal sorting system. As an example, the subject "research in the cure of tuberculosis of lungs by x-ray conducted in India in 1950" would be categorized as:

Medicine,Lungs;Tuberculosis:Treatment;X-ray:Research.India'1950

This is summarized in a specific call number:

L,45;421:6;253:f.44'N5

Organization

The colon classification system uses 42 main classes that are combined with other letters, numbers, and marks in a manner resembling the Library of Congress Classification.

Facets

CC uses five primary categories, or facets, to specify the sorting of a publication. Collectively, they are called PMEST: Personality, Matter, Energy, Space, and Time. Other symbols can be used to indicate components of facets called isolates, and to specify complex combinations or relationships between disciplines.

Classes

The following are the main classes of CC, with some subclasses, the main method used to sort the subclass using the PMEST scheme, and examples showing application of PMEST.

z Generalia
1 Universe of Knowledge
2 Library Science
3 Book science
4 Journalism
A Natural science
B Mathematics
B2 Algebra
C Physics
D Engineering
E Chemistry
F Technology
G Biology
H Geology
HX Mining
I Botany
J Agriculture
J1 Horticulture
J2 Feed
J3 Food
J4 Stimulant
J5 Oil
J6 Drug
J7 Fabric
J8 Dye
K Zoology
KZ Animal Husbandry
L Medicine
LZ3 Pharmacology
LZ5 Pharmacopoeia
M Useful arts
M7 Textiles [material]:[work]
Δ Spiritual experience and mysticism [religion],[entity]:[problem]
N Fine arts
ND Sculpture
NN Engraving
NQ Painting
NR Music
O Literature
P Linguistics
Q Religion
R Philosophy
S Psychology
T Education
U Geography
V History
W Political science
X Economics
Y Sociology
YZ Social Work
Z Law

Example
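The facet notation in the worked example can be sketched programmatically. This is a toy illustration only, not part of any real CC implementation: the facet-indicator symbols are read off the example string itself, and the numeric class codes are simply the ones shown there, passed in by hand rather than looked up in the schedules.

```python
# Illustrative sketch of assembling a Colon Classification call number.
# Facet indicator symbols per the PMEST convention in the example above:
# "," Personality, ";" Matter, ":" Energy, "." Space, "'" Time.

FACET_SYMBOLS = {
    "personality": ",",
    "matter": ";",
    "energy": ":",
    "space": ".",
    "time": "'",
}

def build_call_number(main_class: str, facets: list[tuple[str, str]]) -> str:
    """Concatenate a main class with (facet_kind, code) pairs in order."""
    parts = [main_class]
    for kind, code in facets:
        parts.append(FACET_SYMBOLS[kind] + code)
    return "".join(parts)

# Reconstructing the tuberculosis example from the text:
# L (Medicine) ,45 (Lungs) ;421 (Tuberculosis) :6 (Treatment)
# ;253 (X-ray) :f (Research) .44 (India) 'N5 (1950)
call = build_call_number("L", [
    ("personality", "45"),
    ("matter", "421"),
    ("energy", "6"),
    ("matter", "253"),
    ("energy", "f"),
    ("space", "44"),
    ("time", "N5"),
])
print(call)  # L,45;421:6;253:f.44'N5
```

The point of the sketch is that a colon call number is a pure concatenation: order and indicator symbols alone carry the facet structure, which is what makes the scheme "analytico-synthetic" rather than purely enumerative.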
frame such as an address register. Census counts are necessary to adjust samples to be representative of a population by weighting them as is common in opinion polling. Similarly, stratification requires knowledge of the relative sizes of different population strata, which can be derived from census enumerations. In some countries, the census provides the official counts used to apportion the number of elected representatives to regions (sometimes controversially – e.g., Utah v. Evans). In many cases, a carefully chosen random sample can provide more accurate information than attempts to get a population census.

Sampling

A census is often construed as the opposite of a sample as its intent is to count everyone in a population rather than a fraction. However, population censuses do rely on a sampling frame to count the population. This is the only way to be sure that everyone has been included as otherwise those not responding would not be followed up on and individuals could be missed. The fundamental premise of a census is that the population is not known and a new estimate is to be made by the analysis of primary data. The use of a sampling frame is counterintuitive as it suggests that the population size is already known. However, a census is also used to collect attribute data on the individuals in the nation, not only to assess population size. This process of sampling marks the difference between a historical census, which was a house to house process or the product of an imperial decree, and the modern statistical project. The sampling frame used by census is almost always an address register. Thus it is not known if there is anyone resident or how many people there are in each household. Depending on the mode of enumeration, a form is sent to the householder, an enumerator calls, or administrative records for the dwelling are accessed. As a preliminary to the dispatch of forms, census workers will check any address problems on the ground.
While it may seem straightforward to use the postal service file for this purpose, this can be out of date and some dwellings may contain a number of independent households. A particular problem is what are termed 'communal establishments', a category which includes student residences, religious orders, homes for the elderly, people in prisons, etc. As these are not easily enumerated by a single householder, they are often treated differently and visited by special teams of census workers to ensure they are classified appropriately.

Residence definitions

Individuals are normally counted within households, and information is typically collected about the household structure and the housing. For this reason international documents refer to censuses of population and housing. Normally the census response is made by a household, indicating details of individuals resident there. An important aspect of census enumerations is determining which individuals can be counted and which cannot be counted. Broadly, three definitions can be used: de facto residence; de jure residence; and permanent residence. This is important in considering individuals who have multiple or temporary addresses. Every person should be identified uniquely as resident in one place; but the place where they happen to be on Census Day, their de facto residence, may not be the best place to count them. Where an individual uses services may be more useful, and this is at their usual residence. An individual may be recorded at a "permanent" address, which might be a family home for students or long term migrants. A precise definition of residence is needed, to decide whether visitors to a country should be included in the population count. This is becoming more important as students travel abroad for education for a period of several years.
Other groups causing problems of enumeration are new-born babies, refugees, people away on holiday, people moving home around census day, and people without a fixed address. People with second homes because they are working in another part of the country or have a holiday cottage are difficult to fix at a particular address; this sometimes causes double counting or houses being mistakenly identified as vacant. Another problem is where people use a different address at different times e.g. students living at their place of education in term time but returning to a family home during vacations, or children whose parents have separated who effectively have two family homes. Census enumeration has always been based on finding people where they live, as there is no systematic alternative: any list used to find people is likely to be derived from census activities in the first place. Recent UN guidelines provide recommendations on enumerating such complex households. In the census of agriculture, data is collected at the agricultural holding unit. An agricultural holding is an economic unit of agricultural production under single management comprising all livestock kept and all land used wholly or partly for agricultural production purposes, without regard to title, legal form, or size. Single management may be exercised by an individual or household, jointly by two or more individuals or households, by a clan or tribe, or by a juridical person such as a corporation, cooperative or government agency. The holding's land may consist of one or more parcels, located in one or more separate areas or in one or more territorial or administrative divisions, providing the parcels share the same production means, such as labour, farm buildings, machinery or draught animals.

Enumeration strategies

Historical censuses used crude enumeration assuming absolute accuracy.
Modern approaches take into account the problems of overcount and undercount, and the coherence of census enumerations with other official sources of data. This reflects a realist approach to measurement, acknowledging that under any definition of residence there is a true value of the population but this can never be measured with complete accuracy. An important aspect of the census process is to evaluate the quality of the data. Many countries use a post-enumeration survey to adjust the raw census counts. This works in a similar manner to capture-recapture estimation for animal populations. Among census experts this method is called dual system enumeration (DSE). A sample of households are visited by interviewers who record the details of the household as at census day. These data are then matched to census records, and the number of people missed can be estimated by considering the numbers of people who are included in one count but not the other. This allows adjustments to the count for non-response, varying between different demographic groups. An explanation using a fishing analogy can be found in "Trout, Catfish and Roach..." which won an award from the Royal Statistical Society for excellence in official statistics in 2011. Triple system enumeration has been proposed as an improvement as it would allow evaluation of the statistical dependence of pairs of sources. However, as the matching process is the most difficult aspect of census estimation this has never been implemented for a national enumeration. It would also be difficult to identify three different sources that were sufficiently different to make the triple system effort worthwhile. The DSE approach has another weakness in that it assumes there is no person counted twice (over count). In de facto residence definitions this would not be a problem but in de jure definitions individuals risk being recorded on more than one form leading to double counting. 
A particular problem here is students who often have a term time and family address. Several countries have used a system which is known as short form/long form. This is a sampling strategy which randomly chooses a proportion of people to send a more detailed questionnaire to (the long form). Everyone receives the short form questions. This means more data are collected, but without imposing a burden on the whole population. This also reduces the burden on the statistical office. Indeed, in the UK until 2001 all residents were required to fill in the whole form but only a 10% sample were coded and analysed in detail. New technology means that all data are now scanned and processed. During the 2011 Canadian census there was controversy about the cessation of the mandatory long form census; the head of Statistics Canada, Munir Sheikh, resigned upon the federal government's decision to do so. The use of alternative enumeration strategies is increasing but these are not as simple as many people assume, and are only used in developed countries. The Netherlands has been most advanced in adopting a census using administrative data. This allows a simulated census to be conducted by linking several different administrative databases at an agreed time. Data can be matched and an overall enumeration established allowing for discrepancies between different data sources. A validation survey is still conducted in a similar way to the post enumeration survey employed in a traditional census. Other countries which have a population register use this as a basis for all the census statistics needed by users. This is most common among Nordic countries, but requires many distinct registers to be combined, including population, housing, employment and education. These registers are then combined and brought up to the standard of a statistical register by comparing the data in different sources and ensuring the quality is sufficient for official statistics to be produced. 
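The dual system enumeration method described above is essentially a capture-recapture calculation (often called the Lincoln-Petersen estimator): if the census and the post-enumeration survey are independent counts, the match rate between them estimates coverage. A minimal sketch with invented counts:

```python
# Dual system enumeration via the capture-recapture (Lincoln-Petersen)
# estimator: N_hat = (census_count * survey_count) / matched.
# All counts below are invented for illustration.

def dse_estimate(census_count: int, survey_count: int, matched: int) -> float:
    """Estimate the true population from two partial counts and their overlap."""
    return census_count * survey_count / matched

# 950 people counted in the census, 400 in the post-enumeration survey,
# and 380 individuals matched between the two lists:
n_hat = dse_estimate(950, 400, 380)
print(n_hat)  # 1000.0, i.e. an estimated undercount of about 50 people
```

The intuition: 380/400 = 95% of survey respondents were found in the census, so the census count of 950 is taken to be 95% of the true population. In practice this is computed separately for demographic groups, since non-response varies between them, and the independence assumption is what triple system enumeration would relax.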
A recent innovation is the French instigation of a rolling census programme with different regions enumerated each year, so that the whole country is completely enumerated every 5 to 10 years. In Europe, in connection with the 2010 census round, many countries adopted alternative census methodologies, often based on the combination of data from registers, surveys and other sources.

Technology

Censuses have evolved in their use of technology: censuses in 2010 used many new types of computing. In Brazil, handheld devices were used by enumerators to locate residences on the ground. In many countries, census returns could be made via the Internet as well as in paper form. DSE is facilitated by computer matching techniques which can be automated, such as propensity score matching. In the UK, all census formats are scanned and stored electronically before being destroyed, replacing the need for physical archives.
that proceed with the absorption of light by atoms or molecules.. History of physical chemistry – history of the study of macroscopic, atomic, subatomic, and particulate phenomena in chemical systems in terms of physical laws and concepts. History of chemical kinetics – history of the study of rates of chemical processes. History of chemical thermodynamics – history of the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics. History of electrochemistry – history of the branch of chemistry that studies chemical reactions which take place in a solution at the interface of an electron conductor (a metal or a semiconductor) and an ionic conductor (the electrolyte), and which involve electron transfer between the electrode and the electrolyte or species in solution. History of Femtochemistry – history of the Femtochemistry is the science that studies chemical reactions on extremely short timescales, approximately 10−15 seconds (one femtosecond, hence the name). History of mathematical chemistry – history of the area of research engaged in novel applications of mathematics to chemistry; it concerns itself principally with the mathematical modeling of chemical phenomena. History of mechanochemistry – history of the coupling of the mechanical and the chemical phenomena on a molecular scale and includes mechanical breakage, chemical behaviour of mechanically stressed solids (e.g., stress-corrosion cracking), tribology, polymer degradation under shear, cavitation-related phenomena (e.g., sonochemistry and sonoluminescence), shock wave chemistry and physics, and even the burgeoning field of molecular machines. History of physical organic chemistry – history of the study of the interrelationships between structure and reactivity in organic molecules. 
History of quantum chemistry – history of the branch of chemistry whose primary focus is the application of quantum mechanics in physical models and experiments of chemical systems. History of sonochemistry – history of the study of the effect of sonic waves and wave properties on chemical systems. History of stereochemistry – history of the study of the relative spatial arrangement of atoms within molecules. History of supramolecular chemistry – history of the area of chemistry beyond the molecules and focuses on the chemical systems made up of a discrete number of assembled molecular subunits or components. History of thermochemistry – history of the study of the energy and heat associated with chemical reactions and/or physical transformations. History of phytochemistry – history of the strict sense of the word the study of phytochemicals. History of polymer chemistry – history of the multidisciplinary science that deals with the chemical synthesis and chemical properties of polymers or macromolecules. History of solid-state chemistry – history of the study of the synthesis, structure, and properties of solid phase materials, particularly, but not necessarily exclusively of, non-molecular solids History of multidisciplinary fields involving chemistry: History of chemical biology – history of the scientific discipline spanning the fields of chemistry and biology that involves the application of chemical techniques and tools, often compounds produced through synthetic chemistry, to the study and manipulation of biological systems. History of chemical engineering – history of the branch of engineering that deals with physical science (e.g., chemistry and physics), and life sciences (e.g., biology, microbiology and biochemistry) with mathematics and economics, to the process of converting raw materials or chemicals into more useful or valuable forms. 
History of chemical oceanography – history of the study of the behavior of the chemical elements within the Earth's oceans. History of chemical physics – history of the branch of physics that studies chemical processes from the point of view of physics. History of materials science – history of the interdisciplinary field applying the properties of matter to various areas of science and engineering. History of nanotechnology – history of the study of manipulating matter on an atomic and molecular scale. History of oenology – history of the science and study of all aspects of wine and winemaking except vine-growing and grape-harvesting, which is a subfield called viticulture. History of spectroscopy – history of the study of the interaction between matter and radiated energy. History of surface science – history of the study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces, and liquid–gas interfaces. History of chemicals History of chemical elements History of carbon History of hydrogen Timeline of hydrogen technologies History of oxygen History of chemical products History of aspirin History of cosmetics History of gunpowder History of pharmaceutical drugs History of vitamins History of chemical processes History of manufactured gas History of the Haber process History of the chemical industry History of the petroleum industry History of the pharmaceutical industry History of the periodic table Chemicals Dictionary of chemical formulas List of biomolecules List of inorganic compounds Periodic table Atomic Theory Atomic theory Atomic models Atomism – Natural philosophy that theorizes that the world is composed of indivisible pieces. 
Plum pudding model Rutherford model Bohr model Thermochemistry Thermochemistry Terminology Thermochemistry – Chemical kinetics – the study of the rates of chemical reactions; investigates how different experimental conditions can influence the speed of a chemical reaction and yield information about the reaction's mechanism and transition states, as well as the construction of mathematical models that can describe the characteristics of a chemical reaction. Exothermic – a process or reaction in which the system releases energy to its surroundings in the form of heat; denoted by negative heat flow. Endothermic – a process or reaction in which the system absorbs energy from its surroundings in the form of heat; denoted by positive heat flow. Thermochemical equation – Enthalpy change – change in enthalpy (the internal energy of a system plus the product of its pressure and volume); the enthalpy change in a system is equal to the heat brought to the system at constant pressure. Enthalpy of reaction – Temperature – an objective comparative measure of heat. Calorimeter – an object used for calorimetry, or the process of measuring the heat of chemical reactions or physical changes as well as heat capacity. Heat – a form of energy associated with the kinetic energy of atoms or molecules and capable of being transmitted through solid and fluid media by conduction, through fluid media by convection, and through empty space by radiation. Joule – a unit of energy. Calorie – Specific heat – Specific heat capacity – Latent heat – Heat of fusion – Heat of vaporization – Collision theory – Activation energy – Activated complex – Reaction rate – Catalyst – Thermochemical Equations Chemical equations that include the heat involved in a reaction, either on the reactant side or the product side. Examples: H2O(l) + 240 kJ → H2O(g) N2 + 3H2 → 2NH3 + 92 kJ Joule (J) – Enthalpy How to calculate the enthalpy of a reaction? 
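The enthalpy question above can be answered with a minimal Hess's-law sketch: the enthalpy of a reaction is the sum of the formation enthalpies of the products minus those of the reactants. The `reaction_enthalpy` helper below and its ΔHf values are illustrative (standard textbook figures, not drawn from this outline):

```python
# Hess's law sketch: dH_rxn = sum(n * dHf, products) - sum(n * dHf, reactants).
# dHf values are in kJ/mol; elements in their standard states have dHf = 0.

def reaction_enthalpy(reactants, products, dHf):
    """Return the reaction enthalpy in kJ for {species: coefficient} maps."""
    side_total = lambda side: sum(coef * dHf[sp] for sp, coef in side.items())
    return side_total(products) - side_total(reactants)

dHf = {"N2": 0.0, "H2": 0.0, "NH3": -46.1}

# N2 + 3H2 -> 2NH3, the exothermic example above: negative dH, heat released.
dH = reaction_enthalpy({"N2": 1, "H2": 3}, {"NH3": 2}, dHf)
print(dH)  # -92.2 (kJ), matching the ~92 kJ on the product side above
```

A negative result marks the reaction as exothermic (the heat term appears on the product side of the thermochemical equation); a positive result marks it as endothermic.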
Enthalpy and Thermochemical Equations Endothermic Reactions Exothermic Reactions Potential Energy Diagrams Thermochemistry Stoichiometry Chemists For more chemists, see: Nobel Prize in Chemistry and List of chemists Elias James Corey Marie Curie John Dalton Humphry Davy Eleuthère Irénée du Pont George Eastman Michael Faraday Dmitriy Mendeleyev Alfred Nobel Wilhelm Ostwald Louis Pasteur Linus Pauling Joseph Priestley Karl Ziegler Ahmed Zewail Robert Burns Woodward Rosalind Franklin Amedeo Avogadro Chemistry literature Scientific literature – Scientific journal – Academic journal – List of important publications in chemistry List of scientific journals in chemistry List of science magazines Scientific American Lists Chemical elements data references List of chemical elements — atomic mass, atomic number, symbol, name List of minerals - Minerals Electron configurations of the elements (data page) — electron configuration, electrons per shell Densities of the elements (data page) — density (solid, liquid,
to chemistry Alchemy (outline) History of alchemy History of the branches of chemistry History of analytical chemistry – history of the study of the separation, identification, and quantification of the chemical components of natural and artificial materials. History of astrochemistry – history of the study of the abundance and reactions of chemical elements and molecules in the universe, and their interaction with radiation. History of cosmochemistry – history of the study of the chemical composition of matter in the universe and the processes that led to those compositions. History of atmospheric chemistry – history of the branch of atmospheric science in which the chemistry of the Earth's atmosphere and that of other planets is studied. 
It is a multidisciplinary field of research and draws on environmental chemistry, physics, meteorology, computer modeling, oceanography, geology and volcanology and other disciplines. History of biochemistry – history of the study of chemical processes in living organisms, including, but not limited to, living matter. Biochemistry governs all living organisms and living processes. History of agrochemistry – history of the study of both chemistry and biochemistry which are important in agricultural production, the processing of raw products into foods and beverages, and in environmental monitoring and remediation. History of bioinorganic chemistry – history of the field that examines the role of metals in biology. History of bioorganic chemistry – history of the rapidly growing scientific discipline that combines organic chemistry and biochemistry. History of biophysical chemistry – history of the branch of chemistry that covers a broad spectrum of research activities involving biological systems. History of environmental chemistry – history of the scientific study of the chemical and biochemical phenomena that occur in natural places. History of immunochemistry – history of the branch of chemistry that involves the study of the reactions and components of the immune system. History of medicinal chemistry – history of the discipline at the intersection of chemistry, especially synthetic organic chemistry, and pharmacology and various other biological specialties, where they are involved with design, chemical synthesis and development for market of pharmaceutical agents (drugs). History of pharmacology – history of the branch of medicine and biology concerned with the study of drug action. History of natural product chemistry – history of the study of chemical compounds or substances produced by living organisms – those found in nature that usually have a pharmacological or biological activity for use in pharmaceutical drug discovery and drug design. 
History of neurochemistry – history of the specific study of neurochemicals, which include neurotransmitters and other molecules such as neuro-active drugs that influence neuron function. History of computational chemistry – history of the branch of chemistry that uses principles of computer science to assist in solving chemical problems. History of chemo-informatics – history of the use of computer and informational techniques, applied to a range of problems in the field of chemistry. History of molecular mechanics – history of the approach that uses Newtonian mechanics to model molecular systems. History of flavor chemistry – history of the use of chemistry to engineer artificial and natural flavors. History of flow chemistry – history of the technique in which a chemical reaction is run in a continuously flowing stream rather than in batch production. History of geochemistry – history of the study of the mechanisms behind major geological systems using chemistry. History of aqueous geochemistry – history of the study of the role of various elements in watersheds, including copper, sulfur, mercury, and how elemental fluxes are exchanged through atmospheric-terrestrial-aquatic interactions. History of isotope geochemistry – history of the study of the relative and absolute concentrations of the elements and their isotopes using chemistry and geology. History of ocean chemistry – history of the study of the chemistry of marine environments, including the influences of different variables. History of organic geochemistry – history of the study of the impacts and processes that organisms have had on Earth. History of regional, environmental and exploration geochemistry – history of the study of the spatial variation in the chemical composition of materials at the surface of the Earth. History of inorganic chemistry – history of the branch of chemistry concerned with the properties and behavior of inorganic compounds. 
History of nuclear chemistry – history of the subfield of chemistry dealing with radioactivity, nuclear processes and nuclear properties. History of radiochemistry – history of the chemistry of radioactive materials, where radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often within radiochemistry the absence of radioactivity leads to a substance being described as being inactive as the isotopes are stable). History of organic chemistry – history of the study of the structure, properties, composition, reactions, and preparation (by synthesis or by other means) of carbon-based compounds, hydrocarbons, and their derivatives. History of petrochemistry – history of the branch of chemistry that studies the transformation of crude oil (petroleum) and natural gas into useful products or raw materials. History of organometallic chemistry – history of the study of chemical compounds containing bonds between carbon and a metal. History of photochemistry – history of the study of chemical reactions that proceed with the absorption of light by atoms or molecules. 
Gilles Deleuze – Félix Guattari – Ernesto Laclau – Claude Lefort – A Cyborg Manifesto – Reconstructivism Reconstructivism Paulo Freire – John Dewey – Psychoanalytic theory Psychoanalytic theory Félix Guattari – Schizoanalysis – Ecosophy – Luce Irigaray – Teresa de Lauretis – Jacques Lacan – Julia Kristeva – Slavoj Žižek – Sigmund Freud – The Interpretation of Dreams – On Narcissism – Totem and Taboo – Beyond the Pleasure Principle – The Ego and the Id – The Future of an Illusion – Civilization and Its Discontents – Moses and Monotheism – Queer theory Queer theory Judith Butler – Heteronormativity – Eve Kosofsky Sedgwick – Gloria E. Anzaldúa – New Queer Cinema – Queer pedagogy – Semiotics Semiotics Roland Barthes – Julia Kristeva – Charles Sanders Peirce – Ferdinand de Saussure – Cultural anthropology Cultural anthropology René Girard – Theories of identity Private sphere – certain sector of societal life in which an individual enjoys a degree of authority, unhampered by interventions from governmental or other institutions. Examples of the private sphere are family and home. The complement or opposite of public sphere. Public sphere – area in social life where individuals can come together to freely discuss and identify societal problems, and through that discussion influence political action. It is "a discursive space in which individuals and groups congregate to discuss matters of mutual interest and, where possible, to reach a common judgment." Creolization Linguistical theories of literature Mary Louise Pratt – Major works Bloch, Ernst (1938–47). The Principle of Hope Fromm, Erich (1941). The Fear of Freedom (UK)/Escape from Freedom (US) Horkheimer, Max; Adorno, Theodor W. (1944–47) Dialectic of Enlightenment Barthes, Roland (1957). Mythologies Habermas, Jürgen (1962). The Structural Transformation of the Public Sphere Marcuse, Herbert (1964). One-Dimensional Man Adorno, Theodor W. (1966) Negative Dialectics Derrida, Jacques (1967). 
Of Grammatology Derrida, Jacques (1967). Writing and Difference Habermas, Jürgen (1981). The Theory of Communicative Action Major theorists List of critical theorists Theodor Adorno – Max Horkheimer – Louis Althusser – Roland Barthes – Jean Baudrillard – Jacques Lacan – Jacques Derrida – Erich Fromm – Jürgen Habermas – Herbert Marcuse – External links Critical Theory, Stanford Encyclopedia of Philosophy "Theory: Death is Not the End", n+1 magazine's short history of academic critical theory. Winter 2005. Critical Legal Thinking: A critical legal studies website
Stratford-upon-Avon to just south of Bath near Radstock. It lies across the boundaries of several English counties: mainly Gloucestershire and Oxfordshire, and parts of Wiltshire, Somerset, Worcestershire, and Warwickshire. The highest point of the region is Cleeve Hill at , just east of Cheltenham. The hills give their name to the Cotswold local government district, formed on 1 April 1974, which is within the county of Gloucestershire. Its main town is Cirencester, where the Cotswold District Council offices are located. The population of the District was about 83,000 in 2011. The much larger area referred to as the Cotswolds encompasses nearly , over five counties: Gloucestershire, Oxfordshire, Warwickshire, Wiltshire, and Worcestershire. The population of the Area of Outstanding Natural Beauty was 139,000 in 2016. History The largest excavation of Jurassic-era echinoderm fossils, including rare and previously unknown species, occurred at a quarry in the Cotswolds in 2021. There is evidence of Neolithic settlement from burial chambers on Cotswold Edge, and there are remains of Bronze and Iron Age forts. Later the Romans built villas, such as at Chedworth, settlements such as Gloucester, and paved the Celtic path later known as Fosse Way. During the Middle Ages, thanks to the breed of sheep known as the Cotswold Lion, the Cotswolds became prosperous from the wool trade with the continent, with much of the money made from wool directed towards the building of churches. The most successful era for the wool trade was 1250–1350; much of the wool at that time was sold to Italian merchants. The area still preserves numerous large, handsome Cotswold Stone "wool churches". The affluent area in the 21st century has attracted wealthy Londoners and others who own second homes there or have chosen to retire to the Cotswolds. Etymology The name Cotswold is popularly held to mean "sheep enclosure in rolling hillsides", incorporating the term wold, meaning hills. 
Compare also the Weald from the Saxon/German word Wald meaning 'forest'. However, the English Place-Name Society has for many years accepted that the term Cotswold is derived from Codesuualt of the 12th century or other variations on this form, the etymology of which was given as 'Cod's-wold', that is, 'Cod's high open land'. Cod was interpreted as an Old English personal name, which may be recognised in further names: Cutsdean, Codeswellan, and Codesbyrig, some of which date back to the eighth century AD. It has subsequently been noticed that "Cod" could derive philologically from a Brittonic female cognate "Cuda", a hypothetical mother goddess in Celtic mythology postulated to have been worshipped in the Cotswold region. Geography The spine of the Cotswolds runs southwest to northeast through six counties, particularly Gloucestershire, west Oxfordshire and southwestern Warwickshire. The northern and western edges of the Cotswolds are marked by steep escarpments down to the Severn valley and the Warwickshire Avon. This feature, known as the Cotswold escarpment, or sometimes the Cotswold Edge, is a result of the uplifting (tilting) of the limestone layer, exposing its broken edge. This is a cuesta, in geological terms. The dip slope is to the southeast. On the eastern boundary lies the city of Oxford and on the west is Stroud. To the southeast, the upper reaches of the Thames Valley and towns such as Lechlade, Tetbury, and Fairford are often considered to mark the limit of this region. To the south the Cotswolds, with the characteristic uplift of the Cotswold Edge, reach beyond Bath, and towns such as Chipping Sodbury and Marshfield share elements of Cotswold character. The area is characterised by attractive small towns and villages built of the underlying Cotswold stone (a yellow oolitic limestone). This limestone is rich in fossils, particularly of fossilised sea urchins. 
Cotswold towns include Bourton-on-the-Water, Broadway, Burford, Chipping Campden, Chipping Norton, Cricklade, Dursley, Malmesbury, Moreton-in-Marsh, Nailsworth, Northleach, Stow-on-the-Wold, Stroud, Witney, and Winchcombe. In addition, much of Box lies in the Cotswolds. Bath, Cheltenham, Cirencester, Gloucester, Stroud, and Swindon are larger urban centres that border on, or are virtually surrounded by, the Cotswold AONB. The town of Chipping Campden is notable for being the home of the Arts and Crafts movement, founded by William Morris at the end of the 19th and beginning of the 20th centuries. William Morris lived occasionally in Broadway Tower, a folly, now part of a country park. Chipping Campden is also known for the annual Cotswold Olimpick Games, a celebration of sports and games dating back to the early 17th century. Of the nearly of the Cotswolds, roughly eighty percent is farmland. There are over of footpaths and bridleways. There are also of historic stone walls. Economy A 2017 report on employment within the Area of Outstanding Natural Beauty stated that the main sources of income were real estate, renting and business activities, manufacturing, and wholesale and retail trade and repairs. Some 44% of residents were employed in these sectors. Agriculture is also important. Some 86% of the land in the AONB is used for this purpose. The primary crops include barley, beans, oilseed rape and wheat, while the raising of sheep is also important; cows and pigs are also reared. The livestock sector has been declining since 2002, however. According to the 2011 Census data for the Cotswolds, the wholesale and retail trade was the largest employer (15.8% of the workforce), followed by education (9.7%) and health and social work (9.3%). The report also indicates that a relatively higher proportion of residents were working in agriculture, forestry and fishing, accommodation and food services as well as in professional, scientific and technical activities. 
Unemployment in the Cotswold District was among the lowest in the country. A report in August 2017 showed only 315 unemployed persons, a slight decrease of five from a year earlier. Tourism Tourism is a significant part of the economy. The Cotswold District area alone gained over £373 million from visitor spending on accommodation, £157 million on local attractions and entertainments, and about £100m on travel in 2016. In the larger Cotswolds Tourism area, including Stroud, Cheltenham, Gloucester and Tewkesbury, tourism generated about £1 billion in 2016, providing 200,000 jobs. Some 38 million day visits were made to the Cotswold Tourism area that year. Many travel guides direct tourists to Chipping Campden, Stow-on-the-Wold, Bourton-on-the-Water, Broadway, Bibury, and Stanton. Some of these locations can be very crowded at times. Roughly 300,000 people visit Bourton per year, for example, with about half staying for a day or less. The area also has numerous public walking trails and footpaths that attract visitors, including the Cotswold Way (part of the National Trails System) from Bath to Chipping Campden. Housing development In August 2018, the final decision was made for a Local Plan that would lead to the building of nearly 7,000 additional homes by 2031, in addition to over 3,000 already built. Areas for development include Cirencester, Bourton-on-the-Water, Down Ampney, Fairford, Kemble, Lechlade, Northleach, South Cerney, Stow-on-the-Wold, Tetbury and Moreton-in-Marsh. Some of the money received from developers will be earmarked for new infrastructure to support the increasing population. Cotswold stone Cotswold stone is a yellow oolitic Jurassic limestone. This limestone is rich in fossils, particularly of fossilised sea urchins. When weathered, the colour of buildings made or faced with this stone is often described as honey or golden. 
The stone varies in colour from north to south, being honey-coloured in the north and north east of the region, as shown in Cotswold villages such as Stanton and Broadway; golden-coloured in the central and southern areas, as shown in Dursley and Cirencester; and pearly white in Bath. The rock outcrops at places on the Cotswold Edge; small quarries are common. The exposures are rarely sufficiently compact to be good for rock-climbing, but an exception is Castle Rock, on Cleeve Hill, near Cheltenham. Due to the rapid expansion of the Cotswolds in order for nearby areas to capitalize on increased house prices, well known ironstone villages, such as Hook Norton, have even been claimed by some to be in the Cotswolds despite lacking key features of Cotswolds villages such as Cotswold stone and are instead built using a deep red/orange ironstone, known locally as Hornton Stone. In his 1934 book English Journey, J. B. Priestley made this comment about Cotswold buildings made of the local stone. The truth is that it has no colour that can be described. Even when the sun is obscured and the light is cold, these walls are still faintly warm and luminous, as if they knew the trick of keeping the lost sunlight of centuries glimmering about them Area of Outstanding Natural Beauty The Cotswolds were designated as an Area of Outstanding Natural Beauty (AONB) in 1966, with an expansion on 21 December 1990 to . In 1991, all AONBs were measured again using modern methods, and the official area of the Cotswolds AONB was increased to . In 2000, the government confirmed that AONBs have the same landscape quality and status as National Parks. The Cotswolds AONB, which is | back to the eighth century AD. It has subsequently been noticed that "Cod" could derive philologically from a Brittonic female cognate "Cuda", a hypothetical mother goddess in Celtic mythology postulated to have been worshipped in the Cotswold region. 
Geography The spine of the Cotswolds runs southwest to northeast through six counties, particularly Gloucestershire, west Oxfordshire and southwestern Warwickshire. The northern and western edges of the Cotswolds are marked by steep escarpments down to the Severn valley and the Warwickshire Avon. This feature, known as the Cotswold escarpment, or sometimes the Cotswold Edge, is a result of the uplifting (tilting) of the limestone layer, exposing its broken edge. This is a cuesta, in geological terms. The dip slope is to the southeast. On the eastern boundary lies the city of Oxford and on the west is Stroud. To the southeast, the upper reaches of the Thames Valley and towns such as Lechlade, Tetbury, and Fairford are often considered to mark the limit of this region. To the south the Cotswolds, with the characteristic uplift of the Cotswold Edge, reach beyond Bath, and towns such as Chipping Sodbury and Marshfield share elements of Cotswold character. The area is characterised by attractive small towns and villages built of the underlying Cotswold stone (a yellow oolitic limestone). This limestone is rich in fossils, particularly of fossilised sea urchins. Cotswold towns include Bourton-on-the-Water, Broadway, Burford, Chipping Campden, Chipping Norton, Cricklade, Dursley, Malmesbury, Moreton-in-Marsh, Nailsworth, Northleach, Stow-on-the-Wold, Stroud, Witney, and Winchcombe. In addition, much of Box lies in the Cotswolds. Bath, Cheltenham, Cirencester, Gloucester, Stroud, and Swindon are larger urban centres that border on, or are virtually surrounded by, the Cotswold AONB. The town of Chipping Campden is notable for being the home of the Arts and Crafts movement, founded by William Morris at the end of the 19th and beginning of the 20th centuries. William Morris lived occasionally in Broadway Tower, a folly, now part of a country park. 
Chipping Campden is also known for the annual Cotswold Olimpick Games, a celebration of sports and games dating back to the early 17th century. Of the nearly of the Cotswolds, roughly eighty percent is farmland. There are over of footpaths and bridleways. There are also of historic stone walls. Economy A 2017 report on employment within the Area of Outstanding Natural Beauty, stated that the main sources of income were real estate, renting and business activities, manufacturing and wholesale & retail trade repairs. Some 44% of residents were employed in these sectors. Agriculture is also important. Some 86% of the land in the AONB is used for this purpose. The primary crops include barley, beans, rape seed oil and wheat, while the raising of sheep is also important; cows and pigs are also reared. The livestock sector has been declining since 2002, however. According to the 2011 Census data for the Cotswolds, the wholesale and retail trade was the largest employer (15.8% of the workforce), followed by education (9.7%) and health and social work (9.3%). The report also indicates that a relatively higher proportion of residents were working in agriculture, forestry and fishing, accommodation and food services as well as in professional, scientific and technical activities. Unemployment in the Cotswold District was among the lowest in the country. A report in August 2017 showed only 315 unemployed persons, a slight decrease of five from a year earlier. Tourism Tourism is a significant part of the economy. The Cotswold District area alone gained over £373 million from visitor spending on accommodation, £157 million on local attractions and entertainments, and about £100m on travel in 2016. In the larger Cotswolds Tourism area, including Stroud, Cheltenham, Gloucester and Tewkesbury, tourism generated about £1 billion in 2016, providing 200,000 jobs. Some 38 million day visits were made to the Cotswold Tourism area that year. 
Many travel guides direct tourists to Chipping Campden, Stow-on-the-Wold, Bourton-on-the-Water, Broadway, Bibury, and Stanton. Some of these locations can be very crowded at times. Roughly 300,000 people visit Bourton per year, for example, with about half staying for a day or less. The area also has numerous public walking trails and footpaths that attract visitors, including the Cotswold Way (part of the National Trails System) from Bath to Chipping Camden. Housing development In August 2018, the final decision was made for a Local Plan that would lead to the building of nearly 7,000 additional homes by 2031, in addition to over 3,000 already built. Areas for development include Cirencester, Bourton-on-the-Water, Down Ampney, Fairford, Kemble, Lechlade, Northleach, South Cerney, Stow-on-the-Wold, Tetbury and Moreton-in-Marsh. Some of the money received from developers will be earmarked for new infrastructure to support the increasing population. Cotswold stone Cotswold stone is a yellow oolitic Jurassic limestone. This limestone is rich in fossils, particularly of fossilised sea urchins. When weathered, the colour of buildings made or faced with this stone is often described as honey or golden. The stone varies in colour from north to south, being honey-coloured in the north and north east of the region, as shown in Cotswold villages such as Stanton and Broadway; golden-coloured in the central and southern areas, as shown in Dursley and Cirencester; and pearly white in Bath. The rock outcrops at places on the Cotswold Edge; small quarries are common. The exposures are rarely sufficiently compact to be good for rock-climbing, but an exception is Castle Rock, on Cleeve Hill, near Cheltenham. 
As the perceived boundary of the Cotswolds has expanded, allowing nearby areas to capitalise on increased house prices, well-known ironstone villages such as Hook Norton have even been claimed by some to be in the Cotswolds, despite lacking key features of Cotswold villages such as Cotswold stone; they are instead built using a deep red/orange ironstone, known locally as Hornton Stone. In his 1934 book English Journey, J. B. Priestley made this comment about Cotswold buildings made of the local stone: "The truth is that it has no colour that can be described. Even when the sun is obscured and the light is cold, these walls are still faintly warm and luminous, as if they knew the trick of keeping the lost sunlight of centuries glimmering about them." Area of Outstanding Natural Beauty The Cotswolds were designated as an Area of Outstanding Natural Beauty (AONB) in 1966, with an expansion on 21 December 1990. In 1991, all AONBs were measured again using modern methods, and the official area of the Cotswolds AONB was increased. In 2000, the government confirmed that AONBs have the same landscape quality and status as National Parks. The Cotswolds AONB, which is the largest in England and Wales, stretches from the border regions of South Warwickshire and Worcestershire, through West Oxfordshire and Gloucestershire, and takes in parts of Wiltshire and of Bath and North East Somerset in the south. Gloucestershire County Council is responsible for sixty-three percent of the AONB. The Cotswolds Conservation Board has the task of conserving and enhancing the AONB. Established under statute in 2004 as an independent public body, the Board carries out a range of work from securing funding for 'on the ground' conservation projects, to providing a strategic overview of the area for key decision makers, such as planning officials. The Board is funded by Natural England and the seventeen local authorities that are covered by the AONB.
The Cotswolds AONB Management Plan 2018–2023 was adopted by the Board in |
and Milan, the team slowly lost position in the league table. With three matches remaining in the season, Chievo was third-from-last, a position which would see it relegated to Serie B. As a last resort, Beretta was fired and Maurizio D'Angelo, a former Chievo player, was appointed temporarily to replace him as coach. Morale improved, and two wins and a draw from the final three matches proved just enough to keep Chievo in Serie A. In 2005–06, Giuseppe Pillon of Treviso FBC was appointed as the new coach. The team experienced a return to the successful Delneri era, both in style of play and results, ending the season in seventh place and gaining a berth in the UEFA Cup. However, because of the football scandal involving several top-class teams, all of which finished higher than Chievo in the 2005–06 season, the Flying Donkeys were awarded a place in the next Champions League preliminary phase. On 14 July 2006, the verdict in the scandal was made public. Juventus, Milan and Fiorentina, who had all originally qualified for the 2006–07 Champions League, and Lazio, who had originally qualified for the 2006–07 UEFA Cup, were all banned from UEFA competition for the 2006–07 season, although Milan were allowed to enter the Champions League after their appeal to the FIGC. Chievo took up a place in the third qualifying stage of the competition along with Milan and faced Bulgarian side Levski Sofia. Chievo lost the first leg 2–0 in Sofia, managed only a 2–2 draw at home in the second leg, and were eliminated 4–2 on aggregate, with Levski advancing to the Champions League group stage. As a Champions League third round qualifying loser, Chievo was given a place in the UEFA Cup final qualifying round. On 25 August 2006, they were drawn to face Portuguese side Braga. The first leg, played on 14 September in Braga, ended in a 2–0 win for the Portuguese.
The return match, played on 28 September in Verona, although won by Chievo 2–1, resulted in a 3–2 aggregate loss and the club's elimination from the competition. On 16 October 2006, following a 1–0 defeat against Torino, head coach Giuseppe Pillon was fired and replaced by Luigi Delneri, one of the original symbols of the "miracle Chievo", who had led the club to Serie A in 2002. On 27 May 2007, the last match day of the 2006–07 Serie A season, Chievo was one of five teams in danger of falling into the last undecided relegation spot. Needing only a draw against Catania, a direct competitor in the relegation battle, Chievo lost 2–0 playing on a neutral field in Bologna. Wins by Parma, Siena and Reggina condemned Chievo to Serie B for the 2007–08 season after six seasons in the top flight. Even as a relatively successful Serie A team, the club, which averages only 9,000 to 10,000 fans and is kept afloat mainly by money from television rights, does not have as large a following as Hellas, the oldest team in Verona. The difference between the clubs' support was highlighted during the derby games of the 2001–02 season at the clubs' shared stadium: for Chievo's "home" fixtures, the Chievo fans were located in the "away" end of the stadium, the area their main supporters' faction, "North Side", had for years claimed as its own (the side usually assigned to away teams' supporters), while most of the rest of the stadium seats were assigned to Hellas supporters. A year with the Cadetti (2007–08) Chievo bounced back quickly from the disappointment of their relegation on the last matchday of 2006–07, going in search of an immediate promotion back to the top flight. After the expected departure of several top-quality players including Franco Semioli, Salvatore Lanna, Matteo Brighi, Paolo Sammarco and Erjon Bogdani, the manager Delneri also parted ways with the club.
Giuseppe Iachini replaced him, and the captain, Lorenzo D'Anna, gave way to Sergio Pellissier at the end of the transfer window. A new squad was constructed, most notably including the arrivals of midfielders Maurizio Ciaramitaro and Simone Bentivoglio, defender César and forward Antimo Iunco. This new incarnation of the gialloblu were crowned winter champions (along with Bologna), en route to a 41st-matchday promotion after a 1–1 draw at Grosseto left them four points clear of third-place Lecce with one match remaining. In addition to winning promotion, they were conferred with the Ali della Vittoria trophy on the final matchday of the season, their first league title of any kind in 14 years. Back in Serie A (2008–2019) In their first season back in the top flight, Chievo immediately struggled in the league, resulting in the dismissal of Iachini in November and his replacement with former Parma boss Domenico Di Carlo. After Di Carlo's appointment, Chievo managed a remarkable resurgence that led the gialloblu out of the relegation zone after having collected just nine points from their first 17 matches. Highlight matches included a 3–0 defeat of Lazio (who then won the 2008–09 Coppa Italia title) at the Stadio Olimpico, and a thrilling 3–3 draw away to Juventus in which captain and longtime Chievo striker Sergio Pellissier scored a late equaliser to complete his first career hat-trick. A series of hard-fought draws against top clubs Roma, Internazionale and Genoa in the final stretch of the season solidified Chievo's position outside the drop zone, and Serie A status was finally confirmed on matchday 37 with a home draw against Bologna. A largely unchanged line-up earned safety the following season with four matchdays to spare, and Chievo was therefore part of the inaugural Lega Calcio Serie A season in 2010–11, their third consecutive season (and ninth season in the last ten years) in the top flight of Italian football.
Lorenzo D'Anna remained as coach of the club for the 2018–19 season after replacing Rolando Maran during the 2017–18 season. On 13 September, Chievo were deducted 3 points after being found guilty of false accounting on exchanging players with Cesena. President Luca Campedelli was banned for three months as a result of the scheme. Chievo were officially relegated on 14 April 2019 after a 3–1 home loss to Napoli. Serie B years and league exclusion (2019–2021) In July 2021, Chievo was expelled from Serie B for the 2021–22 season for being unable to prove its financial viability due to outstanding tax payments. The club argued that there was an agreement in place during the COVID-19 pandemic that allowed them to spread the payments out over a longer period. However, after three unsuccessful appeals, the decision to bar Chievo Verona from registering to Serie B was upheld, with Cosenza taking their place in Serie B. Over the next month, former captain Sergio Pellissier led the search for a new ownership group to allow a phoenix club to compete in Serie D under the Chievo name. However, on 21 August, Pellissier announced in an Instagram post that no owners were found in time for the Serie D registration deadline. The original Chievo club has in the meantime appealed to the Council of State against its exclusion and is currently registered in no division, albeit still with the right to apply for a spot in an amateur league of Veneto in the following weeks. Campedelli eventually opted to keep the club alive as a youth team for the 2021–22 season, while Pellissier decided instead to found a new club himself, which was admitted to Terza Categoria at the very bottom of the Italian football league system; the club, originally named FC Chievo 2021, was then renamed to FC Clivense following a legal warning from AC ChievoVerona. Historical names 1929 – O.N.D. Chievo (Opera Nazionale Dopolavoro Chievo) 1936 – folded 1948 – refounded as A.C. 
Chievo (Associazione Calcio Chievo) 1960 – A.C. Cardi Chievo (Associazione Calcio Cardi Chievo) 1975 – A.C. Chievo (Associazione Calcio Chievo) 1981 – A.C. Paluani Chievo (Associazione Calcio Paluani Chievo) 1986 – A.C. Chievo (Associazione Calcio Chievo) 1990 – A.C. ChievoVerona (Associazione Calcio ChievoVerona) Retired numbers 30 Jason Mayélé, left/right winger, 2001–2002 (posthumous) 31 Sergio Pellissier, striker, 2000–2019 (retired in recognition of his career) Notable players Note: this list includes players that have reached international status. Francesco Acerbi Amauri Daniel Andersson Simone Barone Andrea Barzagli Erjon Bogdani Oliver Bierhoff Valter Birsa Albano Bizzarri Michael Bradley Matteo Brighi Boštjan Cesar Bernardo Corradi Rinaldo Cruzado Dario Dainelli Boukary Dramé Mauro Esposito Marcelo Estigarribia Ivan Fatić Gelson Fernandes Giannis Fetfatzidis Stefano Fiore Alessandro Gamberini Massimo Gobbi Jonathan de Guzmán Përparim Hetemaj Bojan Jokić Radoslav Kirilov Kamil Kosowski Nicola Legrottaglie Christian Manfredini Jason Mayélé Stephen Makinwa John Mensah Victor Obinna Sergio Pellissier Simone Pepe Simone Perrotta Mauricio Pinilla Giampiero Pinzi Ivan Radovanović Flavio Roma Fredrik Risp Mamadou Samassa Nikos Spyropoulos Samir Ujkani Sauli Väisänen Martin Valjent Mario Yepes See :Category:A.C. ChievoVerona players for all Chievo players. Coaches Colours and badge The club's original colours were blue and white, not the current blue and yellow. The club's historic nickname is Gialloblu (from the club colours of yellow and blue), although throughout Italian football the Verona team recognised in the past by most fans as Gialloblu is Hellas Verona, Chievo's main rival. Local supporters often call the club simply Ceo, which is Venetian for Chievo. The club is now sometimes referred to as I Mussi Volanti ("The Flying Donkeys" in the Verona dialect of Venetian).
"The Flying Donkeys" nickname was originally used by fans from crosstown rivals Hellas to mock Chievo. The two clubs first met in Serie B in the mid-1990s, with Hellas chanting Quando i mussi volara, il Ceo in Serie A — "Donkeys will fly before Chievo are in Serie A." However, once Chievo earned promotion to Serie A at the end of the 2000–01 Serie B season, Chievo fans started to call themselves "The Flying Donkeys". The current club crest represents Cangrande I della Scala, a medieval lord of Verona. Stadium Stadio Marcantonio Bentegodi is a stadium in Verona, Italy. It is also the home of Chievo Verona city rival Hellas. Inaugurated as a state-of-the-art facility and as one of Italy's finest venues in 1963, the stadium appeared excessive for a team (Hellas) that had spent the best part of the previous 35 years in Serie B. For the 1990 FIFA World Cup renovations included an extra tier and a roof to cover all sections, improved visibility, public transport connections, an urban motorway connecting the city centre with the stadium and the Verona Nord motorway exit |
certain amount of time for doing the administration: saving and loading registers and memory maps, updating various tables and lists, and so on. What is actually involved in a context switch depends on the architecture, the operating system, and the number of resources shared (threads that belong to the same process share many resources compared to unrelated non-cooperating processes). For example, in the Linux kernel, context switching involves loading the corresponding process control block (PCB) stored in the PCB table in the kernel stack to retrieve information about the state of the new process. CPU state information including the registers, stack pointer, and program counter, as well as memory management information like segmentation tables and page tables (unless the old process shares the memory with the new), are loaded from the PCB for the new process. To avoid incorrect address translation in the case of the previous and current processes using different memory, the translation lookaside buffer (TLB) must be flushed. This negatively affects performance, because every memory reference will miss in the TLB while it is empty after most context switches. Furthermore, analogous context switching happens between user threads, notably green threads, and is often very lightweight, saving and restoring minimal context. In extreme cases, such as switching between goroutines in Go, a context switch is equivalent to a coroutine yield, which is only marginally more expensive than a subroutine call. Switching cases There are three potential triggers for a context switch: Multitasking Most commonly, within some scheduling scheme, one process must be switched out of the CPU so another process can run. This context switch can be triggered by the process making itself unrunnable, such as by waiting for an I/O or synchronization operation to complete. On a pre-emptive multitasking system, the scheduler may also switch out processes that are still runnable.
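The lightweight user-level switching mentioned above (green threads, goroutine-style yields) can be sketched with Python generators, where each yield suspends a task and saves only the generator's own frame. This is a minimal cooperative illustration, not real kernel code; the worker and run_round_robin names are invented for the example:

```python
# Cooperative "green thread" switching with Python generators:
# each yield hands control back to the scheduler, saving only the
# generator's frame -- far cheaper than a kernel-level context switch.

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"   # the voluntary "context switch" point

def run_round_robin(tasks):
    """Interleave tasks, switching at every yield."""
    trace = []
    while tasks:
        task = tasks.pop(0)          # pick the next runnable task
        try:
            trace.append(next(task)) # resume it until its next yield
            tasks.append(task)       # still runnable: re-queue it
        except StopIteration:
            pass                     # task finished; drop it
    return trace

trace = run_round_robin([worker("A", 2), worker("B", 2)])
# the two tasks interleave: ['A:0', 'B:0', 'A:1', 'B:1']
```

Because no kernel transition, register save, or TLB flush is involved, each switch here costs little more than a function call, which is the point the text makes about goroutines.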
To prevent other processes from being starved of CPU time, pre-emptive schedulers often configure a timer interrupt to fire when a process exceeds its time slice. This interrupt ensures that the scheduler will gain control to perform a context switch. Interrupt handling Modern architectures are interrupt driven. This means that if the CPU requests data from a disk, for example, it does not need to busy-wait until the read is over; it can issue the request (to the I/O device) and continue with some other task. When the read is over, the CPU can be interrupted (by hardware in this case, which sends an interrupt request to the PIC) and presented with the read. For interrupts, a program called an interrupt handler is installed, and it is the interrupt handler that handles the interrupt from the disk. When an interrupt occurs, the hardware automatically switches a part of the context (at least enough to allow the handler to return to the interrupted code). The handler may save additional context, depending on details of the particular hardware and software designs. Often only a minimal part of the context is changed in order to minimize the amount of time spent handling the interrupt. The kernel does not spawn or schedule a special process to handle interrupts; instead, the handler executes in the (often partial) context established at the beginning of interrupt handling. Once interrupt servicing is complete, the context in effect before the interrupt occurred is restored so that the interrupted process can resume execution in its proper state.
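The pre-emptive, time-sliced scheduling described above (a ready queue of processes, each with saved state that is restored on dispatch and saved again when the timer fires) can be modelled with a toy simulation. The "PCB" here is just a dictionary holding a saved program counter; the schedule function and its field names are assumptions made for the illustration, not any real kernel API:

```python
from collections import deque

# Toy model of pre-emptive multitasking: each "PCB" records a saved
# program counter; the scheduler "restores" it, runs one time slice,
# then "saves" it back and re-queues the process if still runnable.

def schedule(pcbs, time_slice=2):
    ready = deque(pcbs)              # the ready queue
    log = []
    while ready:
        pcb = ready.popleft()        # dispatch: restore this PCB
        for _ in range(time_slice):  # run until the "timer interrupt"
            if pcb["pc"] >= pcb["length"]:
                break                # process has no work left
            log.append((pcb["pid"], pcb["pc"]))
            pcb["pc"] += 1           # execute one instruction
        if pcb["pc"] < pcb["length"]:
            ready.append(pcb)        # save context and re-queue
    return log

log = schedule([{"pid": "P1", "pc": 0, "length": 3},
                {"pid": "P2", "pc": 0, "length": 2}])
# [('P1', 0), ('P1', 1), ('P2', 0), ('P2', 1), ('P1', 2)]
```

The saved pc is the minimum state a real context switch preserves; an actual kernel also saves the full register set and memory-management state, as the surrounding text describes.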
User and kernel mode switching When the system transitions between user mode and kernel mode, a context switch is not necessary; a mode transition is not by itself a context switch. However, depending on the operating system, a context switch may also take place at this time. Steps The state of the currently executing process must be saved so it can be restored when rescheduled for execution. The process state includes all the registers that the process may be using, especially the program counter, plus any other operating system specific data that may be necessary. This is usually stored in a data structure called a process control block (PCB) or switchframe. The PCB might be stored on a per-process stack in kernel memory (as opposed to the user-mode call stack), or there may be some specific operating system-defined data structure for this information. A handle to the PCB is added to a queue of processes that are ready to run, often called the ready queue. Since the operating system has effectively suspended the execution of one process, it can then switch context by choosing a process from the ready queue and restoring its PCB. In |
a 74-gun third rate ship of the line of the Royal Navy, launched at Deptford in 1783; a 74-gun third rate ship of the line of the Royal Navy, launched at Portsmouth Dockyard in 1823; the classical music of Southern India. Carnatic may also refer to: Carnatic Wars, a series of military conflicts in India during the 18th century; a Bangor-class minesweeper of the Royal Indian Navy,
as in the "wheel of time" or "wheel of dharma", such as in Rigveda hymn verse 1.164.11, pervasive in the earliest Vedic texts. In Buddhism, especially in Theravada, the Pali noun cakka connotes "wheel". Within the central Tripitaka, the Buddha variously refers to the "dhammacakka", or "wheel of dharma", connoting that this dharma, universal in its advocacy, should bear the marks characteristic of any temporal dispensation. The Buddha spoke of freedom from cycles in and of themselves, whether karmic, reincarnative, liberative, cognitive or emotional. In Jainism, the term chakra also means "wheel" and appears in various contexts in its ancient literature. As in other Indian religions, chakra in esoteric theories in Jainism, such as those by Buddhisagarsuri, means a yogic energy center. Ancient history The term chakra appears to first emerge within the Hindu Vedas, though not precisely in the sense of psychic energy centers, but rather as chakravartin, the king who "turns the wheel of his empire" in all directions from a center, representing his influence and power. The iconography popular in representing the chakras, states the scholar David Gordon White, traces back to the five symbols of yajna, the Vedic fire altar: "square, circle, triangle, half moon and dumpling". The hymn 10.136 of the Rigveda mentions a renunciate yogi with a female named kunamnama. Literally, it means "she who is bent, coiled", representing both a minor goddess and one of many embedded enigmas and esoteric riddles within the Rigveda. Some scholars, such as White and Georg Feuerstein, interpret that this might be related to kundalini shakti, and an overt overture to the terms of esotericism that would later emerge in Post-Aryan Brahmanism. Breath channels (nāḍi) are mentioned in the classical Upanishads of Hinduism from the 1st millennium BCE, but not psychic-energy chakra theories.
The three classical nadis are Ida, Pingala and Sushumna, of which the central channel, Sushumna, is said to be foremost according to the Kṣurikā Upaniṣad. The latter, states David Gordon White, were introduced about the 8th century CE in Buddhist texts as hierarchies of inner energy centers, such as in the Hevajra Tantra and Caryāgiti. These are called by various terms such as cakka, padma (lotus) or pitha (mound). These medieval Buddhist texts mention only four chakras, while later Hindu texts such as the Kubjikāmata and Kaulajñānanirnaya expanded the list to many more. In contrast to White, according to Feuerstein, early Upanishads of Hinduism do mention chakras in the sense of "psychospiritual vortices", along with other terms found in tantra: prana or vayu (life energy) along with nadi (energy-carrying arteries). According to Gavin Flood, the ancient texts do not present chakra and kundalini-style yoga theories, although these words appear in the earliest Vedic literature in many contexts. The chakra in the sense of four or more vital energy centers appears in the medieval-era Hindu and Buddhist texts. Overview The chakras are part of esoteric medieval-era beliefs about physiology and psychic centers that emerged across Indian traditions. The belief held that human life simultaneously exists in two parallel dimensions, one the "physical body" (sthula sarira) and the other the "psychological, emotional, mental, non-physical" body, called the "subtle body" (sukshma sarira). This subtle body is energy, while the physical body is mass. The psyche or mind plane corresponds to and interacts with the body plane, and the belief holds that the body and the mind mutually affect each other. The subtle body consists of nadi (energy channels) connected by nodes of psychic energy called chakra. The belief grew into extensive elaboration, with some suggesting 88,000 chakras throughout the subtle body.
The number of major chakras varied between traditions, but they typically ranged between four and seven. Nyingmapa Vajrayana Buddhist teachings mention eight chakras, and there is a complete yogic system for each of them. The important chakras are stated in Hindu and Buddhist texts to be arranged in a column along the spinal cord, from its base to the top of the head, connected by vertical channels. The tantric traditions sought to master them, awakening and energizing them through various breathing exercises or with the assistance of a teacher. These chakras were also symbolically mapped to specific human physiological capacities, seed syllables (bija), sounds, subtle elements (tanmatra), in some cases deities, colors and other motifs. Belief in the chakra system of Hinduism and Buddhism differs from the historic Chinese system of meridians in acupuncture. Unlike the latter, the chakra relates to the subtle body, in which it has a position but no definite nervous node or precise physical connection. The tantric systems envision it as continually present, highly relevant and a means to psychic and emotional energy. It plays a role in certain yogic rituals and in the meditative discovery of radiant inner energy (prana flows) and mind-body connections. The meditation is aided by extensive symbology, mantras, diagrams and models (deity and mandala). The practitioner proceeds step by step from perceptible models to increasingly abstract models, where deity and external mandala are abandoned and the inner self and internal mandalas are awakened. These ideas are not unique to Hindu and Buddhist traditions. Similar and overlapping concepts emerged in other cultures in the East and the West, and these are variously called by other names such as subtle body, spirit body, esoteric anatomy, sidereal body and etheric body.
According to Geoffrey Samuel and Jay Johnston, professors of Religious studies known for their studies on Yoga and esoteric traditions: Contrast with classical yoga Chakra and related beliefs have been important to the esoteric traditions, but they are not directly related to mainstream yoga. According to the Indologist Edwin Bryant and other scholars, the goals of classical yoga, such as spiritual liberation (freedom, self-knowledge, moksha), are "attained entirely differently in classical yoga, and the cakra / nadi / kundalini physiology is completely peripheral to it." Classical traditions The classical eastern traditions, particularly those that developed in India during the 1st millennium AD, primarily describe nadi and chakra in a "subtle body" context. To them, they exist in the same dimension as the psyche-mind reality, invisible yet real. In the nadi and cakra flow the prana (breath, life energy). The concept of "life energy" varies between the texts, ranging from simple inhalation-exhalation to far more complex association with breath-mind-emotions-sexual energy. This prana or essence is what vanishes when a person dies, leaving a gross body. Some versions of this concept state that the subtle body is what withdraws within when one sleeps. All of it is believed to be reachable, awake-able and important for an individual's body-mind health, and how one relates to other people in one's life. This subtle body network of nadi and chakra is, according to some later Indian theories and many new age speculations, closely associated with emotions. Hindu Tantra Esoteric traditions in Hinduism mention numerous numbers and arrangements of chakras, of which a classical system of six-plus-one, the last being the Sahasrara, is most prevalent. This seven-part system, central to the core texts of hatha yoga, is one among many systems found in Hindu tantric literature.
Hindu Tantra associates six Yoginis with six places in the subtle body, corresponding to the six chakras of the six-plus-one system. The Chakra methodology is extensively developed in the goddess tradition of Hinduism called Shaktism. It is an important concept along with yantras, mandalas and kundalini yoga in its practice. Chakra in Shakta tantrism means circle, an "energy center" within, as well as being a term for group rituals such as in chakra-puja (worship within a circle) which may or may not involve tantra practice. The cakra-based system is a part of the meditative exercises that came to be known as yoga. Buddhist Tantra The esoteric traditions in Buddhism generally teach four chakras. In some early Buddhist sources, these chakras are identified as: manipura (navel), anahata (heart), vishuddha (throat) and ushnisha kamala (crown). In one development within the Nyingma lineage of the Mantrayana of Tibetan Buddhism a popular conceptualization of chakras in increasing subtlety and increasing order is as follows: Nirmanakaya (gross self), Sambhogakaya (subtle self), Dharmakaya (causal self), and Mahasukhakaya (non-dual self), each vaguely and indirectly corresponding to the categories within the Shaiva Mantramarga universe, i.e., Svadhisthana, Anahata, Visuddha, Sahasrara, etc. However, depending on the meditational tradition, these vary between three and six. The chakras are considered psycho-spiritual constituents, each bearing meaningful correspondences to cosmic processes and their postulated Buddha counterpart.
A system of five chakras is common among the Mother class of Tantras; these five chakras and their correspondences are: Basal chakra (Element: Earth, Buddha: Amoghasiddhi, Bija mantra: LAM) Abdominal chakra (Element: Water, Buddha: Ratnasambhava, Bija mantra: VAM) Heart chakra (Element: Fire, Buddha: Akshobhya, Bija mantra: RAM) Throat chakra (Element: Wind, Buddha: Amitabha, Bija mantra: YAM) Crown chakra (Element: Space, Buddha: Vairochana, Bija mantra: KHAM) Chakras clearly play a key role in Tibetan Buddhism and are considered pivotal to tantric thinking. Their pervasive use across the gamut of tantric sadhanas underlines how central they are to the tradition: without Tantra there would be no chakras, and without chakras there would be no Tibetan Buddhism. The highest practices in Tibetan Buddhism point to the ability to bring the subtle pranas of an entity into alignment with the central channel, and thus to penetrate the realisation of ultimate unity, namely the "organic harmony" of one's individual consciousness of Wisdom with the co-attainment of All-embracing Love, synthesizing a direct cognition of absolute Buddhahood. According to Geoffrey Samuel, the Buddhist esoteric systems developed cakra and nadi as "central to their soteriological process". The theories were sometimes, but not always, coupled with a unique system of physical exercises, called yantra yoga or phrul khor. Chakras, according to the Bon tradition, enable the gestalt of experience, with each of the five major chakras being
can be used as an expansion of IVF to increase the number of available embryos. If both embryos are successful, it gives rise to monozygotic (identical) twins. Dolly the sheep Dolly, a Finn-Dorset ewe, was the first mammal to have been successfully cloned from an adult somatic cell. Dolly was formed by taking a cell from the udder of her 6-year-old biological mother. Dolly's embryo was created by taking the cell and inserting it into a sheep ovum. It took 435 attempts before an embryo was successful. The embryo was then placed inside a female sheep that went through a normal pregnancy. She was cloned at the Roslin Institute in Scotland by British scientists Sir Ian Wilmut and Keith Campbell and lived there from her birth in 1996 until her death in 2003, when she was six. She was born on 5 July 1996 but not announced to the world until 22 February 1997. Her stuffed remains were placed at Edinburgh's Royal Museum, part of the National Museums of Scotland. Dolly was publicly significant because the effort showed that genetic material from a specific adult cell, programmed to express only a distinct subset of its genes, can be reprogrammed to grow an entirely new organism. Before this demonstration, John Gurdon had shown that nuclei from differentiated cells could give rise to an entire organism after transplantation into an enucleated egg. However, this had not yet been demonstrated in a mammalian system. The first mammalian cloning (resulting in Dolly) yielded 29 embryos from 277 fertilized eggs, which produced three lambs at birth, one of which lived. In a bovine experiment involving 70 cloned calves, one-third of the calves died quite young. The first successfully cloned horse, Prometea, took 814 attempts. Notably, although the first clones were frogs, no adult cloned frog has yet been produced from a somatic adult nucleus donor cell. There were early claims that Dolly had pathologies resembling accelerated aging.
Scientists speculated that Dolly's death in 2003 was related to the shortening of telomeres, DNA-protein complexes that protect the ends of linear chromosomes. However, other researchers, including Ian Wilmut, who led the team that successfully cloned Dolly, argue that Dolly's early death due to respiratory infection was unrelated to problems with the cloning process. The idea that the nuclei have not irreversibly aged was shown in 2013 to be true for mice. Dolly was named after performer Dolly Parton because the cell cloned to make her was a mammary gland cell, and Parton is known for her ample cleavage. Species cloned Modern cloning techniques involving nuclear transfer have been successfully performed on several species. Notable experiments include: Tadpole: (1952) Robert Briggs and Thomas J. King successfully cloned northern leopard frogs: thirty-five complete embryos and twenty-seven tadpoles from one hundred and four successful nuclear transfers. Carp: (1963) In China, embryologist Tong Dizhou produced the world's first cloned fish by inserting the DNA from a cell of a male carp into an egg from a female carp. He published the findings in a Chinese science journal. Zebrafish: (1981) The first vertebrate cloned, by George Streisinger. Sheep: (1984) The first mammal cloned, from early embryonic cells, by Steen Willadsen. Megan and Morag were cloned from differentiated embryonic cells in June 1995, and Dolly from a somatic cell in 1996. Mice: (1986) A mouse was successfully cloned from an early embryonic cell. Soviet scientists Chaylakhyan, Veprencev, Sviridova, and Nikitin cloned the mouse "Masha". The research was published in the journal Biofizika, volume XXXII, issue 5, 1987. Rhesus monkey: Tetra (January 2000), from embryo splitting rather than nuclear transfer; more akin to artificial formation of twins. Pig: the first cloned pigs (March 2000). By 2014, BGI in China was producing 500 cloned pigs a year to test new medicines.
Gaur: (2001) The first endangered species cloned. Cattle: Alpha and Beta (males, 2001) and (2005) Brazil Cat: CopyCat "CC" (female, late 2001); Little Nicky (2004) was the first cat cloned for commercial reasons Rat: Ralph, the first cloned rat (2003) Mule: Idaho Gem, a john mule born 4 May 2003, was the first horse-family clone. Horse: Prometea, a Haflinger female born 28 May 2003, was the first horse clone. Dog: Snuppy, a male Afghan hound, was the first cloned dog (2005). In 2017, the world's first gene-edited clone dog, Apple, was created by Sinogene Biotechnology. Wolf: Snuwolf and Snuwolffy, the first two cloned female wolves (2005). Water buffalo: Samrupa was the first cloned water buffalo. It was born on 6 February 2009 at India's Karnal National Dairy Research Institute but died five days later due to lung infection. Pyrenean ibex: (2009) The first extinct animal to be cloned back to life; the clone lived for seven minutes before dying of lung defects. Camel: (2009) Injaz was the first cloned camel. Pashmina goat: (2012) Noori is the first cloned pashmina goat. Scientists at the faculty of veterinary sciences and animal husbandry of Sher-e-Kashmir University of Agricultural Sciences and Technology of Kashmir successfully cloned the first pashmina goat (Noori) using advanced reproductive techniques under the leadership of Riaz Ahmad Shah. Goat: (2001) Scientists at Northwest A&F University successfully cloned the first goat using an adult female cell. Gastric brooding frog: (2013) The gastric brooding frog, Rheobatrachus silus, thought to have been extinct since 1983, was cloned in Australia, although the embryos died after a few days. Macaque monkey: (2017) The first successful cloning of a primate species using nuclear transfer, with the birth of two live clones named Zhong Zhong and Hua Hua, conducted in China in 2017 and reported in January 2018.
In January 2019, scientists in China reported the creation of five identical cloned gene-edited monkeys, using the same cloning technique that was used with Zhong Zhong and Hua Hua and Dolly the sheep, and the gene-editing CRISPR-Cas9 technique allegedly used by He Jiankui in creating the first ever gene-modified human babies, Lulu and Nana. The monkey clones were made to study several medical diseases. Black-footed ferret: (2020) In 2020, a team of scientists cloned a female named Willa, who died in the mid-1980s and left no living descendants. Her clone, a female named Elizabeth Ann, was born on December 10. Scientists hope that the contribution of this individual will alleviate the effects of inbreeding and help black-footed ferrets better cope with plague. Experts estimate that this female's genome contains three times as much genetic diversity as any of the modern black-footed ferrets. Human cloning Human cloning is the creation of a genetically identical copy of a human. The term is generally used to refer to artificial human cloning, which is the reproduction of human cells and tissues. It does not refer to the natural conception and delivery of identical twins. The possibility of human cloning has raised controversies. These ethical concerns have prompted several nations to pass legislation regarding human cloning and its legality. At present, scientists have no intention of trying to clone people, and they believe their results should spark a wider discussion about the laws and regulations the world needs to regulate cloning. Two commonly discussed types of theoretical human cloning are therapeutic cloning and reproductive cloning. Therapeutic cloning would involve cloning cells from a human for use in medicine and transplants; it is an active area of research but is not in medical practice anywhere in the world.
Two common methods of therapeutic cloning that are being researched are somatic-cell nuclear transfer and, more recently, pluripotent stem cell induction. Reproductive cloning would involve making an entire cloned human, instead of just specific cells or tissues. Ethical issues of cloning There are a variety of ethical positions regarding the possibilities of cloning, especially human cloning. While many of these views are religious in origin, the questions raised by cloning are faced by secular perspectives as well. Perspectives on human cloning are theoretical, as human therapeutic and reproductive cloning are not commercially used; animals are currently cloned in laboratories and in livestock production. Advocates support development of therapeutic cloning to generate tissues and whole organs to treat patients who otherwise cannot obtain transplants, to avoid the need for immunosuppressive drugs, and to stave off the effects of aging. Advocates for reproductive cloning believe that parents who cannot otherwise procreate should have access to the technology. Opponents of cloning have concerns that the technology is not yet developed enough to be safe and that it could be prone to abuse (leading to the generation of humans from whom organs and tissues would be harvested), as well as concerns about how cloned individuals could integrate with families and with society at large. Religious groups are divided, with some opposing the technology as usurping "God's place" and, to the extent embryos are used, destroying a human life; others support therapeutic cloning's potential life-saving benefits. Cloning of animals is opposed by animal groups due to the number of cloned animals that suffer from malformations before they die, and while food from cloned animals has been approved by the US FDA, its use is opposed by groups concerned about food safety.
Cloning extinct and endangered species Cloning, or more precisely, the reconstruction of functional DNA from extinct species has, for decades, been a dream. Possible implications of this were dramatized in the 1984 novel Carnosaur and the 1990 novel Jurassic Park. The best current cloning techniques have an average success rate of 9.4 percent (and as high as 25 percent) when working with familiar species such as mice, while cloning wild animals is usually less than 1 percent successful. Several tissue banks have come into existence, including the "Frozen zoo" at the San Diego Zoo, to store frozen tissue from the world's rarest and most endangered species. This is also referred to as "conservation cloning". In 2001, a cow named Bessie gave birth to a cloned Asian gaur, an endangered species, but the calf died after two days. In 2003, a banteng was successfully cloned, followed by three African wildcats from a thawed frozen embryo. These successes provided hope that similar techniques (using surrogate mothers of another species) might be used to clone extinct species. Anticipating this possibility, tissue samples from | However, a number of other features are needed, and a variety of specialised cloning vectors (small pieces of DNA into which a foreign DNA fragment can be inserted) exist that allow protein production, affinity tagging, single-stranded RNA or DNA production and a host of other molecular biology tools. Cloning of any DNA fragment essentially involves four steps: fragmentation – breaking apart a strand of DNA; ligation – gluing together pieces of DNA in a desired sequence; transfection – inserting the newly formed pieces of DNA into cells; and screening/selection – selecting out the cells that were successfully transfected with the new DNA. Although these steps are invariable among cloning procedures, a number of alternative routes can be selected; these are summarized as a cloning strategy.
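The fragmentation and ligation steps above can be sketched with a toy Python example using plain string operations; the sequences are invented for illustration, the single enzyme (EcoRI) is an illustrative choice, and sticky ends are simplified to plain string joins:

```python
# Toy sketch of fragmentation and ligation from the four-step outline above.
# EcoRI's recognition site (GAATTC, cut after the first G) is real; the
# sequences below are invented for illustration only.

ECORI_SITE = "GAATTC"
CUT_OFFSET = 1  # EcoRI cuts between G and AATTC

def digest(seq: str) -> list[str]:
    """Fragmentation: cut a linear sequence at every EcoRI site."""
    fragments, start = [], 0
    pos = seq.find(ECORI_SITE)
    while pos != -1:
        fragments.append(seq[start:pos + CUT_OFFSET])
        start = pos + CUT_OFFSET
        pos = seq.find(ECORI_SITE, start)
    fragments.append(seq[start:])
    return fragments

def ligate(vector_arms: list[str], insert: str) -> str:
    """Ligation: join the two vector arms around the insert of interest."""
    return vector_arms[0] + insert + vector_arms[1]

vector = "ATGCCGAATTCTTAA"          # vector with one EcoRI site
donor = "GGGAATTCAAACCCGAATTCTTT"   # fragment of interest between two sites

arms = digest(vector)               # two vector arms
gene = digest(donor)[1]             # middle fragment carries the insert
recombinant = ligate(arms, gene)
print(recombinant)                  # ATGCCGAATTCAAACCCGAATTCTTAA
```

Transfection and screening/selection are wet-lab steps with no string analogue; the point of the sketch is only that a cloning strategy composes these operations in a fixed sequence.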
Initially, the DNA of interest needs to be isolated to provide a DNA segment of suitable size. Subsequently, a ligation procedure is used where the amplified fragment is inserted into a vector (piece of DNA). The vector (which is frequently circular) is linearised using restriction enzymes and incubated with the fragment of interest under appropriate conditions with an enzyme called DNA ligase. Following ligation, the vector with the insert of interest is transfected into cells. A number of alternative techniques are available, such as chemical sensitisation of cells, electroporation, optical injection and biolistics. Finally, the transfected cells are cultured. As the aforementioned procedures are of particularly low efficiency, there is a need to identify the cells that have been successfully transfected with the vector construct containing the desired insertion sequence in the required orientation. Modern cloning vectors include selectable antibiotic resistance markers, which allow only cells in which the vector has been transfected to grow. Additionally, the cloning vectors may contain colour selection markers, which provide blue/white screening (α-complementation of β-galactosidase) on X-gal medium. Nevertheless, these selection steps do not absolutely guarantee that the DNA insert is present in the cells obtained. Further investigation of the resulting colonies is required to confirm that cloning was successful. This may be accomplished by means of PCR, restriction fragment analysis and/or DNA sequencing. Cell cloning Cloning unicellular organisms Cloning a cell means to derive a population of cells from a single cell. In the case of unicellular organisms such as bacteria and yeast, this process is remarkably simple and essentially only requires the inoculation of the appropriate medium. However, in the case of cell cultures from multi-cellular organisms, cell cloning is an arduous task, as these cells will not readily grow in standard media.
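The two-stage selection just described (an antibiotic resistance marker, then blue/white screening on X-gal) amounts to two successive filters, which can be modeled with a minimal Python sketch; the Colony fields are hypothetical labels for illustration, not a real assay:

```python
# Minimal sketch, assuming transfection state and insert presence are known
# booleans: antibiotic selection keeps only cells carrying the vector, and
# blue/white screening flags colonies whose insert disrupted the lacZ reporter.

from dataclasses import dataclass

@dataclass
class Colony:
    has_vector: bool   # took up a plasmid carrying the resistance marker
    has_insert: bool   # insert disrupted lacZ, so no blue pigment forms

def survives_antibiotic(c: Colony) -> bool:
    return c.has_vector

def colour_on_xgal(c: Colony) -> str:
    return "white" if c.has_insert else "blue"

plate = [Colony(False, False), Colony(True, False), Colony(True, True)]
survivors = [c for c in plate if survives_antibiotic(c)]
candidates = [c for c in survivors if colour_on_xgal(c) == "white"]
print(len(survivors), len(candidates))  # 2 1
```

As the text notes, the white candidates still need confirmation by PCR, restriction fragment analysis or sequencing, since these screens do not guarantee the insert's presence or orientation.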
A useful tissue culture technique used to clone distinct lineages of cell lines involves the use of cloning rings (cylinders). In this technique, a single-cell suspension of cells that have been exposed to a mutagenic agent or drug used to drive selection is plated at high dilution to create isolated colonies, each potentially arising from a single, clonally distinct cell. At an early growth stage, when colonies consist of only a few cells, sterile polystyrene rings (cloning rings), which have been dipped in grease, are placed over an individual colony and a small amount of trypsin is added. Cloned cells are collected from inside the ring and transferred to a new vessel for further growth. Cloning stem cells Somatic-cell nuclear transfer, popularly known as SCNT, can also be used to create embryos for research or therapeutic purposes. The most likely purpose for this is to produce embryos for use in stem cell research. This process is also called "research cloning" or "therapeutic cloning". The goal is not to create cloned human beings (called "reproductive cloning"), but rather to harvest stem cells that can be used to study human development and to potentially treat disease. While a clonal human blastocyst has been created, stem cell lines are yet to be isolated from a clonal source. Therapeutic cloning is achieved by creating embryonic stem cells in the hope of treating diseases such as diabetes and Alzheimer's. The process begins by removing the nucleus (containing the DNA) from an egg cell and inserting a nucleus from the adult cell to be cloned. In the case of someone with Alzheimer's disease, the nucleus from a skin cell of that patient is placed into an empty egg. The reprogrammed cell begins to develop into an embryo because the egg reacts with the transferred nucleus. The embryo will be genetically identical to the patient. The embryo will then form a blastocyst, which has the potential to form/become any cell in the body.
SCNT is used for cloning because somatic cells can be easily acquired and cultured in the lab. This process can either add or delete specific genomes of farm animals. A key point is that cloning is achieved when the oocyte maintains its normal functions and, instead of sperm and egg genomes combining, the donor's somatic cell nucleus is inserted into the oocyte. The oocyte reacts to the somatic cell nucleus the same way it would to a sperm cell's nucleus. The process of cloning a particular farm animal using SCNT is much the same for all animals. The first step is to collect the somatic cells from the animal that will be cloned. The somatic cells can be used immediately or stored in the laboratory for later use. The hardest part of SCNT is removing the maternal DNA from an oocyte at metaphase II. Once this has been done, the somatic nucleus can be inserted into the egg cytoplasm, creating a one-cell embryo. The grouped somatic cell and egg cytoplasm are then exposed to an electrical current. This energy will hopefully allow the cloned embryo to begin development. The successfully developed embryos are then placed in surrogate recipients, such as a cow or sheep in the case of farm animals. SCNT is seen as a good method for producing agricultural animals for food consumption. It has been used to successfully clone sheep, cattle, goats, and pigs. SCNT is also seen as a possible way to clone endangered species on the verge of extinction. However, the stresses placed on both the egg cell and the introduced nucleus can be enormous, which led to a high loss of resulting cells in early research. For example, the cloned sheep Dolly was born after 277 eggs were used for SCNT, which created 29 viable embryos. Only three of these embryos survived until birth, and only one survived to adulthood. As the procedure could not be automated and had to be performed manually under a microscope, SCNT was very resource intensive.
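The attrition in the Dolly figures quoted above (277 eggs, 29 viable embryos, three births, one adult) can be made concrete with a short per-stage calculation:

```python
# Per-stage and overall success rates for the SCNT numbers cited in the text.
stages = [("eggs", 277), ("viable embryos", 29), ("births", 3), ("adults", 1)]

for (prev_name, prev_n), (name, n) in zip(stages, stages[1:]):
    print(f"{prev_name} -> {name}: {n}/{prev_n} = {n / prev_n:.1%}")

overall = stages[-1][1] / stages[0][1]
print(f"overall: 1/277 = {overall:.2%}")
```

Roughly one egg in 277 (about 0.36 percent) produced an adult, which is the sense in which early SCNT was so resource intensive.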
The biochemistry involved in reprogramming the differentiated somatic cell nucleus and activating the recipient egg was also far from well understood. However, by 2014 researchers were reporting cloning success rates of seven to eight out of ten, and in 2016 the Korean company Sooam Biotech was reported to be producing 500 cloned embryos per day. In SCNT, not all of the donor cell's genetic information is transferred, as the donor cell's mitochondria, which contain their own mitochondrial DNA, are left behind. The resulting hybrid cells retain the mitochondrial structures that originally belonged to the egg. As a consequence, clones such as Dolly that are born from SCNT are not perfect copies of the donor of the nucleus. Organism cloning Organism cloning (also called reproductive cloning) refers to the procedure of creating a new multicellular organism, genetically identical to another. In essence, this form of cloning is an asexual method of reproduction, where fertilization or inter-gamete contact does not take place. Asexual reproduction is a naturally occurring phenomenon in many species, including most plants and some insects. Scientists have made some major achievements with cloning, including the asexual reproduction of sheep and cows. There is much ethical debate over whether or not cloning should be used. However, cloning, or asexual propagation, has been common practice in the horticultural world for hundreds of years. Horticultural The term clone is used in horticulture to refer to descendants of a single plant which were produced by vegetative reproduction or apomixis. Many horticultural plant cultivars are clones, having been derived from a single individual, multiplied by some process other than sexual reproduction. As an example, some European cultivars of grapes represent clones that have been propagated for over two millennia. Other examples are potato and banana.
Grafting can be regarded as cloning, since all the shoots and branches coming from the graft are genetically a clone of a single individual, but this particular kind of cloning has not come under ethical scrutiny and is generally treated as an entirely different kind of operation. Many trees, shrubs, vines, ferns and other herbaceous perennials form clonal colonies naturally. Parts of an individual plant may become detached by fragmentation and grow on to become separate clonal individuals. A common example is the vegetative reproduction of moss and liverwort gametophyte clones by means of gemmae. Some vascular plants, e.g. dandelion and certain viviparous grasses, also form seeds asexually, termed apomixis, resulting in clonal populations of genetically identical individuals. Parthenogenesis Clonal derivation exists in nature in some animal species and is referred to as parthenogenesis (reproduction of an organism by itself without a mate). This is an asexual form of reproduction that is only found in females of some insects, crustaceans, nematodes, fish (for example the hammerhead shark), and lizards including the Komodo dragon and several whiptails. The growth and development occur without fertilization by a male. In plants, parthenogenesis means the development of an embryo from an unfertilized egg cell and is a component process of apomixis. In species that use the XY sex-determination system, the offspring will always be female. An example is the little fire ant (Wasmannia auropunctata), which is native to Central and South America but has spread throughout many tropical environments. Artificial cloning of organisms Artificial cloning of organisms may also be called reproductive cloning.
First steps Hans Spemann, a German embryologist, was awarded a Nobel Prize in Physiology or Medicine in 1935 for his discovery of the effect now known as embryonic induction, exercised by various parts of the embryo, which directs the development of groups of cells into particular tissues and organs. In 1924, he and his student Hilde Mangold were the first to perform somatic-cell nuclear transfer using amphibian embryos – one of the first steps towards cloning. Methods Reproductive cloning generally uses "somatic cell nuclear transfer" (SCNT) to create animals that are genetically identical. This process entails the transfer of a nucleus from a donor adult cell (somatic cell) to an egg from which the nucleus has been removed, or to a cell from |
dissolution process is reversible and is used in the production of regenerated celluloses (such as viscose and cellophane) from dissolving pulp. The most important solubilizing agent is carbon disulfide in the presence of alkali. Other agents include Schweizer's reagent, N-methylmorpholine N-oxide, and lithium chloride in dimethylacetamide. In general, these agents modify the cellulose, rendering it soluble. The agents are then removed concomitant with the formation of fibers. Cellulose is also soluble in many kinds of ionic liquids. The history of regenerated cellulose is often cited as beginning with George Audemars, who first manufactured regenerated nitrocellulose fibers in 1855. Although these fibers were soft and strong – resembling silk – they had the drawback of being highly flammable. Hilaire de Chardonnet perfected production of nitrocellulose fibers, but manufacturing of these fibers by his process was relatively uneconomical. In 1890, L.H. Despeissis invented the cuprammonium process – which uses a cuprammonium solution to solubilize cellulose – a method still used today for production of artificial silk. In 1891, it was discovered that treatment of cellulose with alkali and carbon disulfide generated a soluble cellulose derivative known as viscose. This process, patented by the founders of the Viscose Development Company, is the most widely used method for manufacturing regenerated cellulose products. Courtaulds purchased the patents for this process in 1904, leading to significant growth of viscose fiber production. By 1931, expiration of patents for the viscose process led to its adoption worldwide. Global production of regenerated cellulose fiber peaked in 1973 at 3,856,000 tons. Regenerated cellulose can be used to manufacture a wide variety of products.
While the first application of regenerated cellulose was as a clothing textile, this class of materials is also used in the production of disposable medical devices as well as fabrication of artificial membranes. Cellulose esters and ethers The hydroxyl groups (−OH) of cellulose can be partially or fully reacted with various reagents to afford derivatives with useful properties, mainly cellulose esters and cellulose ethers (−OR). In principle, although not always in current industrial practice, cellulosic polymers are renewable resources. Ester derivatives include: Cellulose acetate and cellulose triacetate, film- and fiber-forming materials that find a variety of uses. Nitrocellulose, initially used as an explosive and an early film-forming material; with camphor, nitrocellulose gives celluloid. Ether derivatives include: Sodium carboxymethyl cellulose, which can be cross-linked to give croscarmellose sodium (E468) for use as a disintegrant in pharmaceutical formulations. Commercial applications Cellulose for industrial use is mainly obtained from wood pulp and from cotton. Paper products: Cellulose is the major constituent of paper, paperboard, and card stock. Electrical insulation paper: Cellulose is used in diverse forms as insulation in transformers, cables, and other electrical equipment. Fibers: Cellulose is the main ingredient of textiles. Cotton and synthetics (nylons) each have about a 40% market share by volume. Other plant fibers (jute, sisal, hemp) represent about 20% of the market. Rayon, cellophane and other "regenerated cellulose fibers" are a small portion (5%). Consumables: Microcrystalline cellulose (E460i) and powdered cellulose (E460ii) are used as inactive fillers in drug tablets, and a wide range of soluble cellulose derivatives, E numbers E461 to E469, are used as emulsifiers, thickeners and stabilizers in processed foods. Cellulose powder is, for example, used in processed cheese to prevent caking inside the package.
Cellulose occurs naturally in some foods and is an additive in manufactured foods, contributing an indigestible component used for texture and bulk, potentially aiding in defecation. Building material: Hydroxyl bonding of cellulose in water produces a sprayable, moldable material as an alternative to the use of plastics and resins. The recyclable material can be made water- and fire-resistant. It provides sufficient strength for use as a building material. Cellulose insulation made from recycled paper is becoming popular as an environmentally preferable material for building insulation. It can be treated with boric acid as a fire retardant. Miscellaneous: Cellulose can be converted into cellophane, a thin transparent film. It is the base material for the celluloid that was used for photographic and movie films until the mid-1930s. Cellulose is used to make water-soluble adhesives and binders such as methyl cellulose and carboxymethyl cellulose which are used in wallpaper paste. Cellulose is further used to make hydrophilic and highly absorbent sponges. Cellulose is the raw material in the manufacture of nitrocellulose (cellulose nitrate) which is used in smokeless gunpowder. Pharmaceuticals: Cellulose derivatives, such as microcrystalline cellulose (MCC), have the advantages of retaining water, being a stabilizer and thickening agent, and in reinforcement of drug tablets. Aspirational Energy crops: The major combustible component of non-food energy crops is cellulose, with lignin second. Non-food energy crops produce more usable energy than edible energy crops (which have a large starch component), but still compete with food crops for agricultural land and water resources. Typical non-food energy crops include industrial hemp, switchgrass, Miscanthus, Salix (willow), and Populus (poplar) species. A strain | relatively difficult compared to the breakdown of other polysaccharides. However, this process can be significantly intensified in a proper solvent, e.g. 
in an ionic liquid. Most mammals have limited ability to digest dietary fiber such as cellulose. Some ruminants like cows and sheep contain certain symbiotic anaerobic bacteria (such as Cellulomonas and Ruminococcus spp.) in the flora of the rumen, and these bacteria produce enzymes called cellulases that hydrolyze cellulose. The breakdown products are then used by the bacteria for proliferation. The bacterial mass is later digested by the ruminant in its digestive system (stomach and small intestine). Horses use cellulose in their diet by fermentation in their hindgut. Some termites harbor in their hindguts certain flagellate protozoa that produce such enzymes, whereas others contain bacteria or may produce cellulase themselves. The enzymes used to cleave the glycosidic linkage in cellulose are glycoside hydrolases, including endo-acting cellulases and exo-acting glucosidases. Such enzymes are usually secreted as part of multienzyme complexes that may include dockerins and carbohydrate-binding modules. Breakdown (thermolysis) At temperatures above 350 °C, cellulose undergoes thermolysis (also called 'pyrolysis'), decomposing into solid char, vapors, aerosols, and gases such as carbon dioxide. The maximum yield of vapors, which condense to a liquid called bio-oil, is obtained at 500 °C. Semi-crystalline cellulose polymers react at pyrolysis temperatures (350–600 °C) in a few seconds; this transformation has been shown to occur via a solid-to-liquid-to-vapor transition, with the liquid (called intermediate liquid cellulose or molten cellulose) existing for only a fraction of a second. Glycosidic bond cleavage produces short cellulose chains of two to seven monomers comprising the melt. Vapor bubbling of intermediate liquid cellulose produces aerosols, which consist of short-chain anhydro-oligomers derived from the melt. Continuing decomposition of molten cellulose produces volatile compounds including levoglucosan, furans, pyrans, light oxygenates and gases via primary reactions.
Within thick cellulose samples, volatile compounds such as levoglucosan undergo 'secondary reactions' to volatile products including pyrans and light oxygenates such as glycolaldehyde. Hemicellulose Hemicelluloses are polysaccharides related to cellulose that comprise about 20% of the biomass of land plants. In contrast to cellulose, hemicelluloses are derived from several sugars in addition to glucose, especially xylose but also including mannose, galactose, rhamnose, and arabinose. Hemicelluloses consist of shorter chains – between 500 and 3000 sugar units. Furthermore, hemicelluloses are branched, whereas cellulose is unbranched.
In 1891, it was discovered that treatment of cellulose with alkali and carbon disulfide generated a soluble cellulose derivative known as viscose. This process, patented by the founders of the Viscose Development Company, is the most widely used method for manufacturing regenerated cellulose products. Courtaulds purchased the patents for this process in 1904, leading to significant growth of viscose fiber production. By 1931, expiration of patents for the viscose process led to its adoption worldwide. Global production of regenerated cellulose fiber peaked in 1973 at 3,856,000 tons. Regenerated cellulose can be used to manufacture a wide variety of products. While the first application of regenerated cellulose was as a clothing textile, this class of materials is also used in the production of disposable medical devices as well as fabrication of artificial membranes. Cellulose esters and ethers The hydroxyl groups (−OH) of cellulose can be partially or fully reacted with various reagents to afford derivatives with useful properties, chiefly cellulose esters and cellulose ethers (−OR). In principle, although not always in current industrial practice, cellulosic polymers are renewable resources. Ester derivatives include: Cellulose acetate and cellulose triacetate, which are film- and fiber-forming materials that find a variety of uses. Nitrocellulose, which was initially used as an explosive and was an early film-forming material. With camphor, nitrocellulose gives celluloid. Ether derivatives include: Sodium carboxymethyl cellulose, which can be cross-linked to give croscarmellose sodium (E468) for use as a disintegrant in pharmaceutical formulations. Commercial applications Cellulose for industrial use is mainly obtained from wood pulp and from cotton. Paper products: Cellulose is the major constituent of paper, paperboard, and card stock. Electrical insulation paper:
city and county seat of Montezuma County Cortez, Florida, a census-designated place Cortez, Nevada, ghost town Cortez, Pennsylvania, an unincorporated community Elsewhere Sea of Cortez or Gulf of California, in Mexico Other uses Cortez Motor Home, a Class-A motor coach made in the U.S. from 1963 to 1979 Agnelli & Nelson or Cortez, trance music duo Cortez, a character from The Longest Journey and Dreamfall Cortez, a type of running shoe from Nike People Surname Adrian T. Cortez (1978–2016), American trans woman and performer with the stage name Brittany CoxXx Alberto Cortez (1940–2019), Argentine singer and songwriter Alexandria Ocasio-Cortez (born 1989), American politician and educator Amado Cortez (1928–2003), Filipino | who led an expedition that caused the fall of the Aztec Empire Heidi Cortez (born 1981), American actress, model and writer Jayne Cortez (1936–2012), American poet Joana Cortez (born 1979), Brazilian tennis player Jody Cortez (born c. 1960), American drummer Joe Cortez (born 1943), Puerto Rican boxing referee Jorge Cortez (born 1972), Panamanian baseball player José Cortez (born 1975), American football player José Luis Cortez (born 1979), Ecuadorian footballer Luís Cortez (born 1994), Portuguese footballer Manuel Cortez (born 1979), German–Portuguese actor Mike Cortez (born 1980), American basketball player Page Cortez (born 1961), American politician Paul E. 
Cortez, American soldier and war criminal Philip Cortez (born 1978), American politician Rafael Cortez (born 1976), Brazilian journalist, actor and comedian Raul Cortez (1932–2006), Brazilian actor Ricardo Cortez (1899–1977), American silent film actor Stanley Cortez (1908–1997), American cinematographer Viorica Cortez (born 1935), Romanian-born French mezzo-soprano Given name Cortez Broughton (born 1997), American football player Cortez Gray (1916–1996), American basketball player Cortez Kennedy (1968–2017), American football player Fictional Fabian Cortez, a Marvel Comics supervillain Sergeant Cortez, protagonist of the TimeSplitters video game series Ian Cortez, a Cuban intelligence agent working for the Colombian Cartel |
controlled by colonial settlers. The term colony originates from the ancient Roman colonia, a type of Roman settlement. Derived from colonus (farmer, cultivator, planter, or settler), it carries with it the sense of 'farm' and 'landed estate'. Furthermore, the term was used to refer to the older Greek apoikia, which were overseas settlements by ancient Greek city-states. The city that founded such a settlement became known as its metropolis ("mother-city"). Since early-modern times, historians, administrators, and political scientists have generally used the term "colony" to refer mainly to the many different overseas territories of particularly European states between the 15th and 20th centuries CE, with colonialism and decolonization as corresponding phenomena. While colonies often developed from trading outposts or territorial claims, such areas do not need to be a product of colonization, nor become colonially organized territories. Some historians use the term informal colony to refer to a country under the de facto control of another state, although this term is often contentious. Etymology The word "colony" comes from the Latin word colonia, used as a concept for Roman military bases and eventually cities. This in turn derives from the word colonus, which denoted a Roman tenant farmer. The terminology draws on an architectural analogy: a column (pillar) stands beneath the (often stylized) capital, or head, itself a biological analogy of the body as subservient beneath the controlling head (with 'capital' coming from the Latin word caput, meaning 'head'). So colonies are not independently self-controlled, but rather are controlled from a separate entity that serves the capital function. Roman colonies first appeared when the Romans conquered neighbouring Italic peoples. These were small farming settlements that appeared when the Romans had subdued an enemy in war.
Though a colony could take many forms, such as a trade outpost or a military base in enemy territory, such outposts were not inherently colonies. Its original definition, a settlement created by people migrating from a central region to an outlying one, became the modern definition. Settlements that began as Roman colonia include cities from Cologne (which retains this history in its name) and Belgrade to York. A tell-tale sign that a settlement within the Roman sphere of influence was once a Roman colony is a city centre with a grid pattern. Ancient examples Carthage formed as a Phoenician colony Cadiz formed as a Phoenician colony Cyrene was a colony of the Greeks of Thera Sicily was a Phoenician colony Sardinia was a Phoenician colony Marseille formed as a Greek colony Malta was a Phoenician colony Cologne formed as a Roman colony, and its modern name refers to the Latin term "Colonia". Kandahar was founded as a Greek colony by Alexander the Great in 330 BC, during the Hellenistic era. Modern historical examples : a colony of Portugal from the 16th century to its independence in 1975. gained its independence from Spain in 1810. was formed as a British Dominion in 1901 from a federation of six distinct British colonies which were founded between 1788 and 1829. : was a colony of Great Britain important in the Atlantic slave trade. It gained its independence in 1966. : a colony of Portugal since the 16th century. Independent since 1822. : was colonized first by France as New France (1534–1763) and England (in Newfoundland, 1582) then under British rule (1763–1867), before achieving Dominion status and | United States, classified by the United States as "an unincorporated territory". In 1914, the Puerto Rican House of Delegates voted unanimously in favor of independence from the United States, but this was rejected by the U.S. Congress as "unconstitutional" and in violation of the U.S. 1900 Foraker Act.
In 1952, after the US Congress approved Puerto Rico's constitution, its formal name became "Commonwealth of Puerto Rico", but its new name "did not change Puerto Rico's political, social, and economic relationship to the United States." That year, the United States advised the United Nations (UN) that the island was a self-governing territory. The United States has been "unwilling to play in public the imperial role...apparently it has no appetite for acknowledging in a public way the contradictions implicit in frankly colonial rule." The island has been called a colony by many, including US Federal judges, US Congresspeople, the Chief Justice of the Puerto Rico Supreme Court, and numerous scholars. consisted of territories and colonies held by various African and European powers, including the Dutch, the British, and the Nguni. The territory constituting the modern nation was ruled directly by the British from 1806 to 1910; it became the self-governing Dominion of the Union of South Africa in 1910. : a British colony from 1815 to 1948. Known as Ceylon, it was a British Dominion until 1972. Also a Portuguese colony in the 16th–17th centuries, and a Dutch colony in the 17th–18th centuries. was a colony of Japan from 1910 to 1945. North and South Korea were established in 1948. Korea was once a vassal state of China until 1895. has a complex history of colonial rule under various powers, including the Dutch (1624–1662), Spanish (1626–1642), Chinese (1683–1895), and Japanese (1895–1945). The precolonial (pre-1624) inhabitants of Taiwan are the ethno-linguistically Austronesian Taiwanese indigenous peoples, rather than the vast majority of present-day Taiwanese people, who are mostly ethno-linguistically Han Chinese.
Twice throughout history, Taiwan has served as a quasi-rump state for Chinese governments, the first instance being the Ming-loyalist Kingdom of Tungning (1662–1683) and the second instance being the present-day Republic of China (ROC), which officially claims continuity or succession from the Republic of China (1912–1949), having retreated from mainland China to Taiwan in 1949 during the final years of the Chinese Civil War (1927–1949). The ROC, whose de facto territory consists almost entirely of the island of Taiwan and its minor satellite islands, continues to rule Taiwan as if it were a separate country from the People's Republic of China (consisting of mainland China, Hong Kong, and Macau). The United States was formed from a union of thirteen British colonies. The Colony of Virginia was the first of the thirteen colonies. All thirteen declared independence in July 1776 and expelled the British governors. Current colonies The Special Committee on Decolonization maintains the United Nations list of Non-Self-Governing Territories, which identifies areas the United Nations (though not without controversy) believes are colonies. Given that dependent territories have varying degrees of autonomy and political power in the affairs of the controlling state, there is disagreement over the classification of "colony". See also Colonialism Colonization Decolonization Democratic Peace Theory Exploitation
photography as the result of an optical illusion due to motion blur, especially in interlaced video recording, and are typically afterimage trails of flying insects and their wingbeats. Optical analysis Robert Todd Carroll (2003), having consulted an entomologist (Doug Yanega), identified rods as images of flying insects recorded over several cycles of wing-beating on video recording devices. An insect captured on video a number of times while propelling itself forward gives the illusion of a single elongated rod-like body with bulges. "The Straight Dope" columnist Cecil Adams (2020) also explained rods as such phenomena, namely tricks of light resulting from how images (primarily video) of flying insects are recorded and played back, adding that investigators have shown the rod-like bodies result from motion blur when the camera records with relatively long exposure times. The claims that these are extraordinary creatures, possibly alien, have been advanced either by people with active imaginations or by hoaxers. In August 2005, China Central Television (CCTV) aired a two-part documentary about flying rods in China. It reported the events from May to June of the same year at Tonghua Zhenguo Pharmaceutical Company in Tonghua City, Jilin Province, which debunked the flying rods. Surveillance cameras in the facility's compound captured video footage of flying rods identical to those shown in Jose Escamilla's video. Getting no satisfactory answer to the phenomenon, curious scientists at the facility decided to solve the mystery by attempting to catch these airborne creatures. Huge nets were set up, and the same surveillance cameras then captured images of rods flying into the trap. When the nets were inspected, the "rods" were no more than regular moths and other ordinary flying insects.
Subsequent investigations proved that the appearance of flying rods on video was an optical illusion created by the slower recording speed of the camera. After attending a lecture by Jose Escamilla, UFO investigator Robert Sheaffer wrote that "some of his 'rods' were obviously insects zipping across the field at a high angular rate" and others appeared to be "appendages" which were birds' wings blurred by the camera exposure. Paranormal claims Various paranormal interpretations of this phenomenon appear in popular culture. One of the more outspoken proponents of rods as alien life forms was the late Jose Escamilla, who claimed to have been the first to film them on March 19, 1994, in Roswell, New Mexico, while attempting to film a UFO. Escamilla later made additional videos and embarked on lecture tours to promote his claims. In popular culture In the manga Jojo's Bizarre Adventure, the Stone Ocean arc features a character named Rykiel with the Stand ability "Sky
When the critical stress, Fcr (Fcr = Pcr/A, where A = cross-sectional area of the column), is greater than the proportional limit of the material, the column is experiencing inelastic buckling. Since at this stress the slope of the material's stress-strain curve, Et (called the tangent modulus), is smaller than that below the proportional limit, the critical load at inelastic buckling is reduced. More complex formulas and procedures apply for such cases, but in its simplest form the critical buckling load is given as Equation (3): Fcr = π² Et / (KL/r)². A column with a cross section that lacks symmetry may suffer torsional buckling (sudden twisting) before, or in combination with, lateral buckling. The presence of the twisting deformations renders both theoretical analyses and practical designs rather complex. Eccentricity of the load, or imperfections such as initial crookedness, decreases column strength. If the axial load on the column is not concentric, that is, its line of action is not precisely coincident with the centroidal axis of the column, the column is characterized as eccentrically loaded. The eccentricity of the load, or an initial curvature, subjects the column to immediate bending. The increased stresses due to the combined axial-plus-flexural stresses result in a reduced load-carrying ability. Column elements are considered to be massive if their smallest side dimension is equal to or more than 400 mm. Massive columns have the ability to increase in carrying strength over long time periods (even during periods of heavy load). Taking into account that structural loads may also increase over time (and also the threat of progressive failure), massive columns have an advantage compared to non-massive ones. Extensions When a column is too long to be built or transported in one piece, it has to be extended or spliced at the construction site.
A reinforced concrete column is extended by having the steel reinforcing bars protrude a few inches or feet above the top of the concrete, then placing the next level of reinforcing bars to overlap, and pouring the concrete of the next level. A steel column is extended by welding or bolting splice plates on the flanges and webs or walls of the columns to provide a few inches or feet of load transfer from the upper to the lower column section. A timber column is usually extended by the use of a steel tube or wrapped-around sheet-metal plate bolted onto the two connecting timber sections. Foundations A column that carries the load down to a foundation must have means to transfer the load without overstressing the foundation material. Reinforced concrete and masonry columns are generally built directly on top of concrete foundations. When seated on a concrete foundation, a steel column must have a base plate to spread the load over a larger area, and thereby reduce the bearing pressure. The base plate is a thick, rectangular steel plate usually welded to the bottom end of the column. Orders The Roman author Vitruvius, relying on the writings (now lost) of Greek authors, tells us that the ancient Greeks believed that their Doric order developed from techniques for building in wood. The earlier smoothed tree-trunk was replaced by a stone cylinder. Doric order The Doric order is the oldest and simplest of the classical orders. It is composed of a vertical cylinder that is wider at the bottom. It generally has neither a base nor a detailed capital. It is instead often topped with an inverted frustum of a shallow cone or a cylindrical band of carvings. It is often referred to as the masculine order because it is represented in the bottom level of the Colosseum and the Parthenon, and was therefore considered to be able to hold more weight. The height-to-thickness ratio is about 8:1. The shaft of a Doric Column is almost always fluted. 
The Greek Doric, developed in the western Dorian region of Greece, is the heaviest and most massive of the orders. It rises from the stylobate without any base; it is from four to six times as tall as its diameter; it has twenty broad flutes; the capital consists simply of a banded necking swelling out into a smooth echinus, which carries a flat square abacus; the Doric entablature is also the heaviest, being about one-fourth the height of the column. The Greek Doric order was not used after c. 100 B.C. until its "rediscovery" in the mid-eighteenth century. Tuscan order The Tuscan order, also known as Roman Doric, is also a simple design, the base and capital both being series of cylindrical disks of alternating diameter. The shaft is almost never fluted. The proportions vary, but are generally similar to Doric columns. Height to width ratio is about 7:1. Ionic order The Ionic column is considerably more complex than the Doric or Tuscan. It usually has a base and the shaft is often fluted (it has grooves carved up its length). The capital features a volute, an ornament shaped like a scroll, at the four corners. The height-to-thickness ratio is around 9:1. Due to the more refined proportions and scroll capitals, the Ionic column is sometimes associated with academic buildings. Ionic style columns were used on the second level of the Colosseum. Corinthian order The Corinthian order is named for the Greek city-state of Corinth, to which it was connected in the period. However, according to the architectural historian Vitruvius, the column was created by the sculptor Callimachus, probably an Athenian, who drew acanthus leaves growing around a votive basket. In fact, the oldest known Corinthian capital was found in Bassae, dated at 427 BC. It is sometimes called the feminine order because it is on the top level of the Colosseum and holding up the least weight, and also has the slenderest ratio of thickness to height. Height to width ratio is about 10:1.
Composite order The Composite order draws its name from the capital being a composite of the Ionic and Corinthian capitals. The acanthus of the Corinthian column already has a scroll-like element, so the distinction is sometimes subtle. Generally the Composite is similar to the Corinthian in proportion and employment, often in the upper tiers of colonnades. Height to width ratio is about 11:1 or 12:1. Solomonic A Solomonic column, sometimes called "barley sugar", begins on a base and ends in a capital, which may be of any order, but the shaft twists in a tight spiral, producing a dramatic, serpentine effect of movement. Solomonic columns were developed in the ancient world, but remained rare there. A famous marble set, probably 2nd century, was brought to Old St. Peter's Basilica by Constantine I, and placed round the saint's shrine, and was thus familiar throughout the Middle Ages, by which time they were thought to have been removed from the Temple of Jerusalem. The style was used in bronze by Bernini for his spectacular St. Peter's baldachin, actually a ciborium (which displaced Constantine's columns), and thereafter became very popular with Baroque and Rococo church architects, above all in Latin America, where they were very often used, especially on a small scale, as they are easy to produce in wood by turning on a lathe (hence also the style's popularity for spindles on furniture and stairs). Caryatid A Caryatid is a sculpted female figure serving as an architectural support taking the place of a column or a pillar supporting an entablature on her head. The Greek term literally means "maidens of Karyai", an ancient town of Peloponnese. Engaged columns In architecture, an engaged column is a column embedded in a wall and partly projecting from the surface of the wall, sometimes defined as semi or three-quarter detached. 
Engaged columns are rarely found in classical Greek architecture, and then only in exceptional cases, but in Roman architecture they exist in abundance, most commonly embedded in the cella walls of pseudoperipteral buildings. Pillar tombs Pillar tombs are monumental graves, which typically feature a single, prominent pillar or column, often made of stone. A number of world cultures incorporated pillars into tomb structures. In the ancient Greek colony of Lycia in Anatolia, one of these edifices is located at the tomb of Xanthos. In the town of Hannassa in southern Somalia, | porticoes and to support the roofs of the hypostyle hall, partly inspired by the ancient Egyptian precedent. Since the columns carried timber beams rather than stone, they could be taller, slimmer and more widely spaced than Egyptian ones. Middle Ages Columns, or at least large structural exterior ones, became much less significant in the architecture of the Middle Ages. The classical forms were abandoned in both Byzantine and Romanesque architecture in favour of more flexible forms, with capitals often using various types of foliage decoration, and in the West scenes with figures carved in relief. During the Romanesque period, builders continued to reuse and imitate ancient Roman columns wherever possible; where new, the emphasis was on elegance and beauty, as illustrated by twisted columns. Often they were decorated with mosaics. Renaissance and later styles Renaissance architecture was keen to revive the classical vocabulary and styles, and the informed use and variation of the classical orders remained fundamental to the training of architects throughout Baroque, Rococo and Neo-classical architecture. Structure Early columns were constructed of stone, some out of a single piece of stone. Monolithic columns are among the heaviest stones used in architecture. Other stone columns are created out of multiple sections of stone, mortared or dry-fit together.
In many classical sites, sectioned columns were carved with a centre hole or depression so that they could be pegged together, using stone or metal pins. The design of most classical columns incorporates entasis (the inclusion of a slight outward curve in the sides) plus a reduction in diameter along the height of the column, so that the top is as little as 83% of the bottom diameter. This reduction mimics the parallax effects which the eye expects to see, and tends to make columns look taller and straighter than they are, while entasis adds to that effect. There are flutes and fillets that run up the shaft of columns. The flute is the part of the column that is indented with a semicircular profile. The fillet of the column is the part between each of the flutes on Ionic order columns. The flute width changes on all tapered columns as it goes up the shaft and stays the same on all non-tapered columns. This was done to add visual interest to the columns. The Ionic and the Corinthian are the only orders that have fillets and flutes. The Doric style has flutes but not fillets. Doric flutes are connected at a sharp point where the fillets are located on Ionic and Corinthian order columns. Nomenclature Most classical columns arise from a basis, or base, that rests on the stylobate, or foundation, except for those of the Doric order, which usually rest directly on the stylobate. The basis may consist of several elements, beginning with a wide, square slab known as a plinth. The simplest bases consist of the plinth alone, sometimes separated from the column by a convex circular cushion known as a torus. More elaborate bases include two toruses, separated by a concave section or channel known as a scotia or trochilus. Scotiae could also occur in pairs, separated by a convex section called an astragal, or bead, narrower than a torus. Sometimes these sections were accompanied by still narrower convex sections, known as annulets or fillets.
At the top of the shaft is a capital, upon which the roof or other architectural elements rest. In the case of Doric columns, the capital usually consists of a round, tapering cushion, or echinus, supporting a square slab, known as an abax or abacus. Ionic capitals feature a pair of volutes, or scrolls, while Corinthian capitals are decorated with reliefs in the form of acanthus leaves. Either type of capital could be accompanied by the same moldings as the base. In the case of free-standing columns, the decorative elements atop the shaft are known as a finial. Modern columns may be constructed out of steel, poured or precast concrete, or brick, left bare or clad in an architectural covering, or veneer. Used to support an arch, an impost (or pier) is the topmost member of a column. The bottom-most part of the arch, called the springing, rests on the impost. Equilibrium, instability, and loads As the axial load on a perfectly straight slender column with elastic material properties is increased in magnitude, this ideal column passes through three states: stable equilibrium, neutral equilibrium, and instability. The straight column under load is in stable equilibrium if a lateral force, applied between the two ends of the column, produces a small lateral deflection which disappears and the column returns to its straight form when the lateral force is removed. If the column load is gradually increased, a condition is reached in which the straight form of equilibrium becomes so-called neutral equilibrium, and a small lateral force will produce a deflection that does not disappear and the column remains in this slightly bent form when the lateral force is removed. The load at which neutral equilibrium of a column is reached is called the critical or buckling load. The state of instability is reached when a slight increase of the column load causes uncontrollably growing lateral deflections leading to complete collapse.
For an axially loaded straight column with any end support conditions, the equation of static equilibrium, in the form of a differential equation, can be solved for the deflected shape and critical load of the column. With hinged, fixed or free end support conditions the deflected shape in neutral equilibrium of an initially straight column with uniform cross section throughout its length always follows a partial or composite sinusoidal curve shape, and the critical load is given by Equation (1): Pcr = π² E Imin / L², where E = elastic modulus of the material, Imin = the minimal moment of inertia of the cross section, and L = actual length of the column between its two end supports. A variant of (1) is given by Equation (2): Fcr = π² E / (KL/r)², where r = radius of gyration of the column cross-section, equal to the square root of (I/A), K = ratio of the longest half sine wave to the actual column length, Et = tangent modulus at the stress Fcr, and KL = effective length (length of an equivalent hinged-hinged column). From Equation (2) it can be noted that the buckling strength of a column is inversely proportional to the square of its length.
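Euler's relation in Equation (1), together with the effective-length variant of Equation (2) and the inverse-square dependence on length, can be sketched numerically. This is a minimal illustration; the column properties used below (a 3 m pinned-pinned steel column with E = 200 GPa and Imin = 8.0e-6 m^4) are assumed values, not figures from the text:

```python
import math

def euler_critical_load(E, I_min, L, K=1.0):
    """Elastic critical (buckling) load: Pcr = pi^2 * E * I_min / (K*L)^2.

    K is the effective-length factor (K = 1 for a hinged-hinged column)."""
    return math.pi ** 2 * E * I_min / (K * L) ** 2

# Illustrative (assumed) values: a 3 m pinned-pinned steel column.
E = 200e9       # elastic modulus, Pa
I_min = 8.0e-6  # minimal second moment of area, m^4
L = 3.0         # column length, m

P_cr = euler_critical_load(E, I_min, L)

# Buckling strength is inversely proportional to the square of length:
# halving the length quadruples the critical load.
assert math.isclose(euler_critical_load(E, I_min, L / 2), 4 * P_cr)
```

Equation (3) for inelastic buckling has the same form as Equation (2), with the tangent modulus Et substituted for E.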
supernatural layers to consider the story as a "derailed love story" and "a story about our tendency as humans to demonize the other". Music Doctor Carmilla, aka Maki Yamazaki, is a retrospective-futurist visual kei multi-instrumentalist, musician and composer. Autumn, an album composed by A Letter for Carmilla in October 2017 (romantic dungeon synth). "Monster" (2020), the debut track of the Red Velvet subunit Irene & Seulgi, is believed to be thematically inspired by Carmilla, with the music video having Irene loosely portray Laura and Seulgi loosely portray Carmilla. "One&Only" (2017), the debut track of Loona member Go Won, is believed to be thematically inspired by Carmilla, with Go Won loosely portraying Carmilla. The song "Carmilla" by J-rock artist Kaya and the themes of its music video are clear references to the novel. Opera A chamber opera version of Carmilla appeared in Carmilla: A Vampire Tale (1970), music by Ben Johnston, script by Wilford Leach. Seated on a sofa, Laura and Carmilla recount the story retrospectively in song. Rock music Jon English released a song named Carmilla, inspired by the short story, on his 1980 album Calm Before the Storm. British extreme metal band Cradle of Filth's lead singer Dani Filth has often cited Sheridan Le Fanu as an inspiration for his lyrics. For example, their EP, V Empire or Dark Faerytales in Phallustein (1994), includes a track titled "Queen of Winter, Throned", which contains the lyrics: "Iniquitous/I share Carmilla's mask/A gaunt mephitic voyeur/On the black side of the glass". Additionally, the album Dusk... and Her Embrace (1996) was largely inspired by Carmilla and Le Fanu's writings in general. The band has also recorded an instrumental track titled "Carmilla's Masque", and in the track "A Gothic Romance", the lyric "Portrait of the Dead Countess" may reference the portrait found in the novel of the Countess Mircalla.
The band Discipline's 1993 album Push & Profit included a ten-minute song entitled "Carmilla", based on Le Fanu's character. The lyrics for "Blood and Roses", LaHost's track on the EMI compilation album Fire in Harmony (1985), are loosely based on the Roger Vadim film version of Carmilla. The title track of the album Symphonies of the Night (2013), by the German/Norwegian band Leaves' Eyes, was inspired by Carmilla. Alessandro Nunziati, better known as Lord Vampyr and the former vocalist of Theatres des Vampires, has a song named "Carmilla Whispers from the Grave" on his debut solo album, De Vampyrica Philosophia (2005). Theatres des Vampires, an Italian extreme gothic metal band, has produced a video single called "Carmilla" for its album Moonlight Waltz. They also reference the novel in numerous other songs. "Carmillas of Love" by the band of Montreal references the novel in its title and lyrics. Periodicals A Japanese lesbian magazine is named after Carmilla, as Carmilla "draws hetero women into the world of love between women". Radio (chronological) The Columbia Workshop presented an adaptation (CBS, July 28, 1940, 30 min.). Lucille Fletcher's script, directed by Earle McGill, relocated the story to contemporary New York state and allowed Carmilla (Jeanette Nolan) to claim her victim Helen (Joan Tetzel). The character Dr. Hesselius is featured as an occult sleuth in The Hall of Fantasy episode "The Shadow People" (September 5, 1952), broadcast on the Mutual Broadcasting System. In 1975, the CBS Radio Mystery Theater broadcast an adaptation by Ian Martin (CBS, July 31, 1975, rebroadcast December 10, 1975). Mercedes McCambridge played Laura Stanton, and Marian Seldes played Carmilla. Vincent Price hosted an adaptation (reset to 1922 Vienna) by Brainard Duffield, produced and directed by Fletcher Markle, on the Sears Radio Theater (CBS, March 7, 1979), with Antoinette Bower and Anne Gibbon.
On November 20, 1981, the CBC Radio series Nightfall aired an adaptation of Carmilla written by Graham Pomeroy and John Douglas. BBC Radio 4 broadcast Don McCamphill's Afternoon Play dramatisation on June 5, 2003, with Anne-Marie Duff as Laura, Brana Bajic as Carmilla and David Warner as Laura's father. Stage (chronological) Wilford Leach and John Braswell's ETC Company staged an adaptation of Carmilla in repertory at La MaMa Experimental Theatre Club throughout the 1970s. In Elfriede Jelinek's play Illness or Modern Women (1984), a woman, Emily, transforms another woman, Carmilla, into a vampire; both become lesbians and join together to drink the blood of children. A German-language adaptation of Carmilla by Friedhelm Schneidewind, from Studio-Theatre Saarbruecken, toured Germany and other European countries (including Romania) from April 1994 until 2000. The Wildclaw Theater in Chicago performed a full-length adaptation of Carmilla by Aly Renee Amidei in January and February 2011. Zombie Joe's Underground Theater Group in North Hollywood performed an hour-long adaptation of Carmilla, by David MacDowell Blue, in February and March 2014. Carmilla was also showcased at Cayuga Community College in Auburn, New York by Harlequin Productions, with Meg Owren playing Laura and Dominique Baker-Lanning playing Carmilla. The David MacDowell Blue adaptation of Carmilla was performed by The Reedy Point Players of Delaware City in October 2016. This production was directed by Sean McGuire, produced by Gail Springer Wagner, with assistant director Sarah Hammond, technical director Kevin Meinhaldt and technical execution by Aniela Meinhaldt. The performance featured Mariza Esperanza, Shamma Casson and Jada Bennett, with appearances by Wade Finner, David Fullerton, Fran Lazartic, Nicole Peters Peirce, Gina Olkowski and Kevin Swed.
Television (alphabetical by series title) In Season 2 of Castlevania, Carmilla is introduced as a secondary antagonist, acting as a sly and ambitious general on Dracula's War Council. Unlike her video-game counterpart, who is immensely faithful to her leader, Carmilla takes issue with Dracula's plan to kill off their only source of food and has designs to take Dracula's place and build her own army to subjugate humanity alongside her Council of Sisters, Lenore (inspired by Laura), Striga, and Morana. Her plans are bolstered by Dracula's death at the hands of his son, Alucard, and her kidnapping of the Devil Forgemaster, Hector. She is later personally confronted by Isaac, Dracula's other loyal Devil Forgemaster, when he and his Night Creature horde invade her castle in Styria to rescue Hector and put an end to her ambitions. After singlehandedly fighting him and his host of demons, she commits suicide in Season 4. The Doctor Who serial State of Decay (1980) features a vampire named Camilla (not Carmilla) who, in a brief but explicit moment, finds much to "admire" in the Doctor's female travelling companion Romana, who has to turn away from the vampire's intense gaze. A television version for the British series Mystery and Imagination was transmitted on 12 November 1966. Jane Merrow played the title role, Natasha Pyne her victim. In 1989, Gabrielle Beaumont directed Jonathan Furst's adaptation of Carmilla as an episode of the Showtime television series Nightmare Classics, featuring Meg Tilly as the vampire and Ione Skye as her victim Marie. Furst relocated the story to an American antebellum southern plantation. An episode of the anime series Hellsing features a female vampire calling herself "Laura". She is later referred to as "Countess Karnstein". The character is heavily implied to be sexually attracted to Integra Hellsing, a female protagonist of the series.
In episode 36 of The Return of Ultraman, the monster of the week, Draculas, originates from a planet named Carmilla. He possesses the corpse of a woman as his human disguise. In season 2, episodes 5 and 6 of the HBO TV series True Blood, Hotel Carmilla, in Dallas, Texas, has been built for vampires. It features heavily shaded rooms and provides room service of human "snacks" for its vampire clientele, who can order specific blood types and genders. In the first and second seasons of the Freeform series Shadowhunters, based on Cassandra Clare's book series The Mortal Instruments, a vampire named Camille is a minor recurring character. In the second season of Yu-Gi-Oh! GX, a vampire named Camula is one of the seven Shadow Riders trying to get the gate keys to the Sacred Beast cards. She defeats and traps the souls of Dr. Crowler and Zane Truesdale, but is defeated by the protagonist, Jaden Yuki, after which her own soul is trapped while the others are released. Web series Carmilla is a web series on YouTube starring Natasha Negovanlis as Carmilla and Elise Bauman as Laura. First released on August 19, 2014, it is a comedic, modern adaptation of the novella which takes place at a modern-day university, where both girls are students. They become roommates after Laura's first roommate mysteriously disappears and Carmilla moves in, taking her place. The final episode of the web series was released on October 13, 2016. In 2017, a movie was made based on the series. The Carmilla Movie was initially released on October 26, 2017, to Canadian audiences through Cineplex theatres for one night only. A digital streaming version was also pre-released on October 26, 2017, for fans who had pre-ordered the film on VHX. The following day the movie enjoyed a wide release on the streaming platform Fullscreen. Video games The vampiress Carmilla is an antagonist in the Castlevania series.
She is a key figure in Castlevania: Circle of the Moon, in which she tries to resurrect Lord Dracula. In the time-distorted fighting game Castlevania Judgment, she is a playable character battling to protect her master, and in the Lords of Shadow reimagining, she is a recurring boss and former leader of the heroic Brotherhood of Light. In every game she is portrayed as having great admiration for Dracula that borders on obsessive devotion. In the Japanese action game series OneeChanbara, Carmilla is the matriarch of the vampiric clan. She appears in the 2011 title Oneechanbara Z ~ Kagura ~ as the manipulator and main antagonist of sister heroines Kagura and Saaya, first using them to attack her rivals before trying (and failing) to eliminate them as pawns. The main antagonist in Ace Combat Infinity is a mysterious girl known only as the "Butterfly Master".

disarms the general and disappears. The general explains that Carmilla is also Millarca, both anagrams of the original name of the vampire Mircalla, Countess Karnstein. The party is joined by Baron Vordenburg, the descendant of the hero who rid the area of vampires long ago. Vordenburg, an authority on vampires, has discovered that his ancestor was romantically involved with the Countess Karnstein before she died and became one of the undead. Using his forefather's notes, he locates Mircalla's hidden tomb. An imperial commission exhumes the body of Mircalla/Millarca/Carmilla. Immersed in blood, it seems to be breathing faintly, its heart beating, its eyes open. A stake is driven through its heart, and it gives a corresponding shriek; then the head is struck off. The body and head are burned to ashes, which are thrown into a river. Afterwards, Laura's father takes his daughter on a year-long tour through Italy to regain her health and recover from the trauma, which she never fully does. Motifs "Carmilla" exhibits the primary characteristics of Gothic fiction.
It includes a supernatural figure, a dark setting of an old castle, a mysterious atmosphere, and ominous or superstitious elements. In the novella, Le Fanu abolishes the Victorian view of women as merely useful possessions of men, relying on them and needing their constant guardianship. The male characters of the story, such as Laura's father and General Spielsdorf, are exposed as the opposite of the putative Victorian male: helpless and unproductive. The nameless father reaches an agreement with Carmilla's mother, whereas Spielsdorf cannot control the fate of his daughter, Bertha. Both of these scenes portray women as equal, if not superior, to men. This female empowerment is even more threatening to men if we consider Carmilla's vampiric predecessors and their relationship with their prey. Carmilla is the opposite of those male vampires: she is actually involved with her victims both emotionally and (theoretically) sexually. Moreover, she is able to exceed even more limitations by dominating death. In the end, her immortality is suggested to be sustained by the river into which her ashes were thrown. Le Fanu also departs from the negative idea of female parasitism and lesbianism by depicting a mutual and irresistible connection between Carmilla and Laura. The latter, along with other female characters, becomes a symbol of all Victorian women: restrained and judged for their emotional reflexes. The ambiguity of Laura's speech and behaviour reveals her struggles with being fully expressive of her concerns and desires. Another important element of "Carmilla" is the concept of dualism, presented through the juxtaposition of vampire and human, as well as lesbian and heterosexual. It is also vivid in Laura's irresolution, since she "feels both attraction and repulsion" towards Carmilla. The duality of Carmilla's character is suggested by her human attributes, her lack of predatory demeanour, and her shared experience with Laura.
According to Jönsson, Carmilla can be seen as a representation of the dark side of all mankind. Sources As with Dracula, critics have looked for the sources used in the writing of Carmilla. One source was Dom Augustin Calmet's dissertation on magic, vampires, and the apparitions of spirits, entitled Traité sur les apparitions des esprits et sur les vampires ou les revenants de Hongrie, de Moravie, &c. (1751). This is evidenced by a report analysed by Calmet from a priest who, three years earlier, had learned of a town being tormented by a vampiric entity. Having travelled to the town to investigate and collected information from its various inhabitants, the priest learned that a vampire from the nearby cemetery had tormented many of the inhabitants at night, haunting them in their beds. An unknown Hungarian traveller came to the town during this period and helped it by setting a trap at the cemetery and decapitating the vampire that resided there, curing the town of its torment. This story was retold by Le Fanu and adapted into the thirteenth chapter of Carmilla. According to Matthew Gibson, the Reverend Sabine Baring-Gould's The Book of Were-wolves (1863) and his account of Elizabeth Báthory, Coleridge's Christabel (Part 1, 1797 and Part 2, 1800), and Captain Basil Hall's Schloss Hainfeld; or a Winter in Lower Styria (London and Edinburgh, 1836) are other sources for Le Fanu's Carmilla. Hall's account provides much of the Styrian background and, in particular, a model for both Carmilla and Laura in the figure of Jane Anne Cranstoun, Countess Purgstall. Influence Carmilla, the title character, is the prototype for a legion of female and lesbian vampires.
Although Le Fanu portrays his vampire's sexuality with the circumspection one would expect of his time, lesbian attraction is evidently the main dynamic between Carmilla and the narrator of the story: When compared to other literary vampires of the 19th century, Carmilla is a similar product of a culture with strict sexual mores and tangible religious fear. While Carmilla selected exclusively female victims, she became emotionally involved with only a few. Carmilla had nocturnal habits but was not confined to the darkness. She had unearthly beauty and was able to change her form and pass through solid walls. Her animal alter ego was a monstrous black cat, not a large dog as in Dracula. She did, however, sleep in a coffin. Carmilla works as a Gothic horror story because her victims are portrayed as succumbing to a perverse and unholy temptation that has severe metaphysical consequences for them. Some critics, among them William Veeder, suggest that Carmilla, notably in its outlandish use of narrative frames, was an important influence on Henry James's The Turn of the Screw (1898). Bram Stoker's Dracula Although Carmilla is a lesser-known and far shorter Gothic vampire story than Dracula, generally considered the masterwork of that genre, the latter was influenced by Le Fanu's work: Both stories are told in the first person. Dracula expands on the idea of a first-person account by creating a series of journal entries and logs of different persons, with a plausible background story for their having been compiled. Both authors indulge the air of mystery, though Stoker takes it further than Le Fanu by allowing the characters to solve the enigma of the vampire along with the reader. The descriptions of the title character in Carmilla and of Lucy in Dracula are similar. Additionally, both women sleepwalk. Stoker's Dr.
Abraham Van Helsing is similar to Le Fanu's vampire expert Baron Vordenburg: both characters investigate and catalyze actions in opposition to the vampire. The symptoms described in Carmilla and Dracula are highly comparable. Both titular antagonists, Carmilla and Dracula, pretend to be descendants of much older nobles bearing the same names but are eventually revealed to be those nobles themselves. With Dracula, however, this is left ambiguous. Although Van Helsing (a character with a slightly awkward grasp of the English language) states that he "must, indeed, have been that Voivode Dracula who won his name against the Turk, over the great river on the very frontier of Turkey-land", the next statement begins with "If it be so", thereby leaving a thin margin of ambiguity. Dracula's Guest, a short story by Stoker believed to have been a deleted prologue to Dracula, is also set in Styria, where an unnamed Englishman takes shelter in a mausoleum from a storm. There he meets a female vampire named Countess Dolingen von Gratz. In popular culture Books (alphabetical by author's last name) In the Japanese light novel series High School DxD, written by Ichiei Ishibumi and illustrated by Miyama-Zero, the vampires are depicted as having a monarchical society divided between two major factions, each under the rule of its respective royal family: the Tepes and the Carmilla. The Carmilla faction favors a matriarchal society for the world of vampires, while the Tepes prefer a patriarchal government. Carmilla: A Dark Fugue is a short book by David Brian. Although the story is primarily centered on the exploits of General Spielsdorf, it nonetheless relates directly to events which unfold within Carmilla: The Wolves of Styria.
Theodora Goss' 2018 novel European Travel for the Monstrous Gentlewoman (the second in The Extraordinary Adventures of the Athena Club series) features a heroic Carmilla and her partner Laura Hollis aiding the Athena Club in their fight against Abraham Van Helsing. Tor.com's review of the novel states, "It's utterly delightful to see Goss's version of Carmilla and Laura, a practically married couple living happily in the Austrian countryside, and venturing forth to kick ass and take names." The novel Carmilla: The Wolves of Styria is a re-imagining of the original story. It is a derivative re-working, listed as being authored by J.S. Le Fanu and David Brian. Rachel Klein's 2002 novel The Moth Diaries features several excerpts from Carmilla, as the novella figures into the plot of Klein's story, and both deal with similar subject matter and themes. Carmilla: The Return by Kyle Marffin is a sequel to Carmilla. Erika McGann's book The Night-Time Cat and the Plump Grey Mouse: A Trinity College Tale depicts Carmilla and Dracula as summoned by the ghosts of their respective creators, Sheridan Le Fanu and Bram Stoker, to fight one another, witnessed by the book's titular cat, Pangur Bán, whom Carmilla attempts to befriend after she and Dracula forfeit their fight. Ro McNulty's novella Ruin: The Rise of the House of Karnstein is a sequel to Le Fanu's novella and takes place over 100 years later. Carmilla continues to play games with mortals, inserting herself into their lives and breaking them to her will. She settles herself around a teacher and his family, feeding on his baby daughter. A vampire named Baron Karnstein appears in Kim Newman's novel Anno Dracula (1992). Carmilla herself is mentioned several times as a former friend (until her death at the hands of vampire hunters) of the book's vampire heroine Geneviève. Some short stories set in the Anno Dracula universe have also included Carmilla.
Author Anne Rice has cited Carmilla as an inspiration for The Vampire Chronicles, her series of novels which ran from 1976 to 2018, beginning with Interview with the Vampire. Carmilla and Laura by S.D. Simper is a retelling of the original story with some alterations, including more explicit romance between the leads. Robert Statzer's novel To Love a Vampire () depicts Carmilla's encounter with a young Dr. Abraham Van Helsing, the hero of Bram Stoker's Dracula, during his days as a medical student. Originally published as a serial in the pages of Scary Monsters Magazine from March 2011 to June 2013, a revised version of To Love a Vampire was reprinted in paperback and Kindle editions in June 2018. Comics (alphabetical by series title) In 1991, Aircel Comics published a six-issue black-and-white miniseries of Carmilla by Steven Jones and John Ross. It was based on Le Fanu's story and billed as "The Erotic Horror Classic of Female Vampirism". The first issue was printed in February 1991. The first three issues adapted the original story, while the latter three were a sequel set in the 1930s. Carmilla narrates the comic-book magazine series Terrifying Tales of Enchantment and Horror: Vampiress Carmilla, a series inspired by the works of the original Warren Publishing company. She is portrayed as one of Dracula's wives, resuscitated by the Devil to take care of various creatures of hell by telling tales of classical horror tropes in a vintage comic-book style, appearing on the cover and inside the books from issue #1 (2021) onward. In the first story arc of Dynamite Entertainment's revamp of Vampirella, a villainous vampire named Le Fanu inhabits the basement of a Seattle nightclub called Carmilla. Film (chronological) Danish director Carl Dreyer loosely adapted Carmilla for his film Vampyr (1932) but deleted any references to lesbian sexuality. The credits of the original film say that the film is based on In a Glass Darkly.
This collection contains five tales, one of which is Carmilla. In fact, the film draws its central character, Allan Gray, from Le Fanu's Dr. Hesselius, and the scene in which Gray is buried alive is drawn from "The Room in the Dragon Volant". Dracula's Daughter (1936), Universal Pictures' sequel to the 1931 film Dracula, was loosely based on Carmilla. French director Roger Vadim's Et mourir de plaisir (literally And to Die of Pleasure, shown in the UK and US as Blood and Roses, 1960) is based on Carmilla and is considered one of the greatest films of the vampire genre. The Vadim film thoroughly explores the lesbian implications behind Carmilla's selection of victims and boasts cinematography by Claude Renoir. The film's lesbian eroticism was, however, significantly cut for its US release. Annette Stroyberg, Elsa Martinelli and Mel Ferrer star in the film. A more-or-less faithful adaptation starring Christopher Lee was produced in Italy in 1963 under the title La cripta e l'incubo (Crypt of the Vampire in English). The character of Laura is played by Adriana Ambesi, who fears herself possessed by the spirit of a dead ancestor, played by Ursula Davis (also known as Pier Anna Quaglia). The British Hammer Film Productions also produced a fairly faithful adaptation of Carmilla titled The Vampire Lovers (1970), with Ingrid Pitt in the lead role, Madeline Smith as her victim/lover, and Hammer regular Peter Cushing. It is the first installment of the Karnstein Trilogy. The Blood Spattered Bride (La novia ensangrentada), a 1972 Spanish horror film written and directed by Vicente Aranda, is based on the text.
infants born with a 46,XX genotype whose genitalia are affected by congenital adrenal hyperplasia and who are treated surgically with vaginoplasty, which often reduces the size of the clitoris without its total removal. The atypical size of the clitoris is due to an endocrine imbalance in utero. Other reasons for the surgery include issues involving a microphallus and Mayer-Rokitansky-Kuster disorder. Treatments on children raise human rights concerns. Technique Clitoridectomy surgical techniques are used to remove an invasive malignancy that extends to the clitoris. Standard surgical procedures are followed in these cases, including evaluation and biopsy. Other factors that affect the technique selected are age, other existing medical conditions, and obesity. Further considerations are the probability of extended hospital care and the development of infection at the surgical site. The surgery proceeds under general anesthesia, and prior to the vulvectomy/clitoridectomy an inguinal lymphadenectomy is performed. The surgical site extends one to two centimeters beyond the boundaries of the malignancy. Superficial lymph nodes may also need to be removed. If the malignancy is present in muscular tissue in the region, it is also removed. In some cases, the surgeon is able to preserve the clitoris even though the malignancy may be extensive. The cancerous tissue is removed and the incision is closed. Postoperative care may employ suction drainage to allow the deeper tissues to heal toward the surface. Follow-up after surgery includes stripping the drainage device to prevent blockage. A typical hospital stay can be up to two weeks. The site of the surgery is left unbandaged to allow for frequent examination. A possible complication is the development of lymphedema, though preserving the saphenous vein during the surgery helps prevent this.
In some instances, foot elevation, diuretic medication and compression stockings can reduce the build-up of fluid. In a clitoridectomy for intersex infants, the clitoris is often reduced instead of removed. The surgeon cuts the shaft of the elongated phallus and sews the glans and preserved nerves back onto the stump. In a less common surgery called clitoral recession, the surgeon hides the clitoral shaft under a fold of skin so only the glans remains visible. Society and culture General While much feminist scholarship has described clitoridectomy as a practice aimed at controlling women's sexuality, the historic emergence of the practice in ancient European and Middle Eastern cultures may have derived from ideas about intersex people and the policing of boundaries between the sexes. Isaac Baker Brown (1812–1873), an English gynaecologist who was president of the Medical Society of London, believed that the "unnatural irritation" of the clitoris caused epilepsy, hysteria, and mania, and he worked "to remove [it] whenever he had the opportunity of doing so", according to his obituary in the Medical Times and Gazette. Peter Lewis Allen writes that Brown's views caused outrage, and he died penniless after being expelled from the Obstetrical Society. Occasionally, in American and English medicine of the nineteenth century, circumcision was done as a cure for insanity. Some believed that mental and emotional disorders were related to female reproductive organs and that removing the clitoris would cure the neurosis. This treatment was discontinued in 1867. Aesthetics may determine clitoral norms. A lack of ambiguity of the genitalia is seen as necessary in the assignment of a sex to infants, and therefore in judging whether a child's genitalia are normal, but what is ambiguous or normal can vary from person to person. Sexual behavior is another reason for clitoridectomies.
Author Sarah Rodriguez stated that the history of medical textbooks has indirectly created accepted ideas about the female body. Medical and gynecological textbooks are also at fault in the way the clitoris is described in comparison to the penis. The importance and distinctiveness of the clitoris is downplayed because it is seen as "a less significant organ, since anatomy texts compared the penis and the clitoris in only one direction." Rodriguez said that the penis created the framework for describing the sexual organ. Not all historical examples of clitoral surgeries should be assumed to be clitoridectomy (removal of the clitoris). In the 1930s, the French psychoanalyst Marie Bonaparte studied African clitoral surgical practices and showed that these often involved removal of the clitoral hood, not the clitoris. She also had surgery done on her own clitoris by the Viennese surgeon Dr Halban, which entailed cutting the suspensory ligament of the clitoris to permit it to sit closer to her vaginal opening. These sorts of clitoral surgeries, far from reducing women's sexual pleasure, actually appear aimed at making coitus more pleasurable for women, though it is unclear whether that is ever their actual outcome. Human rights concerns Clitoridectomies are the most common form of female genital mutilation. The World Health Organization (WHO) estimates that clitoridectomies have been performed on 200 million girls and women who are currently alive. The regions where most clitoridectomies take place are Asia, the Middle East and west, north and east Africa. The practice also exists among migrants originating from these regions. Most of the surgeries are performed for cultural or religious reasons. Clitoridectomy of women with intersex conditions is controversial when it takes place during childhood or under duress. Intersex women exposed to
the French cabale from the medieval Latin cabbala, and was known early in the 17th century through usages linked to Charles II and Oliver Cromwell. By the middle of the 17th century, it had developed further to mean an intrigue entered into by a small group, and also referred to the group of people so involved, i.e. a semi-secret political clique. There is a theory that the term took on its present meaning from a group of ministers formed in 1668, the "Cabal ministry" of King Charles II of England. Members included Sir Thomas Clifford, Lord Arlington, the Duke of Buckingham, Lord Ashley and Lord Lauderdale, whose initial letters coincidentally spelled CABAL, and who were the signatories of the public Treaty of Dover that allied England to France in a prospective war against the Netherlands and served as a cover for the Secret Treaty of Dover. The theory that the word originated as an acronym from the names of this group of ministers is a folk etymology, although the coincidence was noted at the time and may have popularized the word's use. Usage in the Netherlands In Dutch, the word kabaal (also kabale or cabale) was used during the 18th century in the same way. The Friesche Kabaal (the Frisian Cabal) denoted the Frisian pro-Orange nobility which supported the Stadholderate.

and usually unbeknownst to those who are outside their group. The use of this term usually carries negative connotations of political purpose, conspiracy and secrecy. It can also refer to a secret plot or a clique, or it may be used as a verb (to form a cabal or secretly conspire). Etymology The term cabal is derived from Kabbalah (a word with numerous spelling variations), the Jewish mystical and spiritual interpretation of the Hebrew scripture (קַבָּלָה). In Hebrew it means "reception" or "acceptance", denoting the sod (secret) level of Jewish exegesis. In European culture (Christian Cabala, Hermetic Qabalah) it became associated with occult doctrine or a secret.
c2, with more recent examples designated by their reduced state R-band maximum, e.g. cyt c559. Structure and function The heme group is a highly conjugated ring system (which allows its electrons to be very mobile) surrounding an iron ion. The iron in cytochromes usually exists in a ferrous (Fe2+) or ferric (Fe3+) state, with a ferryl (Fe4+) state found in catalytic intermediates. Cytochromes are thus capable of performing electron transfer reactions and catalysis by reduction or oxidation of their heme iron. The cellular location of cytochromes depends on their function. They can be found as globular proteins and membrane proteins. In the process of oxidative phosphorylation, the globular protein cytochrome c is involved in the electron transfer from the membrane-bound complex III to complex IV. Complex III itself is composed of several subunits, one of which is a b-type cytochrome while another is a c-type cytochrome. Both domains are involved in electron transfer within the complex. Complex IV contains a cytochrome a/a3-domain that transfers electrons and catalyzes the

They are classified according to the type of heme and its mode of binding. Four varieties are recognized by the International Union of Biochemistry and Molecular Biology (IUBMB): cytochromes a, cytochromes b, cytochromes c and cytochrome d. Cytochrome function is linked to the reversible redox change from the ferrous (Fe(II)) to the ferric (Fe(III)) oxidation state of the iron found in the heme core. In addition to the IUBMB classification into four cytochrome classes, several additional classifications such as cytochrome o and cytochrome P450 can be found in the biochemical literature. History Cytochromes were initially described in 1884 by Charles Alexander MacMunn as respiratory pigments (myohematin or histohematin). In the 1920s, Keilin rediscovered these respiratory pigments and named them the cytochromes, or "cellular pigments".
He classified these heme proteins on the basis of the position of their lowest energy absorption band in their reduced state, as cytochromes a (605 nm), b (≈565 nm), and c (550 nm). The ultra-violet (UV) to visible spectroscopic signatures of hemes are still used to identify heme type from the reduced bis-pyridine-ligated state, i.e., the pyridine hemochrome method. Within each class, cytochrome a, b, or c, early cytochromes are |
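Keilin's scheme assigns a class by the position of the lowest-energy absorption band in the reduced state (a ≈ 605 nm, b ≈ 565 nm, c ≈ 550 nm). As a toy illustration only, not laboratory practice, the lookup can be sketched as a nearest-band match; the ±10 nm tolerance here is an assumption for illustration, not a spectroscopic standard:

```python
# Illustrative sketch: map a reduced-state absorption maximum (nm) to the
# nearest classical cytochrome class from Keilin's scheme.
KEILIN_BANDS_NM = {"a": 605, "b": 565, "c": 550}

def classify_cytochrome(peak_nm: float, tolerance_nm: float = 10.0) -> str:
    """Return the class whose characteristic band lies closest to peak_nm,
    or 'unassigned' if no band is within tolerance_nm (assumed window)."""
    name, band = min(KEILIN_BANDS_NM.items(), key=lambda kv: abs(kv[1] - peak_nm))
    return name if abs(band - peak_nm) <= tolerance_nm else "unassigned"

print(classify_cytochrome(550))  # c
print(classify_cytochrome(565))  # b
print(classify_cytochrome(605))  # a
print(classify_cytochrome(700))  # unassigned
```

Note that real designations such as cyt c559 record the protein's own band maximum rather than a nearest-class match, so a simple lookup like this is only a mnemonic for the three classical bands.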
Seymour were interviewed on Rove Live and the band, with Hart and Sherrod, performed "Don't Stop Now" to promote the new album, which was titled Time on Earth. The single was a minor hit in Australia and the UK. The album was released worldwide in June and July. It topped the album chart in New Zealand and made number 2 in Australia and number 3 in the UK. On 6 December 2008 Crowded House played the Homebake festival in Sydney, with warm-up gigs at small venues in Hobart, Melbourne and Sydney. For these shows the band were augmented by multi-instrumentalist Don McGlashan and Neil's younger son, Elroy Finn, on guitar. On 14 March 2009 the band joined Neil's older son, Liam Finn, on stage for three songs at the Sound Relief concert in Melbourne. Intriguer, second split and Sydney Opera House shows (2009–2018) Crowded House began recording the follow-up to Time on Earth in April 2009 at Finn's own Roundhead Studios. The album, Intriguer, was produced by Jim Scott, who had worked on The Sun Came Out by Neil's 7 Worlds Collide project. In August 2009, Finn travelled to Los Angeles to record some overdubs at Scott's Los Angeles studio before they began mixing tracks. The album was released in June 2010, in time for the band's appearance at the West Coast Blues & Roots Festival near Perth. Finn stated that the album contains some "unexpected twists and turns" and some songs that "sound like nothing we've done before". Intriguer topped the Australian album chart, reached number 3 in New Zealand and number 12 in the UK. Crowded House undertook an extensive world tour in 2010 in support of Intriguer. This was the first album where the band regularly interacted with fans via the internet on their own relaunched website, Twitter and Facebook. The band sold recordings of the shows on the Intriguer tour on USB flash drives and made individual live tracks available for free download.
A new compilation album, The Very Very Best of Crowded House, was released in October 2010 to celebrate the band's 25th anniversary. It includes 19 of the band's greatest hits and is also available in a box set with a 25-track DVD of their music videos. A deluxe digital version, available for download only, has 32 tracks including a rare 1987 live recording of the band's version of the Hunters & Collectors song "Throw Your Arms Around Me". No mention of this album has been made on the band's official website or Twitter page, which suggests that the band was not involved with its release. Following the success of the album She Will Have Her Way in 2005, a second album of cover versions of Finn Brothers songs (including Crowded House songs) was released on 12 November 2010. Entitled He Will Have His Way, it features tracks performed entirely by Australasian male artists. In November 2011 an Australian tour featured artists involved with the She Will Have Her Way and He Will Have His Way projects, including Paul Dempsey, Clare Bowditch, Seeker Lover Keeper (Sarah Blasko, Sally Seltmann and Holly Throsby), Alexander Gow (Oh Mercy) and Lior. The band played what would be their last concert for over five years at the A Day on the Green festival in Auckland on 27 February 2011. Former Crowded House drummer Peter Jones died from brain cancer on 18 May 2012, aged 49. A statement issued by the band described him as "a warm-hearted, funny and talented man, who was a valuable member of Crowded House". In September 2015, the song "Help Is Coming" from the Afterglow album was released as a download and limited-edition 7" single to raise money for the charity Save the Children. The B-side, "Anthem", was a previously unreleased track, recorded at the same demo session as "Help Is Coming" in 1995, with vocals added in 2015. Peter Jones plays drums on both songs. The money was to be used to provide shelter, water, sanitation and hygiene for refugees in Syria, Lebanon and Iraq.
Neil Finn said of "Help Is Coming": "It was always a song about refugees, even if at the time I was thinking about the immigrants setting off on ships from Europe to America, looking for a better life for their families. There is such a huge scale and urgency to the current refugee crises that barely a day goes by without some crushing image or news account to confront us. We can't be silent any more."<ref>William, Helen "Charity single Help Is Coming for Syrian refugees to have VAT waived as celebrities rally for help" Mirror 11 September 2015</ref> Neil Finn confirmed in a 2016 interview with the Dutch newspaper Volkskrant that Crowded House had been on indefinite hiatus since the end of the Intriguer tour. Later that year, however, he and Seymour announced a series of concerts at the Sydney Opera House to mark the 20th anniversary of the Farewell to the World show (24 November 1996). The band, with the same line-up as its initial reunion and with Tim Finn as a guest, performed four shows between 24 and 27 November 2016. Around the same time, each of the band's seven albums (including the rarities collection Afterglow) was reissued in a deluxe 2-CD format with bonus tracks including demos, live recordings, alternate mixes, B-sides and outtakes. In April 2018, Neil Finn joined Fleetwood Mac, along with Mike Campbell of Tom Petty and the Heartbreakers, as a full-time member in the wake of Lindsey Buckingham's departure from the band. Reformation, new line-up and Dreamers Are Waiting (2019–present) In August 2019, Crowded House announced a reunion show at the 2020 Byron Bay Bluesfest. Shortly afterwards, Mark Hart announced that he would not be involved in the group's reunion. Finn confirmed Hart's departure on his podcast Fangradio, noting that he "love[s] Hart dearly as a friend, as a contributor and a collaborator" and that "all will be revealed... trust that good thought and good heart gets put into all of these decisions."
In December 2019, Neil Finn announced that the new Crowded House line-up would consist of himself, Seymour, the band's original producer Mitchell Froom and his sons Liam and Elroy. He added that they were making a new studio album, their first since 2010's Intriguer. Due to the COVID-19 pandemic, the band's planned 2020 concerts were rescheduled to 2021, and later again to 2022. On 15 October 2020, the band released "Whatever You Want", their first single in over a decade, along with an accompanying music video starring Mac DeMarco. On 17 February 2021, the band shared another single, "To the Island", the second single from their seventh studio album, Dreamers Are Waiting, which was announced the same day for release on 4 June 2021. The band supported the single with a national tour of New Zealand in March 2021. On 19 August 2021, the band performed "To the Island" on CBS's The Late Show with Stephen Colbert. On 2 December 2021, the band announced that it would be touring Australia in 2022, with six shows around the country, including an appearance on the 2022 Bluesfest lineup. Style Songwriting and musical influences As the primary songwriter for the band, Neil Finn has always set the tone for the band's sound. AllMusic said that Finn "has consistently proven his knack for crafting high-quality songs that combine irresistible melodies with meticulous lyrical detail." Neil's brother Tim was an early and important musical influence. Neil first saw Tim play with Split Enz in 1972, and said "that performance and those first songs made a lasting impression on me." His mother was another significant musical influence, encouraging him to listen to a variety of genres, including Irish folk music and Māori music. She would play piano at family parties and encourage Neil and Tim to accompany her.
Album covers, costumes and set design Bassist Nick Seymour, who is also an artist, designed or co-designed all of the band's album covers and interior artwork. He also designed some of the costumes worn by the group, notably those from the cover of the group's debut album Crowded House. Seymour collaborated with Finn and Hester on the set design of some of their early music videos, including "Don't Dream It's Over" and "Better Be Home Soon". Since the band reunited, Seymour has again designed their album covers. The majority of the covers for the band's singles were not designed by Seymour. The artwork for "Pineapple Head" was created by Reg Mombassa of Mental As Anything. For the first four albums Mombassa and Noel Crombie, who had been the main designer of Split Enz's artwork, assisted Seymour in creating sets and costumes. For the Farewell to the World concerts Crombie designed the set, while Mombassa and Seymour designed promotional materials and artwork.

Band members

Current members
Neil Finn – lead vocals, guitar, keyboards, percussion (1985–1996, 2006–2011, 2016, 2020–present)
Nick Seymour – bass, backing vocals, keyboards (1985–1989, 1989–1996, 2006–2011, 2016, 2020–present)
Mitchell Froom – keyboards (2020–present)
Liam Finn – guitar, drums, backing vocals (2020–present; touring member 2007–2008)
Elroy Finn – drums, backing vocals, guitar, keyboards (2020–present; touring member 2008, 2016)

Former members
Craig Hooper – guitars, backing vocals (1985)
Paul Hester – drums, percussion, keyboards, backing and lead vocals (1985–1994, 1996; died 2005)
Tim Finn – lead and backing vocals, guitars, keyboards (1990–1991; live guest 1996, 2016)
Peter Jones – drums (1994–1996; died 2012)
Mark Hart – guitars, keyboards, backing vocals (1992–1996, 2007–2011, 2016; touring member 1989–1992)
Matt Sherrod – drums, percussion, backing vocals (2007–2011, 2016)

Former touring musicians
Gill Civil – keyboards (1986)
Miffy Smith – keyboards (1986)
Eddie Rayner – keyboards (1987, 1988)
Mike Gubb – keyboards (1988)
Wally Ingram – drums (1994)
Jules Bowen – keyboards (1994–1996)
Davey Lane – guitars, keyboards, backing vocals (2007)
Don McGlashan – guitars, keyboards, mandolin, euphonium, vocals (2008)

Timeline

Discography
Studio albums:
Crowded House (1986)
Temple of Low Men (1988)
Woodface (1991)
Together Alone (1993)
Time on Earth (2007)
Intriguer (2010)
Dreamers Are Waiting (2021)

Awards
Crowded House has won several national and international awards. In Australia, the group has won 13 ARIA Awards from 36 nominations, including the inaugural Best New Talent in 1987. The majority of their wins were for their first two albums, Crowded House and Temple of Low Men. They won eight APRA Awards from eleven nominations and were nominated for the New Zealand Silver Scroll for "Don't Stop Now" in 2007. "Don't Dream It's Over" was named the seventh-best Australian song of all time in 2001. In 1987, Crowded House won the American MTV Video Music Award for Best New Artist for their song "Don't Dream It's Over", which was also nominated for three other awards. In 1994, the group was named International Group of the Year at the BRIT Awards. In 2009, "Don't Dream It's Over" was ranked number fifty on the Triple J Hottest 100 of All Time, voted by the Australian public. In November 2016 Crowded House was inducted into the ARIA Hall of Fame, 30 years after their formation.

See also
Music of Australia
Music of New Zealand
Split Enz

Further reading
Chunn, Mike, Stranger Than Fiction: The Life and Times of Split Enz, GP Publications, 1992.
Chunn, Mike, Stranger Than Fiction: The Life and Times of Split Enz (revised, ebook edition), Hurricane Press, 2013.
References
General
Specific
External links

family commitments. Early albums (1986–1990) Thanks to their Split Enz connection, the newly formed Crowded House had an established Australasian fanbase. They began by playing at festivals in Australia and New Zealand and released their debut album, Crowded House, in August 1986. Capitol Records initially failed to see the band's potential and gave them only low-key promotion, forcing the band to play at small venues to try to gain attention. The album's first single, "Mean to Me", reached the Australian Kent Music Report Singles Chart top 30 in June. It failed to chart in the US, but moderate American airplay introduced US listeners to the group. The next single, "Don't Dream It's Over", was released in October 1986 and proved an international hit, reaching number two on the US Billboard Hot 100 and number one in Canada. New Zealand radio stations initially gave the song little support until months later when it became successful internationally. Ultimately, the song reached number one on the New Zealand singles chart and number eight in Australia. It remains the group's most commercially successful song. In March 1987, the group were awarded "Best New Talent", along with "Song of the Year" and "Best Video" awards for "Don't Dream It's Over", at the inaugural ARIA Music Awards. The video also earned the group the MTV Video Music Award for Best New Artist that year.
The song has often been covered by other artists and gave Paul Young a hit single in 1991. It was also used for a New Zealand Tourism Board advertisement in its "100% Pure New Zealand" worldwide promotion from October 2005. In May 2001, "Don't Dream It's Over" was voted seventh in a poll of the best Australian songs of all time by the Australasian Performing Right Association. In June 1987, nearly a year after its release, Crowded House finally reached number one on the Kent Music Report Album Charts. It also reached number three in New Zealand and number twelve on the US Billboard album chart. The follow-up to "Don't Dream It's Over", "Something So Strong", was another global hit, reaching the top 10 in New Zealand, the US, and Canada. "World Where You Live" and "Now We're Getting Somewhere" were also released as singles with chart success. As the band's primary songwriter, Neil Finn was under pressure to create a second album to match their debut, and the band joked that one potential title for the new release was Mediocre Follow-Up. Eventually titled Temple of Low Men, their second album was released in July 1988 with strong promotion by Capitol Records. The album did not fare as well as their debut in the US, only reaching number 40, but it achieved Australasian success, reaching number one in Australia and number two in New Zealand. The first single, "Better Be Home Soon", peaked at number two on both the Australian and New Zealand singles charts and reached the top 50 in the US, though the following four singles were less successful. Crowded House undertook a short tour of Australia and Canada to promote the album, with Eddie Rayner on keyboards. Multi-instrumentalist Mark Hart, who would eventually become a full band member, replaced Rayner in January 1989. After the tour, Finn fired Seymour from the band.
Music journalist Ed Nimmervoll claimed that Seymour's temporary departure was because Finn blamed him for causing his writer's block; however, Finn cited "artistic differences" as the reason. Seymour said that after a month he contacted Finn and they agreed that he would return to the band. Early 1990s (1991–1994) Crowded House took a break after the Canadian leg of the Temple of Low Men tour. Neil Finn and his brother Tim recorded songs they had co-written for their own album, Finn. Following the recording sessions with Tim, Neil began writing and recording a third Crowded House album with Hester and Seymour, but these tracks were rejected by the record company, so Neil asked Tim if Crowded House could use the Finn songs. Tim jokingly agreed on the proviso that he become a member, which Neil apparently took literally. With Tim as an official member, the band returned to the studio. The new tracks, as well as some from the previously rejected recordings, were combined to make Woodface, which was released in July 1991. The album features eight tracks co-written by Neil and Tim, on which the brothers harmonise on lead vocals, except for the sombre "All I Ask", on which Tim sang lead. The track was later used in AIDS awareness commercials in Australia. Five of the album's tracks were Neil's solo compositions and two were by Hester: the exuberant "Italian Plastic", which became a crowd favourite at concerts, and the hidden track "I'm Still Here". "Chocolate Cake", a humorous comment on American excesses that was not taken well by some US critics and sections of the American public, was released in June 1991 as the first single. Perhaps unsurprisingly it failed to chart on the US Hot 100; however, it reached number two on Billboard's Modern Rock Tracks chart. The song peaked at number seven in New Zealand and reached the top 20 in Australia. The second single, "Fall at Your Feet", was less successful in Australia and New Zealand but did at least reach the US Hot 100.
The album reached number one in New Zealand, number two in Australia, number six in the UK and made the top 20 in several European countries. The third single from Woodface, "Weather With You", peaked at number 7 in early 1992, giving the band their highest UK chart placement. By contrast, the album had limited success in the US, only reaching number 83 on the Billboard 200 Album Chart. Tim Finn left Crowded House during the Woodface tour in November 1991, part-way through the UK leg. Performances on this tour, at the Town and Country Club in London, were recorded live and given a limited release in Australia, while individual songs from those shows were released as B-sides of singles in some countries. In June 1993 the New Zealand Government recommended that the Queen award an OBE to Neil and Tim Finn for their contribution to the music of New Zealand. For their fourth album, Together Alone, Crowded House used producer Martin Glover (aka "Youth") and invited touring musician Mark Hart (guitar and keyboards) to become a permanent band member. The album was recorded at Karekare Beach, New Zealand, which gave its name to the opening track, "Kare Kare". The album was released in October 1993 and sold well internationally on the strength of the lead single "Distant Sun" and its follow-up "Private Universe". It topped the New Zealand Album Chart, reached number 2 in Australia and number 4 in the UK. "Locked Out" was the album's first US single and received airplay on MTV and VH1. This track and "My Sharona" by The Knack, both of which were included on the soundtrack of the film Reality Bites, were bundled together on a jukebox single to promote the film soundtrack. Saying farewell (1994–1996) Crowded House were midway through a US tour when Paul Hester quit the band on 15 April 1994. He flew home to Melbourne to await the birth of his first child and indicated that he required more time with his family.
Wally Ingram, drummer for support act Sheryl Crow, temporarily filled in until a replacement, Peter Jones (ex-Harem Scarem, Vince Jones, Kate Ceberano's Septet), was found. After the tour, the Finn Brothers released their album Finn in November 1995. In June 1996, at a press conference to announce the release of their greatest hits album Recurring Dream, Neil revealed that Crowded House were to disband. The June 1996 concerts in Europe and Canada were to be their final performances. Recurring Dream contained four songs from each of the band's studio albums, along with three new songs. The album debuted at number one in Australia, New Zealand and the UK in July 1996. Early copies included a bonus CD of live material. The album's three new songs, which were released as singles, were "Instinct", "Not the Girl You Think You Are" and "Everything Is Good for You", which featured backing vocals from Pearl Jam's Eddie Vedder. Paul Hester returned to the band to play drums on the three new tracks. Worried that their goodbye had been too low-key and had disregarded their home fans, the band performed the Farewell to the World concert on the steps of the Sydney Opera House on 24 November 1996, which raised funds for the Sydney Children's Hospital. The concert featured the line-up of Neil Finn, Nick Seymour, Mark Hart and Paul Hester. Tim Finn and Peter Jones both made guest appearances. Support bands on the day were Custard, Powderfinger and You Am I. The concert had one of the highest live audiences in Australian history, with the crowd estimated at between 120,000 and 250,000 people. Farewell to the World was released on VHS in December 1996. In 2007, a double CD and a DVD were issued to commemorate the concert's tenth anniversary. The DVD featured newly recorded audio commentary by Finn, Hart and Seymour and other new bonus material. Between farewell and reunion (1996–2006) Following the 1996 break-up of Crowded House, the members embarked upon a variety of projects.
Neil Finn released two solo studio albums, Try Whistling This (1998) and One Nil (2001), as well as two live albums, Sessions at West 54th (2000) and 7 Worlds Collide (2001). 7 Worlds Collide saw him performing with guest musicians including Eddie Vedder, Johnny Marr, Ed O'Brien and Phil Selway of Radiohead, Tim Finn, Sebastian Steinberg, Lisa Germano and Betchadupa (featuring his son Liam Finn). A double CD and DVD of the shows were released in November 2001. Tim Finn had resumed his solo career after leaving the group in 1992 and he also worked with Neil on a second Finn Brothers album, Everyone Is Here, which was released in 2004. Paul Hester joined The Finn Brothers on stage for three songs at their Palais Theatre show in Melbourne at the end of 2004. Nick Seymour also joined them on stage in Dublin, where he was living, in 2004. Peter Jones and Nick Seymour joined Australian group Deadstar for their second album, Milk, in 1997. Seymour later worked as a record producer in Dublin, producing Irish group Bell X1's debut album, Neither Am I in 2000. Mark Hart rejoined Supertramp in the late 1990s and later toured with Ringo Starr & His All-Starr Band. In 2001 he released a solo album, Nada Sonata. Paul Hester worked with children's entertainers The Wiggles, playing "Paul the Cook". He also had his own ABC show Hessie's Shed in Australia from late 1997. He formed the band Largest Living Things, which was the name rejected by Capitol Records in favour of Crowded House. It was on Hessie's Shed that Finn, Hester and Seymour last shared a stage, on an episode filmed as part of Finn's promotion for his solo album Try Whistling This in 1998. Finn and Hester performed "Not the Girl You Think You Are" with Largest Living Things, before being joined by Seymour for "Sister Madly" and a version of Paul Kelly's "Leaps and Bounds", which also featured Kelly on vocals. In late 2003, Hester hosted the series Music Max's Sessions. 
Hester and Seymour were reunited when they both joined singer-songwriter Matt O'Donnell's Melbourne-based group Tarmac Adam. The band released one album, 2003's Handheld Torch, which was produced by
infidelities and partly to her affair with her 16-year-old stepson, Bertrand de Jouvenel. In 1925 she met Maurice Goudeket, who became her final husband; the couple stayed together until her death. Colette was by then an established writer (The Vagabond had received three votes for the prestigious Prix Goncourt). The decades of the 1920s and 1930s were her most productive and innovative period. Set mostly in Burgundy or Paris during the Belle Époque, her work focused on married life and sexuality. It was frequently quasi-autobiographical: Chéri (1920) and Le Blé en Herbe (1923) both deal with love between an aging woman and a very young man, a situation reflecting her relationship with Bertrand de Jouvenel and with her third husband Goudeket, who was 16 years her junior. La Naissance du Jour (1928) is her explicit criticism of the conventional lives of women, expressed in meditation on age and the renunciation of love by the character of her mother, Sido. By this time Colette was frequently acclaimed as France's greatest woman writer. "It... has no plot, and yet tells of three lives all that should be known", wrote Janet Flanner of Sido (1929). "Once again, and at greater length than usual, she has been hailed for her genius, humanities and perfect prose by those literary journals which years ago... lifted nothing at all in her direction except the finger of scorn." During the 1920s she was associated with the Jewish-Algerian writer Elissa Rhaïs, who adopted a Muslim persona in order to market her novels. Last years, 1940–1954 Colette was 67 years old when the Germans defeated and occupied France, and she remained in Paris, in her apartment in the Palais-Royal. Her husband Maurice Goudeket, who was Jewish, was arrested by the Gestapo in December 1941, and although he was released after seven weeks through the intervention of the French wife of the German ambassador, Colette lived through the rest of the war years with the anxiety of a possible second arrest. 
During the Occupation she produced two volumes of memoirs, Journal à Rebours (1941) and De ma Fenêtre (1942; the two were issued in English in 1975 as Looking Backwards). She wrote lifestyle articles for several pro-Nazi newspapers (cf. Colette the journalist) and her novel Julie de Carneilhan (1941) contains many anti-Semitic slurs. In 1944, Colette published what became perhaps her most famous work, Gigi, which tells the story of sixteen-year-old Gilberte ("Gigi") Alvar. Born into a family of demimondaines, Gigi is trained as a courtesan to captivate a wealthy lover but defies the tradition by marrying him instead. In 1949 it was made into a French film starring Danièle Delorme and Gaby Morlay, then in 1951 adapted for the stage with the then-unknown Audrey Hepburn in the title role, picked by Colette personally; the 1958 Hollywood musical movie, starring Leslie Caron and Louis Jourdan, with a screenplay by Alan Jay Lerner and a score by Lerner and Frederick Loewe, won the Academy Award for Best Picture. In the postwar years, Colette became a famous public figure, crippled by arthritis and cared for by Goudeket, who supervised the preparation of her Œuvres Complètes (1948–1950). She continued to write during those years, bringing out L'Etoile Vesper (1944) and Le Fanal Bleu (1949), in which she reflected on the problems of a writer whose inspiration is primarily autobiographical. She was nominated by Claude Farrère for the Nobel Prize in Literature in 1948. Colette the journalist Colette's first pieces of journalism (1895–1900) were written in collaboration with her husband, Gauthier-Villars: music reviews for La Cocarde, a daily founded by Maurice Barrès, and a series of pieces for La Fronde. Following her divorce from Gauthier-Villars in 1910, she wrote independently for a wide variety of publications, gaining considerable renown for her articles covering social trends, theater, fashion, and film, as well as crime reporting.
In December 1910, Colette agreed to write a regular column in the Paris daily Le Matin, at first under a pseudonym, then as "Colette Willy". One of her editors was Henry de Jouvenel, whom she married in 1912. By 1912, Colette had taught herself to be a reporter: "You have to see and not invent, you have to touch, not imagine... because, when you see the sheets [at a crime scene] drenched in fresh blood, they are a color you could never invent." In 1914, Colette was named Le Matin's literary editor. Colette's separation from Jouvenel in 1923 forced her to sever ties with Le Matin. Over the next three decades her articles appeared in over two dozen publications,

wife and one of the most notorious libertines in Paris, he introduced his wife into avant-garde intellectual and artistic circles and encouraged her lesbian alliances. And it was he who chose the titillating subject matter of the Claudine novels: "the secondary myth of Sappho... the girls' school or convent ruled by a seductive female teacher" who "locked her [Claudine] in her room until she produced enough pages to suit him." Colette and Willy separated in 1906, although their divorce was not final until 1910. Colette had no access to the sizable earnings of the Claudine books – the copyright belonged to Willy – and until 1912 she pursued a stage career in music halls across France, sometimes playing Claudine in sketches from her own novels, earning barely enough to survive and often hungry and ill. To make ends meet, she turned more seriously to journalism in the 1910s. Around this time she also became an avid amateur photographer. This period of her life is recalled in La Vagabonde (1910), which deals with women's independence in a male society, a theme to which she would regularly return in future works.
During these years she embarked on a series of relationships with other women, notably with Natalie Clifford Barney and with the gender-ambiguous Mathilde de Morny, the Marquise de Belbeuf ("Max"), with whom she sometimes shared the stage. On 3 January 1907, an onstage kiss between Max and Colette in a pantomime entitled "Rêve d'Égypte" caused a near-riot, and as a result, they were no longer able to live together openly, although their relationship continued for another five years. In 1912, Colette married Henry de Jouvenel, the editor of Le Matin. A daughter, Colette de Jouvenel, nicknamed Bel-Gazou, was born to them in 1913. Writing career, 1920s and 1930s In 1920 Colette published Chéri, portraying love between an older woman and a much younger man. Chéri is the lover of Léa, a wealthy courtesan; Léa is devastated when Chéri marries a girl his own age and delighted when he returns to her, but after one final night together she sends him away again.
La Naissance du Jour (1928) is her explicit criticism of the conventional lives of women, expressed in meditation on age and the renunciation of love by the character of her mother, Sido. By this time Colette was frequently acclaimed as France's greatest woman writer. "It... has no plot, and yet tells of three lives all that should be known", wrote Janet Flanner of Sido (1929). "Once again, and at greater length than usual, she has been hailed for her genius, humanities and perfect prose by those literary journals which years ago... lifted nothing at all in her direction except the finger of scorn." During the 1920s she was associated with the Jewish-Algerian writer Elissa Rhaïs, who adopted a Muslim persona in order to market her novels. Last years, 1940–1954 Colette was 67 years old when the Germans defeated and occupied France, and she remained in Paris, in her apartment in the Palais-Royal. Her husband Maurice Goudeket, who was Jewish, was arrested by the Gestapo in December 1941, and although he was released after seven weeks through the intervention of the French wife of the German ambassador, Colette lived through the rest of the war years with the anxiety of a possible second arrest. During the Occupation she produced two volumes of memoirs, Journal à Rebours (1941) and De ma Fenêtre (1942; the two were issued in English in 1975 as Looking Backwards). She wrote life style articles for several pro-Nazi newspapers (cf Colette the Journalist) and her novel Julie de Carneilhan (1941) contains many anti-Semitic slurs. In 1944, Colette published what became perhaps her most famous work, Gigi, which tells the story of sixteen-year-old Gilberte ("Gigi") Alvar. Born into a family of demimondaines, Gigi is trained as a courtesan to captivate a wealthy lover but defies the tradition by marrying him instead. 
In 1949 it was made into a French film starring Danièle Delorme and Gaby Morlay, then in 1951 adapted for the stage with the then-unknown Audrey Hepburn in the title role, picked by Colette personally; the 1958 Hollywood musical film, starring Leslie Caron and Louis Jourdan, with a screenplay by Alan Jay Lerner and a score by Lerner and Frederick Loewe, won the Academy Award for Best Picture. In the postwar years, Colette became a famous public figure, crippled by arthritis and cared for by Goudeket, who supervised the preparation of her Œuvres Complètes (1948–1950). She continued to write during those years, bringing out L'Etoile Vesper (1944) and Le Fanal Bleu (1949), in which she reflected on the problems of a writer whose inspiration is primarily autobiographical. She was nominated by Claude Farrère for the Nobel Prize in Literature in 1948. Colette the journalist Colette's first pieces of journalism (1895–1900) were written in collaboration with her husband, Gauthier-Villars: music reviews for La Cocarde, a daily founded by Maurice Barrès, and a series of pieces for La Fronde. Following her divorce from Gauthier-Villars in 1910, she wrote independently for a wide variety of publications, gaining considerable renown for her articles covering social trends, theater, fashion, and film, as well as crime reporting. In December 1910, Colette agreed to write a regular column in the Paris daily Le Matin, at first under a pseudonym and then as "Colette Willy." One of her editors was Henry de Jouvenel, whom she married in 1912. By 1912, Colette had taught herself to be a reporter: "You have to see and not invent, you have to touch, not imagine ... because, when you see the sheets [at a crime scene] drenched in fresh blood, they are a color you could never invent." In 1914, Colette was named Le Matin's literary editor. Colette's separation from Jouvenel in 1923 forced her to sever ties with Le Matin.
Over the next three decades her articles appeared in over two dozen publications, including Vogue, Le Figaro, and Paris-Soir. During the German Occupation of France, Colette continued contributing to daily and weekly publications, a number of them collaborationist and pro-Nazi, including Le Petit Parisien, which became pro-Vichy after January 1941, and La Gerbe, a pro-Nazi weekly. Though her articles were not political in nature, Colette was sharply criticized at the time for lending her prestige to these publications and implicitly accommodating herself to the Vichy regime. Her November 26, 1942 article "Ma Bourgogne Pauvre" ("My Poor Burgundy") has been singled out by some historians as tacitly accepting some of the ultra-nationalist goals that hardline Vichyist writers espoused. After 1945, her journalism was sporadic, and her final pieces were more personal essays than reported stories. Over the course of her writing career, Colette published over 1,200 articles for newspapers, magazines, and journals. Death and legacy Upon her death, on 3 August 1954, she was refused a religious funeral by the Catholic Church on account of her divorces, but was given a state funeral, the first accorded to a French woman of letters, and was interred in Père-Lachaise cemetery. Colette was elected to the Belgian Royal Academy (1935) and the Académie Goncourt (1945, and President in 1949), and was made a Chevalier (1920) and Grand Officer (1953) of the Légion d'honneur. Colette's numerous biographers have proposed widely differing interpretations of her life and work over the decades.
for a series of genre portraits depicting southern black life. In 1940, he completed Tobacco Farmer, the portrait of a young black farmer in white overalls and a blue shirt with a youthful yet serious look upon his face, sitting in front of the landscape and buildings he works on and in. That same year Alston received a second round of funding from the Rosenwald Fund to travel South, and he spent extended time at Atlanta University. During the 1930s and early 1940s, Alston created illustrations for magazines such as Fortune, Mademoiselle, The New Yorker, Melody Maker and others. He also designed album covers for artists such as Duke Ellington and Coleman Hawkins. Alston became staff artist at the Office of War Information and Public Relations in 1940, creating drawings of notable African Americans. These images were used in over 200 black newspapers across the country by the government to "foster goodwill with the black citizenry." Eventually Alston left commercial work to focus on his own artwork. In 1950, he became the first African-American instructor at the Art Students League, where he remained on faculty until 1971. In 1950, his Painting was exhibited at the Metropolitan Museum of Art, and his artwork was one of the few pieces purchased by the museum. He landed his first solo exhibition in 1953 at the John Heller Gallery, which represented artists such as Roy Lichtenstein. He exhibited there five times from 1953 to 1958. In 1956, Alston became the first African-American instructor at the Museum of Modern Art, where he taught for a year before going to Belgium on behalf of MOMA and the State Department. He coordinated the children's community center at Expo 58. In 1958, he was awarded a grant from and was elected as a member of the American Academy of Arts and Letters. In 1963, Alston co-founded Spiral with his cousin Romare Bearden and Hale Woodruff. 
Spiral served as a collective of conversation and artistic exploration for a large group of artists who "addressed how black artists should relate to American society in a time of segregation." Artists and arts supporters gathered for Spiral, such as Emma Amos, Perry Ferguson and Merton Simpson. This group served as the 1960s version of "306". Alston was described as an "intellectual activist", and in 1968 he spoke at Columbia about his activism. In the mid-1960s Spiral organized an exhibition of black and white artworks, but the exhibition was never officially sponsored by the group, due to internal disagreements. In 1968, Alston received a presidential appointment from Lyndon Johnson to the National Council of Culture and the Arts. Mayor John Lindsay appointed him to the New York City Art Commission in 1969. In 1973, he was made full professor at City College of New York, where he had taught since 1968. In 1975, he was awarded the first Distinguished Alumni Award from Teachers College. The Art Student's League created a 21-year merit scholarship in 1977 under Alston's name to commemorate each year of his tenure. Painting a person and a culture Alston shared studio space with Henry Bannarn at 306 W. 141st Street, which served as an open space for artists, photographers, musicians, writers and the like. Other artists held studio space at "306", such as Jacob Lawrence, Addison Bate and his brother Leon. During this time Alston founded the Harlem Artists Guild with Savage and Elba Lightfoot to work toward equality in WPA art programs in New York. During the early years of 306, Alston focused on mastering portraiture. His early works such as Portrait of a Man (1929) show Alston's detailed and realistic style depicted through pastels and charcoals, inspired by the style of Winold Reiss. In his Girl in a Red Dress (1934) and The Blue Shirt (1935), Alston used modern and innovative techniques for his portraits of young individuals in Harlem. 
Blue Shirt is thought to be a portrait of Jacob Lawrence. During this time he also created Man Seated with Travel Bag (c. 1938–40), showing the seedy and bleak environment, contrasting with work like the racially charged Vaudeville (c. 1930) and its caricature style of a man in blackface. Inspired by his trip south, Alston began his "family series" in the 1940s. Intensity and angularity come through in the faces of the youth in his portraits Untitled (Portrait of a Girl) and Untitled (Portrait of a Boy). These works also show the influence that African sculpture had on his portraiture, with Portrait of a Boy showing more cubist features. Later family portraits show Alston's exploration of religious symbolism, color, form and space. His family group portraits are often faceless, which Alston states is the way that white America views blacks. Paintings such as Family (1955) show a woman seated and a man standing with two children – the parents seem almost solemn while the children are described as hopeful and with a use of color made famous by Cézanne. In Family Group (c. 1950) Alston's use of gray and ochre tones brings together the parents and son as if one with geometric patterns connecting them together as if a puzzle. The simplicity of the look, style and emotion upon the family is reflective and probably inspired by Alston's trip south. His work during this time has been described as being "characterized by his reductive use of form combined with a sun-hued" palette. During this time he also started to experiment with ink and wash painting, which is seen in work such as Portrait of a Woman (1955), as well as creating portraits to illustrate the music surrounding him in Harlem. Blues Singer #4 shows a female singer on stage with a white flower on her shoulder and a bold red dress. Girl in a Red Dress is thought to be Bessie Smith, whom he drew many times when she was recording and performing. 
Jazz was an important influence in Alston's work and social life, which he expressed in such works as Jazz (1950) and Harlem at Night. The 1960s civil rights movement influenced his work deeply, and he made artworks expressing feelings related to inequality and race relations in the United States. One of his few religious artworks was Christ Head (1960), which had an angular "Modiglianiesque" portrait of Jesus Christ. Seven years later he created You never really meant it, did you, Mr. Charlie? which, in a similar style as Christ Head, shows a black man standing against a red sky "looking as frustrated as any individual can look", according to Alston. Modernism Experimenting with the use of negative space and organic forms in the late 1940s, by the mid-1950s Alston began creating notably modernist style paintings. Woman with Flowers (1949) has been described as a tribute to Modigliani. Ceremonial (1950) shows that he was influenced by African art. Untitled works during this era show his use of color overlay, using muted colors to create simple layered abstracts of still lifes. Symbol (1953) relates to Picasso's Guernica, which was a favorite work of Alston's. His final work of the 1950s, Walking, was inspired by the Montgomery bus boycott. It is taken to represent "the surge of energy among African Americans to organize in their struggle for full equality." Alston is quoted as saying, "The idea of a march was growing....It was in the air...and this painting just came. I called it Walking on purpose. It wasn't the militancy that you saw later. It was a very definite walk-not going back, no hesitation." Black and white The civil rights movement of the 1960s was a major influence on Alston. In the late 1950s, he began working in black and white, which he continued up until the mid-1960s, and the period is considered one of his most powerful. Some of the works are simple abstracts of black ink on white paper, similar to a Rorschach test. Untitled (c. 1960s) shows a boxing match, with an attempt to express the drama of the fight through few brushstrokes. Alston worked with oil-on-Masonite during this period as well, using impasto, cream, and ochre to create a moody cave-like artwork. Black and White #1 (1959) is one of Alston's more "monumental" works. Gray, white and black come together to fight for space on an abstract canvas, in a softer form than the more harsh Franz Kline. Alston continued to explore the relationship between monochromatic hues throughout the series which Wardlaw describes as "some of the most profoundly beautiful works of twentieth-century American art." Murals In the beginning Charles Alston's mural work was inspired by the work of Aaron Douglas, Diego Rivera and José Clemente Orozco. He met Orozco when they did mural work in New York.
In 1943, Alston was elected to the board of directors of the National Society of Mural Painters. He created murals for the Harlem Hospital, Golden State Mutual, the American Museum of Natural History, Public School 154, the Bronx Family and Criminal Court and the Abraham Lincoln High School in Brooklyn, New York. Harlem Hospital Murals Originally hired as an easel painter, in 1935 Alston became the first African-American supervisor to work for the WPA's Federal Art Project (FAP) in New York. At this time he was awarded WPA Project Number 1262, an opportunity to oversee a group of artists creating murals for the Harlem Hospital and to supervise their painting; this was his first mural commission. It was the first government commission ever awarded to African-American artists, who included Beauford Delaney, Seabrook Powell and Vertis Hayes. He also had the chance to create and paint his own contribution to the collection: Magic in Medicine and Modern Medicine. These paintings were part of a diptych completed in 1936 depicting the history of medicine in the African-American community; Beauford Delaney served as his assistant. When creating the murals, Alston was inspired by the work of Aaron Douglas, who a year earlier had created the public art piece Aspects of Negro Life for the New York Public Library. He had researched traditional African culture, including traditional African medicine. Magic in Medicine, which depicts African culture and holistic healing, is considered one of "America's first public scenes of Africa". All of the mural sketches submitted were accepted by the FAP; however, hospital superintendent Lawrence T. Dermody and commissioner of hospitals S.S. Goldwater rejected four proposals.
Chromatin relaxation occurs rapidly at the site of DNA damage. This process is initiated by the protein PARP1, which starts to appear at the damage in less than a second, with half maximum accumulation within 1.6 seconds after the damage occurs. Next the chromatin remodeler Alc1 quickly attaches to the product of PARP1, completing its arrival at the damage within 10 seconds; about half of the maximum chromatin relaxation, presumably due to the action of Alc1, occurs by then. This then allows recruitment of the DNA repair enzyme MRE11, to initiate DNA repair, within 13 seconds. γH2AX, the phosphorylated form of H2AX, is also involved in the early steps leading to chromatin decondensation after DNA damage occurrence. The histone variant H2AX constitutes about 10% of the H2A histones in human chromatin. γH2AX (H2AX phosphorylated on serine 139) can be detected as soon as 20 seconds after irradiation of cells (with DNA double-strand break formation), and half maximum accumulation of γH2AX occurs in one minute. The extent of chromatin with phosphorylated γH2AX is about two million base pairs at the site of a DNA double-strand break. γH2AX does not, itself, cause chromatin decondensation, but within 30 seconds of irradiation, RNF8 protein can be detected in association with γH2AX. RNF8 mediates extensive chromatin decondensation, through its subsequent interaction with CHD4, a component of the nucleosome remodeling and deacetylase complex NuRD. After undergoing relaxation subsequent to DNA damage, followed by DNA repair, chromatin recovers to a compaction state close to its pre-damage level after about 20 minutes. Methods to investigate chromatin ChIP-seq (Chromatin immunoprecipitation sequencing), aimed against different histone modifications, can be used to identify chromatin states throughout the genome. Different modifications have been linked to various states of chromatin. DNase-seq (DNase I hypersensitive sites sequencing) uses the sensitivity of accessible regions in the genome to the DNase I enzyme to map open or accessible regions in the genome. FAIRE-seq (Formaldehyde-Assisted Isolation of Regulatory Elements sequencing) uses the chemical properties of protein-bound DNA in a two-phase separation method to extract nucleosome-depleted regions from the genome.
ATAC-seq (Assay for Transposase-Accessible Chromatin sequencing) uses the Tn5 transposase to integrate (synthetic) transposons into accessible regions of the genome, thereby highlighting the localisation of nucleosomes and transcription factors across the genome. DNA footprinting is a method aimed at identifying protein-bound DNA. It uses labeling and fragmentation coupled to gel electrophoresis to identify areas of the genome that have been bound by proteins. MNase-seq (Micrococcal Nuclease sequencing) uses the micrococcal nuclease enzyme to identify nucleosome positioning throughout the genome. Chromosome conformation capture determines the spatial organization of chromatin in the nucleus by inferring genomic locations that physically interact. MACC profiling (Micrococcal nuclease ACCessibility profiling) uses titration series of chromatin digests with micrococcal nuclease to identify chromatin accessibility as well as to map nucleosomes and non-histone DNA-binding proteins in both open and closed regions of the genome. Chromatin and knots It has been a puzzle how decondensed interphase chromosomes remain essentially unknotted. The natural expectation is that in the presence of type II DNA topoisomerases, which permit passages of double-stranded DNA regions through each other, all chromosomes should reach the state of topological equilibrium. The topological equilibrium in highly crowded interphase chromosomes forming chromosome territories would result in formation of highly knotted chromatin fibres. However, Chromosome Conformation Capture (3C) methods revealed that the decay of contacts with the genomic distance in interphase chromosomes is practically the same as in the crumpled globule state that is formed when long polymers condense without formation of any knots.
To remove knots from highly crowded chromatin, one would need an active process that should not only provide the energy to move the system from the state of topological equilibrium but also guide topoisomerase-mediated passages in such a way that knots would be efficiently untied instead of being made even more complex. It has been shown that the process of chromatin-loop extrusion is ideally suited to actively unknot chromatin fibres in interphase chromosomes. Chromatin: alternative definitions The term, introduced by Walther Flemming, has multiple meanings: Simple and concise definition: Chromatin is a macromolecular complex of a DNA macromolecule and protein macromolecules (and RNA). The proteins package and arrange the DNA and control its functions within the cell nucleus. A biochemists' operational definition: Chromatin is the DNA/protein/RNA complex extracted from eukaryotic lysed interphase nuclei. Just which of the multitudinous substances present in a nucleus will constitute a part of the extracted material partly depends on the technique each researcher uses. Furthermore, the composition and properties of chromatin vary from one cell type to another, during the development of a specific cell type, and at different stages in the cell cycle. The DNA + histone = chromatin definition: The DNA double helix in the cell nucleus is packaged by special proteins termed histones. The formed protein/DNA complex is called chromatin. The basic structural unit of chromatin is the nucleosome. The first definition allows for "chromatins" to be defined in other domains of life, such as bacteria and archaea, using any DNA-binding proteins that condense the molecule. These proteins are usually referred to as nucleoid-associated proteins (NAPs); examples include AsnC/LrpC with HU. In addition, some archaea do produce nucleosomes from proteins homologous to eukaryotic histones.
Nobel Prizes The following scientists were recognized for their contributions to chromatin research with Nobel Prizes: See also Active chromatin sequence Chromatid DAnCER database (2010) Epigenetics Histone-modifying enzymes Position-effect variegation Salt-and-pepper chromatin Transcriptional bursting Notes References During mitosis, the structure of chromatin differs vastly from that of interphase. It is optimised for physical strength and manageability, forming the classic chromosome structure seen in karyotypes. The structure of the condensed chromatin is thought to be loops of 30 nm fibre attached to a central scaffold of proteins. It is, however, not well-characterised. Chromosome scaffolds play an important role in holding the chromatin in compact chromosomes. Loops of the 30 nm fibre condense further with the scaffold into higher-order structures. Chromosome scaffolds are made of proteins including condensin, type IIA topoisomerase and kinesin family member 4 (KIF4). The physical strength of chromatin is vital for this stage of division to prevent shear damage to the DNA as the daughter chromosomes are separated. To maximise strength, the composition of the chromatin changes as it approaches the centromere, primarily through alternative histone H1 analogues. During mitosis, although most of the chromatin is tightly compacted, there are small regions that are not as tightly compacted. These regions often correspond to promoter regions of genes that were active in that cell type prior to chromatin formation. The lack of compaction of these regions is called bookmarking, which is an epigenetic mechanism believed to be important for transmitting to daughter cells the "memory" of which genes were active prior to entry into mitosis. This bookmarking mechanism is needed to help transmit this memory because transcription ceases during mitosis.
Chromatin and bursts of transcription Chromatin and its interaction with enzymes has been researched, and the conclusion reached is that it is an important factor in gene expression. Vincent G. Allfrey, a professor at Rockefeller University, stated that RNA synthesis is related to histone acetylation. The lysine residues attached to the ends of the histones are positively charged. The acetylation of these tails would make the chromatin ends neutral, allowing for DNA access. When the chromatin decondenses, the DNA is open to entry of molecular machinery. Fluctuations between open and closed chromatin may contribute to the discontinuity of transcription, or transcriptional bursting. Other factors are probably involved, such as the association and dissociation of transcription factor complexes with chromatin. The phenomenon, as opposed to simple probabilistic models of transcription, can account for the high variability in gene expression occurring between cells in isogenic populations. Alternative chromatin organizations During metazoan spermiogenesis, the spermatid's chromatin is remodeled into a more spaced-packaged, widened, almost crystal-like structure. This process is associated with the cessation of transcription and involves nuclear protein exchange. The histones are mostly displaced, and replaced by protamines (small, arginine-rich proteins). It is proposed that in yeast, regions devoid of histones become very fragile after transcription; HMO1, an HMG-box protein, helps in stabilizing nucleosome-free chromatin. Chromatin and DNA repair The packaging of eukaryotic DNA into chromatin presents a barrier to all DNA-based processes that require recruitment of enzymes to their sites of action. To allow the critical cellular process of DNA repair, the chromatin must be remodeled. In eukaryotes, ATP-dependent chromatin remodeling complexes and histone-modifying enzymes are two predominant factors employed to accomplish this remodeling process.
is unitary, then κ(A) = 1. The condition number with respect to L2 arises so often in numerical linear algebra that it is given a name, the condition number of a matrix. If ‖·‖ is the matrix norm induced by the L∞ (vector) norm and A is lower triangular and non-singular (i.e. a_ii ≠ 0 for all i), then κ(A) ≥ max_i |a_ii| / min_i |a_ii|, recalling that the eigenvalues of any triangular matrix are simply the diagonal entries. The condition number computed with this norm is generally larger than the condition number computed relative to the Euclidean norm, but it can be evaluated more easily (and this is often the only practicably computable condition number, when the problem to solve involves non-linear algebra, for example when approximating irrational and transcendental functions or numbers with numerical methods). If the condition number is not too much larger than one, the matrix is well-conditioned, which means that its inverse can be computed with good accuracy. If the condition number is very large, then the matrix is said to be ill-conditioned. Practically, such a matrix is almost singular, and the computation of its inverse, or the solution of a linear system of equations, is prone to large numerical errors. A matrix that is not invertible has condition number equal to infinity. Nonlinear Condition numbers can also be defined for nonlinear functions, and can be computed using calculus. The condition number varies with the point; in some cases one can use the maximum (or supremum) condition number over the domain of the function or domain of the question as an overall condition number, while in other cases the condition number at a particular point is of more interest. One variable The condition number of a differentiable function f in one variable as a function is xf′/f. Evaluated at a point x, this is κ = |x f′(x) / f(x)|. Most elegantly, this can be understood as (the absolute value of) the ratio of the logarithmic derivative of f, which is (log f)′ = f′/f, and the logarithmic derivative of x, which is (log x)′ = 1/x, yielding a ratio of xf′/f. 
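The well- and ill-conditioned cases described above are easy to check numerically. A minimal sketch using NumPy (`np.linalg.cond` with `p=2` returns the ratio of extreme singular values; the 8×8 Hilbert matrix is a standard ill-conditioned example):

```python
import numpy as np

# A diagonal matrix with singular values 2 and 1: kappa_2 = 2/1 = 2.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
kappa = np.linalg.cond(A, 2)                      # sigma_max / sigma_min
# Equivalent definition: product of the operator norms of A and A^-1.
kappa_alt = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
assert np.isclose(kappa, 2.0) and np.isclose(kappa, kappa_alt)

# An ill-conditioned example: the 8x8 Hilbert matrix H[i, j] = 1/(i + j + 1).
n = 8
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
print(np.linalg.cond(H, 2))   # ~1.5e10: solving Hx = b loses ~10 digits
```

A condition number of about 10^10 means roughly 10 decimal digits of accuracy are lost when solving a linear system with this matrix in double precision.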
This is because the logarithmic derivative is the infinitesimal rate of relative change in a function: it is the derivative f′ scaled by the value of f. Note that if a function has a zero at a point, its condition number at the point is infinite, as infinitesimal changes in the input can change the output from zero to positive or negative, yielding a ratio with zero in the denominator, hence infinite relative change. More directly, given a small change Δx in x, the relative change in x is Δx/x, while the relative change in f(x) is [f(x + Δx) − f(x)]/f(x). Taking the ratio yields ([f(x + Δx) − f(x)]/f(x)) / (Δx/x) = (x/f(x)) · [f(x + Δx) − f(x)]/Δx. The last term is the difference quotient (the slope of the secant line), and taking the limit yields the derivative, recovering κ = |x f′(x)/f(x)|. Condition numbers of common elementary functions are particularly important in computing significant figures and can be computed immediately from the derivative; see significance arithmetic of transcendental functions. A few important ones are given below: Several variables Condition numbers can be defined for any function f mapping its data from some domain (e.g. an m-tuple of real numbers x) into some codomain (e.g. an n-tuple of real numbers f(x)), where both the domain and codomain are Banach spaces. They express how sensitive that function is to small changes (or small errors) in its arguments. This is crucial in assessing the sensitivity and potential accuracy difficulties of numerous computational problems, for example, polynomial root finding or computing eigenvalues. The condition number of f at a point x (specifically, its relative condition number) is then defined to be the maximum ratio of the fractional change in f(x) to any fractional change in x, in the limit where the change δx in x becomes infinitesimally small: κ(f, x) = lim over ε→0 of the supremum over ‖δx‖ ≤ ε of (‖f(x + δx) − f(x)‖/‖f(x)‖) / (‖δx‖/‖x‖), where ‖·‖ is a norm on the domain/codomain of f. If f is differentiable, this is equivalent to: κ(f, x) = ‖J(x)‖ · ‖x‖ / ‖f(x)‖, where J(x) denotes the Jacobian matrix of partial derivatives of f at x, and ‖J(x)‖ is the induced norm on the matrix. 
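The one-variable condition number described above can be sketched numerically. A minimal example in Python, using a central difference in place of the exact derivative (the helper name `cond` and the step size `h` are our illustrative choices, not a standard API):

```python
import math

def cond(f, x, h=1e-6):
    """Relative condition number |x * f'(x) / f(x)|, with f'(x)
    approximated by a central difference of step h."""
    fp = (f(x + h) - f(x - h)) / (2.0 * h)
    return abs(x * fp / f(x))

# exp has kappa(x) = |x|: well-conditioned near 0, worse for large |x|.
assert abs(cond(math.exp, 3.0) - 3.0) < 1e-3

# log has kappa(x) = 1/|ln x|, which blows up near x = 1.
print(cond(math.log, 1.001))   # large (~1000): log is ill-conditioned near 1
```

This matches the analytic values: for f = exp, x f′/f = x; for f = log, x f′/f = 1/ln x, whose magnitude diverges as x approaches 1.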
See also Numerical methods for linear least squares Hilbert matrix Ill-posed problem Singular value References Further reading External links Condition Number of a Matrix | The ratio of the relative error in the solution to the relative error in b is (‖A⁻¹e‖/‖A⁻¹b‖) / (‖e‖/‖b‖) = (‖A⁻¹e‖/‖e‖) · (‖b‖/‖A⁻¹b‖). The maximum value (for nonzero b and e) is then seen to be the product of the two operator norms as follows: κ(A) = ‖A⁻¹‖ · ‖A‖. The same definition is used for any consistent norm, i.e. one that satisfies κ(A) = ‖A⁻¹‖ · ‖A‖ ≥ ‖A⁻¹A‖ = 1. When the condition number is exactly one (which can only happen if A is a scalar multiple of a linear isometry), then a solution algorithm can find (in principle, meaning if the algorithm introduces no errors of its own) an approximation of the solution whose precision is no worse than that of the data. However, it does not mean that the algorithm will converge rapidly to this solution, just that it will not diverge arbitrarily because of inaccuracy on the source data (backward error), provided that the forward error introduced by the algorithm does not diverge as well because of accumulating intermediate rounding errors. The condition number may also be infinite, but this implies that the problem is ill-posed (does not possess a unique, well-defined solution for each choice of data; that is, the matrix is not invertible), and no algorithm can be expected to reliably find a solution. The definition of the condition number depends on the choice of norm, as can be illustrated by two examples. If ‖·‖ is the matrix norm induced by the (vector) Euclidean norm (sometimes known as the L2 norm and typically denoted as ‖·‖₂), then κ(A) = σmax(A)/σmin(A), where σmax(A) and σmin(A) are the maximal and minimal singular values of A respectively. Hence: If A is normal, then κ(A) = |λmax(A)|/|λmin(A)|, where λmax(A) and λmin(A) are the maximal and minimal (by moduli) eigenvalues of A respectively. If A is unitary, then κ(A) = 1. The condition number with respect to L2 arises so often in numerical linear algebra that it is given a name, the condition number of a matrix. If ‖·‖ is the matrix norm induced by the L∞ (vector) norm and A is lower triangular and non-singular (i.e. a_ii ≠ 0 for all i), then κ(A) ≥ max_i |a_ii| / min_i |a_ii|, recalling that the eigenvalues of any triangular matrix are simply the diagonal entries. The condition number computed with this norm is generally larger than the condition number computed relative to the Euclidean norm, but it can be evaluated more easily (and this is often the only practicably computable condition number, when the problem to solve involves non-linear algebra, for example when approximating irrational and transcendental functions or numbers with numerical methods). If the condition number is not too much larger than one, the matrix |
most popular cheese in the UK, accounting for 51% of the country's £1.9 billion annual cheese market. It is the second-most popular cheese in the US behind mozzarella, with an average annual consumption of per capita. The US produced approximately of cheddar cheese in 2014, and the UK produced in 2008. History The cheese originates from the village of Cheddar in Somerset, south west England. Cheddar Gorge on the edge of the village contains a number of caves, which provided the ideal humidity and steady temperature for maturing the cheese. Cheddar cheese traditionally had to be made within of Wells Cathedral. Cheddar has been produced since at least the 12th century. A pipe roll of King Henry II from 1170 records the purchase of at a farthing per pound (totalling £10.13s.4d). Charles I (1600–1649) also bought cheese from the village. Romans may have brought the recipe to Britain from the Cantal region of France. The 19th-century Somerset dairyman Joseph Harding was central to the modernisation and standardisation of Cheddar cheese. For his technical innovations, promotion of dairy hygiene, and volunteer dissemination of modern cheese-making techniques, he has been dubbed "the father of Cheddar cheese". Harding introduced new equipment to the process of cheese-making, including his "revolving breaker" for curd cutting, saving much manual effort. The "Joseph Harding method" was the first modern system for Cheddar production based upon scientific principles. Harding stated that Cheddar cheese is "not made in the field, nor in the byre, nor even in the cow, it is made in the dairy". Together, Joseph Harding and his wife were behind the introduction of the cheese into Scotland and North America, while his sons Henry and William Harding were responsible for introducing Cheddar cheese production to Australia and facilitating the establishment of the cheese industry in New Zealand respectively. 
During the Second World War and for nearly a decade after, most of the milk in Britain was used to make a single kind of cheese nicknamed "Government Cheddar" as part of the war economy and rationing. This almost resulted in wiping out all other cheese production in the country. Before the First World War, more than 3,500 cheese producers were in Britain; fewer than 100 remained after the Second World War. According to a United States Department of Agriculture researcher, Cheddar cheese is the world's most popular variety of cheese, and it is the most studied type of cheese in scientific publications. Process During the manufacture of cheddar cheese, the curds and whey are separated using rennet, an enzyme complex normally produced from the stomachs of newborn calves (in vegetarian or kosher cheeses, bacterial, yeast or mould-derived chymosin is used). "Cheddaring" refers to an additional step in the production of Cheddar cheese where, after heating, the curd is kneaded with salt, cut into cubes to drain the whey, and then stacked and turned. Strong, extra-mature Cheddar, sometimes called vintage, needs to be matured for 15 months or more. The cheese is kept at a constant temperature, often requiring special facilities. As with other hard cheese varieties produced worldwide, caves provide an ideal environment for maturing cheese; still, today, some Cheddar cheese is matured in the caves at Wookey Hole and Cheddar Gorge. Additionally, some versions of Cheddar cheese are smoked. Character The ideal quality of the original Somerset Cheddar was described by Joseph Harding in 1864 as "close and firm in texture, | precipitated when matured for times longer than six months. Cheddar can be a deep to pale yellow (off-white) colour, or a yellow-orange colour when certain plant extracts are added, such as beet juice. One commonly used spice is annatto, extracted from seeds of the tropical achiote tree. 
Originally added to simulate the colour of high-quality milk from grass-fed Jersey and Guernsey cows, annatto may also impart a sweet, nutty flavour. The largest producer of Cheddar cheese in the United States, Kraft, uses a combination of annatto and oleoresin paprika, an extract of the lipophilic (oily) portion of paprika. Cheddar cheese was sometimes packaged in black wax (and can still be found so packaged), but was more commonly packaged in larded cloth, which was impermeable to contaminants, but still allowed the cheese to "breathe". Original-cheddar designation The Slow Food Movement has created a Cheddar Presidium, arguing that only three cheeses should be called "original Cheddar". Their specifications, which go further than the "West Country Farmhouse Cheddar" PDO, require that Cheddar cheese be made in Somerset and with traditional methods, such as using raw milk, traditional animal rennet, and a cloth wrapping. International production The "Cheddar cheese" name is used internationally; it does not have a protected designation of origin, but the name "West Country Farmhouse Cheddar" does. In addition to the United Kingdom, Cheddar cheese is also made in Australia, Argentina, Belgium, Canada, Germany, Ireland, the Netherlands, New Zealand, South Africa, Sweden, Finland and the United States. Cheddars can be either industrial or artisan cheeses. The flavour, colour, and quality of industrial cheese varies significantly, and food packaging will usually indicate a strength, such as mild, medium, strong, tasty, sharp, extra sharp, mature, old, or vintage; this may indicate the maturation period, or food additives used to enhance the flavour. Artisan varieties develop strong and diverse flavours over time. Australia As of 2013, Cheddar accounts for over 55% of the Australian cheese market, with average annual consumption around per person. Cheddar is so commonly found that the name is rarely used: instead, Cheddar is sold by strength alone as e.g. 
"mild", "tasty" or "sharp". Canada Following a wheat midge outbreak in Canada in the mid-19th century, farmers in Ontario began to convert to dairy farming in large numbers, and Cheddar cheese became their main exportable product, even being exported to England. By the turn of the 20th century, 1,242 Cheddar factories were in Ontario, and Cheddar had become Canada's second-largest export after timber. Cheddar exports totalled in 1904, but by 2012, Canada was a net importer of cheese. James L. Kraft grew up on a dairy farm in Ontario, before moving to Chicago. According to the writer Sarah Champman, "Although we cannot wholly lay the decline of cheese craft in Canada at the feet of James Lewis Kraft, it did correspond with the rise of Kraft’s processed cheese empire." Most Canadian Cheddar is produced in the provinces of Québec (40.8%) and Ontario (36%), though other provinces produce some and some smaller artisanal producers exist. The annual production is 120,000 tons. It is aged a minimum of three months, but much of it is held for much longer, up to 10 years. Canadian Cheddar cheese soup is a featured dish at the Canada pavilion at Epcot, in Walt Disney World. Percentage of milk fat must be labelled by the words milk fat or abbreviations B.F. or M.F. New Zealand Most of the cheddar produced in New Zealand is factory-made, although some are handmade by artisan cheesemakers. Factory-made cheddar is generally sold relatively young within New Zealand, but the Anchor dairy company ships New Zealand cheddars to the UK, where the blocks mature for another year or so. United Kingdom Only one producer of the cheese is now based in the village of Cheddar, the Cheddar Gorge Cheese Co. 
The name "cheddar" is not protected under European Union or UK law, though the name "West Country Farmhouse Cheddar" has an EU and (following Brexit) a UK protected designation of origin (PDO) registration, and may only be produced in Somerset, Devon, Dorset and Cornwall, using milk sourced from those counties. Cheddar is usually sold as mild, medium, mature, extra mature or vintage. Cheddar produced in Orkney is registered as an EU protected geographical indication under the name "Orkney Scottish Island Cheddar". This protection highlights the use of traditional methods, passed down through generations since 1946 and its uniqueness in comparison to other cheddar cheeses. "West Country Farmhouse Cheddar" is protected outside the UK and the EU as a Geographical Indication also in China, Georgia, Iceland, Japan, Moldova, Montenegro, Norway, Serbia, Switzerland and Ukraine. Furthermore, a Protected Geographical Indication (PGI) was registered for Orkney Scottish Island Cheddar in 2013 in the EU, which also applies under UK law. It is protected as a geographical indication in Iceland, Montenegro, Norway and Serbia. United States The state of Wisconsin produces the most cheddar cheese in the United States; other centres of production include California, Idaho, New York, Vermont, Oregon, Texas, and Oklahoma. It is sold in several varieties, namely mild, medium, sharp, extra sharp, New York style, white, and Vermont. New York style Cheddar is particularly sharp/acidic, but tends to be somewhat softer than the milder-tasting varieties. Cheddar that does not contain annatto is frequently |
defining the concept of "order" Following the examples of Vitruvius and the five books of the Regole generali di architettura sopra le cinque maniere de gli edifici by Sebastiano Serlio published from 1537 onwards, Giacomo Barozzi da Vignola produced an architecture rule book that was not only more practical than the previous two treatises, but also, for the first time, systematically and consistently adopted the term 'order' to define each of the five different species of columns inherited from antiquity. A first publication of the various plates, as separate sheets, appeared in Rome in 1562, with the title: Regola delli cinque ordini d'architettura ("Canon of the Five Orders of Architecture"). As David Watkin has pointed out, Vignola's book "was to have an astonishing publishing history of over 500 editions in 400 years in ten languages, Italian, Dutch, English, Flemish, French, German, Portuguese, Russian, Spanish, Swedish, during which it became perhaps the most influential book of all times". The book consisted simply of an introduction followed by 32 annotated plates, highlighting the proportional system with all the minute details of the Five Architectural Orders. According to Christof Thoenes, the main expert on Renaissance architectural treatises, "in accordance with Vitruvius's example, Vignola chose a "module" equal to a half-diameter, which is the base of the system. All the other measurements are expressed in fractions or in multiples of this module. The result is an arithmetical model, and with its help each order, harmoniously proportioned, can easily be adapted to any given height, of a façade or an interior. From this point of view, Vignola's Regola is a remarkable intellectual achievement". In America, The American Builder's Companion, written in the early 19th century by the architect Asher Benjamin, influenced many builders in the eastern states, particularly those who developed what became known as the Federal style. 
The last American re-interpretation of Vignola's Regola was edited in 1904 by William Robert Ware. The break from the classical mode came first with Gothic Revival architecture, then the development of modernism during the 19th century. The Bauhaus promoted pure functionalism, stripped of superfluous ornament, and that has become one of the defining characteristics of modern architecture. There are some exceptions. Postmodernism introduced an ironic use of the orders as a cultural reference, divorced from the strict rules of composition. On the other hand, a number of practitioners such as Quinlan Terry in England, and Michael Dwyer, Richard Sammons, and Duncan Stroik in the United States, continue the classical tradition, and use the classical orders in their work. Nonce orders Several orders, usually based upon the composite order and only varying in the design of the capitals, have been invented under the inspiration of specific occasions, but have not been used again. They are termed "nonce orders" by analogy to nonce words; several examples follow below. These nonce orders all express the “speaking architecture” (architecture parlante) that was taught in the Paris courses, most explicitly by Étienne-Louis Boullée, in which sculptural details of classical architecture could be enlisted to speak symbolically, the better to express the purpose of the structure and enrich its visual meaning with specific appropriateness. This idea was taken up strongly in the training of Beaux-Arts architecture, ca 1875–1915. British nonce orders Robert Adam's brother James was in Rome in 1762, drawing antiquities under the direction of Clérisseau; he invented a "British order" and published an engraving of it. In its capital, the heraldic lion and unicorn take the place of the Composite's volutes: a Byzantine or Romanesque conception, but expressed in terms of neoclassical realism. Adam's ink-and-wash rendering with red highlighting is at the Avery Library, Columbia University. 
In 1789 George Dance invented an Ammonite order, a variant of Ionic substituting volutes in the form of fossil ammonites for John Boydell's Shakespeare Gallery in Pall Mall, London. An adaptation of the Corinthian order by William Donthorne that used turnip leaves and mangelwurzel is termed the Agricultural order. Sir Edwin Lutyens, who from 1912 laid out New Delhi as the new seat of government for the British Empire in India, designed a Delhi order having a capital displaying a band of vertical ridges, and with bells hanging at each corner as a replacement for volutes. His design for the new city's central palace, Viceroy's House, now the Presidential residence Rashtrapati Bhavan, was a thorough integration of elements of Indian architecture into a building of classical forms and proportions, and made use of the order throughout. The Delhi Order reappears in some later Lutyens buildings including Campion Hall, Oxford. American orders In the United States Benjamin Latrobe, the architect of the Capitol building in Washington DC, designed a series of botanical American orders. Most famous is the order substituting corncobs and their husks, which was executed by Giuseppe Franzoni and employed in the small domed Vestibule of the Supreme Court. Only the Supreme Court survived the fire of 24 August 1814, nearly intact. With peace restored, Latrobe designed an American order that substituted for the acanthus tobacco leaves, of which he sent a sketch to Thomas Jefferson in a letter, 5 November 1816. He was encouraged to send a model of it, which remains at Monticello. In the 1830s Alexander Jackson Davis admired it enough to make a drawing of it. In 1809 Latrobe invented a second American order, employing magnolia flowers constrained within the profile of classical mouldings, as his drawing demonstrates. It was intended for "the Upper Columns in the Gallery of the Entrance of the Chamber of | with plain, round capitals (tops) and no base. 
With a height that is only four to eight times its diameter, the columns are the most squat of all orders. The shaft of the Doric order is channeled with 20 flutes. The capital consists of a necking or annulet, which is a simple ring. The echinus is convex, or circular cushion like stone, and the abacus is square slab of stone. Above the capital is a square abacus connecting the capital to the entablature. The entablature is divided into three horizontal registers, the lower part of which is either smooth or divided by horizontal lines. The upper half is distinctive for the Doric order. The frieze of the Doric entablature is divided into triglyphs and metopes. A triglyph is a unit consisting of three vertical bands which are separated by grooves. Metopes are the plain or carved reliefs between two triglyphs. The Greek forms of the Doric order come without an individual base. They instead are placed directly on the stylobate. Later forms, however, came with the conventional base consisting of a plinth and a torus. The Roman versions of the Doric order have smaller proportions. As a result, they appear lighter than the Greek orders. Ionic order The Ionic order came from eastern Greece, where its origins are entwined with the similar but little known Aeolic order. It is distinguished by slender, fluted pillars with a large base and two opposed volutes (also called "scrolls") in the echinus of the capital. The echinus itself is decorated with an egg-and-dart motif. The Ionic shaft comes with four more flutes than the Doric counterpart (totalling 24). The Ionic base has two convex moldings called tori, which are separated by a scotia. The Ionic order is also marked by an entasis, a curved tapering in the column shaft. A column of the Ionic order is nine times its lower diameter. The shaft itself is eight diameters high. The architrave of the entablature commonly consists of three stepped bands (fasciae). The frieze comes without the Doric triglyph and metope. 
The frieze sometimes comes with a continuous ornament such as carved figures instead. Corinthian order The Corinthian order is the most elaborated of the Greek orders, characterized by a slender fluted column having an ornate capital decorated with two rows of acanthus leaves and four scrolls. The shaft of the Corinthian order has 24 flutes. The column is commonly ten diameters high. The Roman writer Vitruvius credited the invention of the Corinthian order to Callimachus, a Greek sculptor of the 5th century BC. The oldest known building built according to this order is the Choragic Monument of Lysicrates in Athens, constructed from 335 to 334 BC. The Corinthian order was raised to rank by the writings of Vitruvius in the 1st century BC. Roman orders The Romans adapted all the Greek orders and also developed two orders of their own, basically modifications of Greek orders. However, it was not until the Renaissance that these were named and formalized as the Tuscan and Composite, respectively the plainest and most ornate of the orders. The Romans also invented the Superposed order. A superposed order is when successive stories of a building have different orders. The heaviest orders were at the bottom, whilst the lightest came at the top. This means that the Doric order was the order of the ground floor, the Ionic order was used for the middle story, while the Corinthian or the Composite order was used for the top story. The Giant order was invented by architects in the Renaissance. The Giant order is characterized by columns that extend the height of two or more stories. Tuscan order The Tuscan order has a very plain design, with a plain shaft, and a simple capital, base, and frieze. It is a simplified adaptation of the Doric order by the Greeks. The Tuscan order is characterized by an unfluted shaft and a capital that only consists of an echinus and an abacus. In proportions it is similar to the Doric order, but overall it is significantly plainer. 
The column is normally seven diameters high. Compared to the other orders, the Tuscan order looks the most solid. Composite order The Composite order is a mixed order, combining the volutes of the Ionic with the leaves of the Corinthian order. Until the Renaissance it was not ranked as a separate order. Instead it was considered as a late Roman form of the Corinthian order. The column of the Composite order is typically ten diameters high. Historical development of the orders The Renaissance period saw renewed interest in the literary sources of the ancient cultures of Greece and Rome, and the fertile development of a new architecture based on classical principles. The treatise De architectura by Roman theoretician, architect and engineer Vitruvius, is the only architectural writing that survived from Antiquity. Rediscovered in the 15th century, Vitruvius was instantly hailed as the authority on architecture. However, in his text the word order is not to be found. To describe the four species of columns (he only mentions: Tuscan, Doric, Ionic and Corinthian) he uses, in fact, various words such as: genus (gender), mos (habit, fashion, manner), opera (work). The term order, as well as the idea of redefining the canon started circulating in Rome, at the beginning of the 16th century, probably during the studies of Vitruvius' text conducted and shared by Peruzzi, Raphael, and Sangallo. Ever since, the definition of the canon has been a collective endeavor that involved several generations of European architects, from Renaissance and Baroque periods, basing their theories both on the study of Vitruvius' writings and the observation of Roman ruins (the Greek ruins became available only after Greek Independence, 1821–23). What was added were rules for the use of the Architectural Orders, and the exact proportions of them down to the most minute detail. 
Commentary on the appropriateness of the orders for temples devoted to particular deities (Vitruvius I.2.5) were elaborated by Renaissance theorists, with Doric characterized as bold and manly, Ionic as matronly, and Corinthian as maidenly. Vignola defining the concept of "order" Following the examples of Vitruvius and the five books of |
"The Black Hole of Negrav" (1975) Collected in The Unorthodox Engineers (1979) Other stories "Breaking Point" (1959) "Survival Problem" (1959) "Lambda I" (1962) "The Night-Flame" (1964) "Hunger Over Sweet Waters" (1965) "Ambassador to Verdammt" (1967) "The Imagination Trap" (1967) "The Cloudbuilders" (1968) "I Bring You Hands" (1968) "Gottlos" (1969), notable for having (along with Keith Laumer's Bolo series) inspired Steve Jackson's classic game of 21st century tank warfare Ogre. "The Teacher" (1969) "Letter from an Unknown Genius" (1971) "What the Thunder Said" (1972) "Which Way Do I Go For Jericho?" (1972) "The Old King's Answers" (1973) "Crimescan" (1973) "What The Thunder Said" (1973) "Mephisto and the Ion | (1964) "The Pen and the Dark" (1966) "Getaway from Getawehi" (1969) "The Black Hole of Negrav" (1975) Collected in The Unorthodox Engineers (1979) Other stories "Breaking Point" (1959) "Survival Problem" (1959) "Lambda I" (1962) "The Night-Flame" (1964) "Hunger Over Sweet Waters" (1965) "Ambassador to Verdammt" (1967) "The Imagination Trap" (1967) "The Cloudbuilders" (1968) "I Bring You Hands" (1968) "Gottlos" (1969), notable for having (along with Keith Laumer's Bolo series) inspired Steve Jackson's classic game of 21st century tank warfare Ogre. "The Teacher" (1969) "Letter from an Unknown Genius" (1971) "What the Thunder Said" (1972) "Which Way Do I Go For Jericho?" (1972) "The Old King's Answers" (1973) "Crimescan" (1973) "What The Thunder Said" (1973) "Mephisto and the Ion Explorer" (1974) "War of the Wastelife" (1974) "Cassius and the Mind-Jaunt" (1975) "Something in the City" (1984) "An Alternative to Salt" (1986) |
at Old St. Paul's Cathedral. A dowry of 200,000 ducats had been agreed, and half was paid shortly after the marriage. Once married, Arthur was sent to Ludlow Castle on the borders of Wales to preside over the Council of Wales and the Marches, as was his duty as Prince of Wales, and his bride accompanied him. A few months later, they both became ill, possibly with the sweating sickness, which was sweeping the area. Arthur died on 2 April 1502; 16-year-old Catherine recovered to find herself a widow. At this point, Henry VII faced the challenge of avoiding the obligation to return her 200,000-ducat dowry, half of which he had not yet received, to her father, as required by her marriage contract should she return home. Following the death of Queen Elizabeth in February 1503, King Henry VII initially considered marrying Catherine himself, but the opposition of her father and potential questions over the legitimacy of the couple's issue ended the idea. To settle the matter, it was agreed that Catherine would marry Henry VII's second son, Henry, Duke of York, who was five years younger than she was. The death of Catherine's mother, however, meant that her "value" in the marriage market decreased. Castile was a much larger kingdom than Aragon, and it was inherited by Catherine's elder sister, Joanna. Ostensibly, the marriage was delayed until Henry was old enough, but Ferdinand II procrastinated so much over payment of the remainder of Catherine's dowry that it became doubtful that the marriage would take place. She lived as a virtual prisoner at Durham House in London. Some of the letters she wrote to her father complaining of her treatment have survived. In one of these letters she tells him that "I choose what I believe, and say nothing. For I am not as simple as I may seem." She had little money and struggled to cope, as she had to support her ladies-in-waiting as well as herself. 
In 1507 she served as the Spanish ambassador to England, the first female ambassador in European history. While Henry VII and his counsellors expected her to be easily manipulated, Catherine went on to prove them wrong. Marriage to Arthur's brother depended on the Pope granting a dispensation because canon law forbade a man to marry his brother's widow (Lev. 18:16). Catherine testified that her marriage to Arthur was never consummated since, also according to canon law, an unconsummated marriage was dissoluble. Queenship Wedding Catherine's second wedding took place on 11 June 1509, seven years after Prince Arthur's death. She married Henry VIII, who had only just acceded to the throne, in a private ceremony in the church of the Observant Friars outside Greenwich Palace. She was 23 years of age. Coronation On Saturday 23 June 1509, the traditional eve-of-coronation procession to Westminster was greeted by a large and enthusiastic crowd. As was the custom, the couple spent the night before their coronation at the Tower of London. On Midsummer's Day, Sunday, 1509, Henry VIII and Catherine were anointed and crowned together by the Archbishop of Canterbury at a lavish ceremony at Westminster Abbey. The coronation was followed by a banquet in Westminster Hall. Many new Knights of the Bath were created in honour of the coronation. In the month that followed, many social occasions presented the new Queen to the English public. She made a fine impression and was well received by the people of England. Influence On 11 June 1513, Henry appointed Catherine Regent in England with the titles "Governor of the Realm and Captain General," while he went to France on a military campaign. When Louis d'Orléans, Duke of Longueville, was captured at Thérouanne, Henry sent him to stay in Catherine's household.
She wrote to Wolsey that she and her council would prefer the Duke to stay in the Tower of London as the Scots were "so busy as they now be" and she added her prayers for "God to sende us as good lukke against the Scotts, as the King hath ther." The war with Scotland occupied her subjects, and she was "horrible busy with making standards, banners, and badges" at Richmond Palace. The Scots invaded and on 3 September 1513, she ordered Thomas Lovell to raise an army in the midland counties. Catherine rode north in full armour to address the troops, despite being heavily pregnant at the time. Her fine speech was reported to the historian Peter Martyr d'Anghiera in Valladolid within a fortnight. Although an Italian newsletter said she was north of London when news of the victory at the Battle of Flodden Field reached her, she was near Buckingham. From Woburn Abbey she sent a letter to Henry along with a piece of the bloodied coat of King James IV of Scotland, who died in the battle, for Henry to use as a banner at the siege of Tournai. Catherine's religious dedication increased as she became older, as did her interest in academics. She continued to broaden her knowledge and provide training for her daughter, Mary. Education among women became fashionable, partly because of Catherine's influence, and she donated large sums of money to several colleges. Henry, however, still considered a male heir essential. The Tudor dynasty was new, and its legitimacy might still be tested. A long civil war (1135–1154) had been fought the last time a woman (Empress Matilda) had inherited the throne, and the disasters of the Wars of the Roses were still fresh in living memory. In 1520, Catherine's nephew, the Holy Roman Emperor Charles V, paid a state visit to England, and she urged Henry to enter an alliance with Charles rather than with France. Immediately after his departure, she accompanied Henry to France on the celebrated visit to Francis I, the Field of the Cloth of Gold.
Within two years, war was declared against France and the Emperor was once again welcome in England, where plans were afoot to betroth him to Catherine's daughter Mary. Pregnancies and children The King's great matter In 1525, Henry VIII became enamoured of Anne Boleyn, a lady-in-waiting to Queen Catherine; Anne was between ten and seventeen years younger than Henry, being born between 1501 and 1507. Henry began pursuing her; Catherine was no longer able to bear children by this time. Henry began to believe that his marriage was cursed and sought confirmation from the Bible, which he interpreted to say that if a man marries his brother's wife, the couple will be childless. Even if her marriage to Arthur had not been consummated (and Catherine would insist to her dying day that she had come to Henry's bed a virgin), Henry's interpretation of that biblical passage meant that their marriage had been wrong in the eyes of God. Whether the Pope at the time of Henry and Catherine's marriage had the right to overrule Henry's claimed scriptural impediment would become a hot topic in Henry's campaign to wrest an annulment from the present Pope. It is possible that the idea of annulment had been suggested to Henry much earlier than this, and it is highly probable that it was motivated by his desire for a son. Before Henry's father ascended the throne, England was beset by civil warfare over rival claims to the English crown, and Henry may have wanted to avoid a similar uncertainty over the succession. Securing an annulment soon became the one absorbing object of Henry's desires. Catherine was defiant when it was suggested that she quietly retire to a nunnery, saying: "God never called me to a nunnery. I am the King's true and legitimate wife." He set his hopes upon an appeal to the Holy See, acting independently of Cardinal Thomas Wolsey, whom he told nothing of his plans.
William Knight, the King's secretary, was sent to Pope Clement VII to sue for an annulment, on the grounds that the dispensing bull of Pope Julius II was obtained by false pretenses. As the Pope was, at that time, the prisoner of Catherine's nephew Emperor Charles V following the Sack of Rome in May 1527, Knight had difficulty in obtaining access to him. In the end, Henry's envoy had to return without accomplishing much. Henry now had no choice but to put this great matter into the hands of Wolsey, who did all he could to secure a decision in Henry's favour. Wolsey went so far as to convene an ecclesiastical court in England with a representative of the Pope presiding, and Henry and Catherine herself in attendance. The Pope had no intention of allowing a decision to be reached in England, and his legate was recalled. (How far the Pope was influenced by Charles V is difficult to say, but it is clear Henry saw that the Pope was unlikely to annul his marriage to the Emperor's aunt.) The Pope forbade Henry to marry again before a decision was given in Rome. Wolsey had failed and was dismissed from public office in 1529. Wolsey then began a secret plot to have Anne Boleyn forced into exile and began communicating with the Pope to that end. When this was discovered, Henry ordered Wolsey's arrest and, had he not fallen terminally ill and died in 1530, he might have been executed for treason. A year later, Catherine was banished from court, and her old rooms were given to Anne Boleyn. Catherine wrote in a letter to Charles V in 1531: "My tribulations are so great, my life so disturbed by the plans daily invented to further the King's wicked intention, the surprises which the King gives me, with certain persons of his council, are so mortal, and my treatment is what God knows, that it is enough to shorten ten lives, much more mine." When Archbishop of Canterbury William Warham died, the Boleyn family's chaplain, Thomas Cranmer, was appointed to the vacant position.
When Henry decided to annul his marriage to Catherine, John Fisher became her most trusted counsellor and one of her chief supporters. He appeared in the legates' court on her behalf, where he shocked people with the directness of his language, and by declaring that, like John the Baptist, he was ready to die on behalf of the indissolubility of marriage. Henry was so enraged by this that he wrote a long Latin address to the legates in answer to Fisher's speech. Fisher's copy of this still exists, with his manuscript annotations in the margin which show how little he feared Henry's anger. The removal of the cause to Rome ended Fisher's role in the matter, but Henry never forgave him. Other people who supported Catherine's case included Thomas More; Henry's own sister Mary Tudor, Queen of France; María de Salinas; Holy Roman Emperor Charles V; Pope Paul III; and Protestant Reformers Martin Luther and William Tyndale. Banishment and death Upon returning to Dover from a meeting with King Francis I of France in Calais, Henry married Anne Boleyn in a secret ceremony. Some sources speculate that Anne was already pregnant at the time (and Henry did not want to risk a son being born illegitimate) but others testify that Anne (who had seen her sister Mary Boleyn taken up as the king's mistress and summarily cast aside) refused to sleep with Henry until they were married. Henry defended the lawfulness of their union by pointing out that Catherine had previously been married. If she and Arthur had consummated their marriage, Henry by canon law had the right to remarry. On 23 May 1533, Cranmer, sitting in judgement at a special court convened at Dunstable Priory to rule on the validity of Henry's marriage to Catherine, declared the marriage unlawful, even though Catherine had testified that she and Arthur had never had physical relations. Five days later, on 28 May 1533, Cranmer ruled that Henry and Anne's marriage was valid. 
Until the end of her life, Catherine would refer to herself as Henry's only lawful wedded wife and England's only rightful queen, and her servants continued to address her as such. Henry refused her the right to any title but "Dowager Princess of Wales" in recognition of her position as his brother's widow. Catherine went to live at The More castle late in 1531. After that,

starting an extensive programme for the relief of the poor. She was a patron of Renaissance humanism, and a friend of the great scholars Erasmus of Rotterdam and Thomas More. Early life Catherine was born at the Archbishop's Palace of Alcalá de Henares near Madrid, in the early hours of 16 December 1485. She was the youngest surviving child of King Ferdinand II of Aragon and Queen Isabella I of Castile. Catherine was quite short in stature with long red hair, wide blue eyes, a round face, and a fair complexion. She was descended, on her maternal side, from the House of Lancaster, an English royal house; her great-grandmother Catherine of Lancaster, after whom she was named, and her great-great-grandmother Philippa of Lancaster were both daughters of John of Gaunt and granddaughters of Edward III of England. Consequently, she was third cousin of her father-in-law, Henry VII of England, and fourth cousin of her mother-in-law Elizabeth of York. Catherine was educated by a tutor, Alessandro Geraldini, who was a clerk in Holy Orders. She studied arithmetic, canon and civil law, classical literature, genealogy and heraldry, history, philosophy, religion, and theology. She had a strong religious upbringing and developed the Roman Catholic faith that would play a major role in her later life. She learned to speak, read and write in Castilian Spanish and Latin, and spoke French and Greek. She was also taught domestic skills, such as cooking, dancing, drawing, embroidery, good manners, lace-making, music, needlepoint, sewing, spinning, and weaving.
Scholar Erasmus later said that Catherine "loved good literature which she had studied with success since childhood". At an early age, Catherine was considered a suitable wife for Arthur, Prince of Wales, heir apparent to the English throne, due to the English ancestry she inherited from her mother. Through her mother, Catherine had a stronger legitimate claim to the English throne than King Henry VII himself, being descended from the first two wives of John of Gaunt, 1st Duke of Lancaster: Blanche of Lancaster and Constance of Castile. In contrast, Henry VII was the descendant of Gaunt's third marriage to Katherine Swynford, whose children were born out of wedlock and only legitimised after the death of Constance and the marriage of John to Katherine. The children of John and Katherine, while legitimised, were barred from inheriting the English throne, a stricture that was ignored in later generations. Because of Henry's descent through illegitimate children barred from succession to the English throne, the Tudor monarchy was not accepted by all European kingdoms. At the time, the House of Trastámara was the most prestigious in Europe, due to the rule of the Catholic Monarchs, so the alliance of Catherine and Arthur validated the House of Tudor in the eyes of European royalty and strengthened the Tudor claim to the English throne via Catherine of Aragon's ancestry. It would have given a male heir an indisputable claim to the throne. The two were married by proxy on 19 May 1499 and corresponded in Latin until Arthur turned fifteen, when it was decided that they were old enough to be married. Catherine was accompanied to England by the ambassadors Diego Fernández de Córdoba y Mendoza, 3rd Count of Cabra, Alonso de Fonseca, archbishop of Santiago de Compostela, and Antonio de Rojas Manrique, bishop of Mallorca. She brought a group of her African attendants with her, including one identified as the trumpeter John Blanke.
They are the first Africans recorded to have arrived in London, and were considered luxury servants at the time. They made a great impression, attesting to the princess's status and the power of her family. Her Spanish retinue was supervised by her duenna, Elvira Manuel. At first it was thought Catherine's ship would arrive at Gravesend. A number of English gentlewomen were appointed to be ready to welcome her on arrival in October 1501. They were to escort Catherine in a flotilla of barges on the Thames to the Tower of London. As wife and widow of Arthur Then-15-year-old Catherine departed from A Coruña on 17 August 1501 and met Arthur on 4 November at Dogmersfield in Hampshire. Little is known about their first impressions of each other, but Arthur did write to his parents-in-law that he would be "a true and loving husband" and told his parents that he was immensely happy to "behold the face of his lovely bride". The couple had corresponded in Latin, but found that they could not understand each other's spoken conversation, because they had learned different Latin pronunciations. Ten days later, on 14 November 1501, they were married at Old St. Paul's Cathedral.
the empty tube. The voltage applied between the electrodes accelerates these low mass particles to high velocities. Cathode rays are invisible, but their presence was first detected in these Crookes tubes when they struck the glass wall of the tube, exciting the atoms of the glass and causing them to emit light, a glow called fluorescence. Researchers noticed that objects placed in the tube in front of the cathode could cast a shadow on the glowing wall, and realized that something must be traveling in straight lines from the cathode. After the electrons strike the back of the tube they make their way to the anode, then travel through the anode wire through the power supply and back through the cathode wire to the cathode, so cathode rays carry electric current through the tube. The current in a beam of cathode rays through a vacuum tube can be controlled by passing it through a metal screen of wires (a grid) between cathode and anode, to which a small negative voltage is applied. The electric field of the wires deflects some of the electrons, preventing them from reaching the anode. The amount of current that gets through to the anode depends on the voltage on the grid. Thus, a small voltage on the grid can be made to control a much larger voltage on the anode. This is the principle used in vacuum tubes to amplify electrical signals. The triode vacuum tube developed between 1907 and 1914 was the first electronic device that could amplify, and is still used in some applications such as radio transmitters. High speed beams of cathode rays can also be steered and manipulated by electric fields created by additional metal plates in the tube to which voltage is applied, or magnetic fields created by coils of wire (electromagnets). These are used in cathode ray tubes, found in televisions and computer monitors, and in electron microscopes. 
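The speeds involved can be estimated from energy conservation: an electron falling through a potential difference V gains kinetic energy eV = ½mv². A minimal sketch of the arithmetic, assuming a hypothetical 10 kV tube voltage (the figure is illustrative, not from the text, and the classical formula is only a fair approximation at these energies):

```python
import math

# CODATA-style physical constants (SI units)
E_CHARGE = 1.602e-19   # elementary charge, coulombs
E_MASS = 9.109e-31     # electron rest mass, kilograms

def electron_speed(volts):
    """Classical speed gained by an electron accelerated through `volts`.

    Energy conservation: e*V = (1/2)*m*v**2, so v = sqrt(2*e*V/m).
    Valid only while v << c; above ~10 kV relativistic corrections grow.
    """
    return math.sqrt(2 * E_CHARGE * volts / E_MASS)

v = electron_speed(10_000)   # hypothetical 10 kV accelerating voltage
print(f"{v:.2e} m/s")        # roughly 5.9e7 m/s, about 20% of light speed
```

The square-root dependence means the speed grows slowly with voltage: a hundredfold voltage increase only makes the electrons ten times faster.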
History After the 1654 invention of the vacuum pump by Otto von Guericke, physicists began to experiment with passing high voltage electricity through rarefied air. In 1705, it was noted that electrostatic generator sparks travel a longer distance through low pressure air than through atmospheric pressure air. Gas discharge tubes In 1838, Michael Faraday applied a high voltage between two metal electrodes at either end of a glass tube that had been partially evacuated of air, and noticed a strange light arc with its beginning at the cathode (negative electrode) and its end at the anode (positive electrode). In 1857, German physicist and glassblower Heinrich Geissler sucked even more air out with an improved pump, to a pressure of around 10⁻³ atm and found that, instead of an arc, a glow filled the tube. The voltage applied between the two electrodes of the tubes, generated by an induction coil, was anywhere between a few kilovolts and 100 kV. These were called Geissler tubes, similar to today's neon signs. The explanation of these effects was that the high voltage accelerated free electrons and electrically charged atoms (ions) naturally present in the air of the tube. At low pressure, there was enough space between the gas atoms that the electrons could accelerate to high enough speeds that when they struck an atom they knocked electrons off of it, creating more positive ions and free electrons, which went on to create more ions and electrons in a chain reaction, known as a glow discharge. The positive ions were attracted to the cathode, and when they struck it they knocked more electrons out of it, which were attracted toward the anode. Thus the ionized air was electrically conductive and an electric current flowed through the tube. Geissler tubes had enough air in them that the electrons could only travel a tiny distance before colliding with an atom.
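The chain reaction described above can be caricatured as simple doubling: each free electron that ionizes an atom liberates one more electron. The following toy model is an assumption-laden illustration (one ionizing collision per electron per generation; real discharges involve losses, recombination, and field geometry), not a description of any measured tube:

```python
def avalanche(seed_electrons, generations):
    """Idealized electron avalanche: every electron ionizes one atom per
    generation, freeing one extra electron, so the count doubles each step.
    Returns the electron count after each generation."""
    electrons = seed_electrons
    history = [electrons]
    for _ in range(generations):
        electrons *= 2          # one ionizing collision per electron
        history.append(electrons)
    return history

counts = avalanche(seed_electrons=1, generations=10)
print(counts[-1])               # a single electron becomes 1024 after 10 generations
```

The exponential growth is why a handful of stray charges naturally present in the gas suffices to sustain a visible, conducting discharge.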
The electrons in these tubes moved in a slow diffusion process, never gaining much speed, so these tubes didn't produce cathode rays. Instead, they produced a colorful glow discharge (as in a modern neon light), caused when the electrons struck gas atoms, exciting their orbital electrons to higher energy levels. The electrons released this energy as light. This process is called fluorescence.
Cathode rays By the 1870s, British physicist William Crookes and others were able to evacuate tubes to a lower pressure, below 10⁻⁶ atm. These were called Crookes tubes. Faraday had been the first to notice a dark space just in front of the cathode, where there was no luminescence. This came to be called the "cathode dark space", "Faraday dark space" or "Crookes dark space". Crookes found that as he pumped more air out of the tubes, the Faraday dark space spread down the tube from the cathode toward the anode, until the tube was totally dark. But at the anode (positive) end of the tube, the glass of the tube itself began to glow. As more air was pumped from the tube, the electrons knocked out of the cathode when positive ions struck it could travel farther, on average, before they struck a gas atom. By the time the tube was dark, most of the electrons could travel in straight lines from the cathode to the anode end of the tube without a collision. With no obstructions, these low-mass particles were accelerated to high velocities by the voltage between the electrodes. These were the cathode rays. When they reached the anode end of the tube, they were traveling so fast that, although they were attracted to it, they often flew past the anode and struck the back wall of the tube. When they struck atoms in the glass wall, they excited their orbital electrons to higher energy levels. When the electrons returned to their original energy level, they released the energy as light, causing the glass to fluoresce, usually a greenish or bluish color. Later researchers painted the inside back wall with fluorescent chemicals such as zinc sulfide, to make the glow more visible. Cathode rays themselves are invisible, but this accidental fluorescence allowed researchers to notice that objects in the tube in front of the cathode, such as the anode, cast sharp-edged shadows on the glowing back wall.
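The pressure figures above can be tied to how far an electron travels between collisions with a standard hard-sphere mean-free-path estimate. A minimal sketch, assuming an illustrative molecular diameter and room temperature (neither value is from the text):

```python
import math

# Mean free path of a particle in a gas (hard-sphere model):
#   lambda = k_B * T / (sqrt(2) * pi * d**2 * p)
# Molecule diameter and temperature below are illustrative assumptions.

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(pressure_pa, temp_k=300.0, diameter_m=3.7e-10):
    """Mean free path in metres for a hard-sphere gas."""
    return K_B * temp_k / (math.sqrt(2) * math.pi * diameter_m**2 * pressure_pa)

ATM = 101325.0  # Pa

for frac in (1.0, 1e-3, 1e-6):
    lam = mean_free_path(frac * ATM)
    print(f"{frac:>6g} atm -> mean free path ~ {lam:.3g} m")
```

Under these assumptions, the path is on the order of tens of micrometres at the Geissler-tube pressure of about 10⁻³ atm, so electrons collide constantly and produce a glow discharge, but grows to several centimetres at the Crookes-tube pressure of 10⁻⁶ atm, comparable to the tube itself, which is consistent with the free-flying cathode rays described above.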
In 1869, German physicist Johann Hittorf was first to realize that something must be traveling in straight lines from the cathode to cast the shadows. Eugen Goldstein named them cathode rays (German kathodenstrahlen). Discovery of the electron At this time, atoms were the smallest particles known, and were believed to be indivisible. What carried electric currents was a mystery. During the last quarter of the 19th century, many historic experiments were done with Crookes tubes to determine what cathode rays were. There were two theories. Crookes and Arthur Schuster believed they were particles of "radiant matter," that is, electrically charged atoms. German scientists Eilhard Wiedemann, Heinrich Hertz and Goldstein believed they were "aether waves", some new form of electromagnetic radiation, separate from what carried the electric current through the tube. The debate was resolved in 1897 when J. J. Thomson measured the mass-to-charge ratio of cathode rays, showing they were made of particles around 1800 times lighter than the lightest atom, hydrogen. Therefore, they were not atoms, but a new particle, the first subatomic particle to be discovered, which he originally called "corpuscle" but which was later named electron, after particles postulated by George Johnstone Stoney in 1874. He also showed they were identical with particles given off by photoelectric and radioactive materials. It was quickly recognized that they are the particles that carry electric currents in metal wires, and carry the negative electric charge of the atom. Thomson was awarded the 1906 Nobel Prize in Physics for this work. Philipp Lenard also contributed a great deal to cathode ray theory, winning the Nobel Prize in Physics in 1905 for his research on cathode rays and their properties.
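The "around 1800 times lighter" figure can be sanity-checked against modern reference values for the particle masses (the constants below are modern figures assumed for this note, not numbers from the text):

```python
# Modern values (CODATA-style, assumed here for illustration):
ELECTRON_MASS_KG = 9.1093837e-31
HYDROGEN_MASS_KG = 1.6735575e-27  # mass of a hydrogen atom

ratio = HYDROGEN_MASS_KG / ELECTRON_MASS_KG
print(f"hydrogen / electron mass ratio ~ {ratio:.0f}")  # roughly 1800
```

The modern ratio of about 1837 agrees with Thomson's order-of-magnitude result.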
For example, reversing the current direction in a Daniell galvanic cell (a zinc–copper cell) converts it into an electrolytic cell where the copper electrode is the positive terminal and also the anode. In a diode, the cathode is the negative terminal at the pointed end of the arrow symbol, where current flows out of the device. Note: electrode naming for diodes is always based on the direction of the forward current (that of the arrow, in which the current flows "most easily"), even for types such as Zener diodes or solar cells where the current of interest is the reverse current. In vacuum tubes (including cathode ray tubes) it is the negative terminal where electrons enter the device from the external circuit and proceed into the tube's near-vacuum, constituting a positive current flowing out of the device. Etymology The word was coined in 1834 from the Greek κάθοδος (kathodos), 'descent' or 'way down', by William Whewell, who had been consulted by Michael Faraday over some new names needed to complete a paper on the recently discovered process of electrolysis. In that paper Faraday explained that when an electrolytic cell is oriented so that electric current traverses the "decomposing body" (electrolyte) in a direction "from East to West, or, which will strengthen this help to the memory, that in which the sun appears to move", the cathode is where the current leaves the electrolyte, on the West side: "kata downwards, odos a way; the way which the sun sets". The use of 'West' to mean the 'out' direction (actually 'out' → 'West' → 'sunset' → 'down', i.e. 'out of view') may appear unnecessarily contrived. Previously, as related in the first reference cited above, Faraday had used the more straightforward term "exode" (the doorway where the current exits). His motivation for changing it to something meaning 'the West electrode' (other candidates had been "westode", "occiode" and "dysiode") was to make it immune to a possible later change in the direction convention for current, whose exact nature was not known at the time.
The reference he used to this effect was the Earth's magnetic field direction, which at that time was believed to be invariant. He fundamentally defined his arbitrary orientation for the cell as being that in which the internal current would run parallel to and in the same direction as a hypothetical magnetizing current loop around the local line of latitude which would induce a magnetic dipole field oriented like the Earth's. This made the internal current East to West as previously mentioned, but in the event of a later convention change it would have become West to East, so that the West electrode would not have been the 'way out' any more. Therefore, "exode" would have become inappropriate, whereas "cathode" meaning 'West electrode' would have remained correct with respect to the unchanged direction of the actual phenomenon underlying the current, then unknown but, he thought, unambiguously defined by the magnetic reference. In retrospect the name change was unfortunate, not only because the Greek roots alone do not reveal the cathode's function any more, but more importantly because, as we now know, the Earth's magnetic field direction on which the "cathode" term is based is subject to reversals whereas the current direction convention on which the "exode" term was based has no reason to change in the future. Since the later discovery of the electron, an easier to remember, and more durably technically correct (although historically false), etymology has been suggested: cathode, from the Greek kathodos, 'way down', 'the way (down) into the cell (or other device) for electrons'. In chemistry In chemistry, a cathode is the electrode of an electrochemical cell at which reduction occurs; a useful mnemonic to remember this is AnOx RedCat (Oxidation at the Anode = Reduction at the Cathode). Another mnemonic is to note the cathode has a 'c', as does 'reduction'. Hence, reduction at the cathode. 
Perhaps most useful would be to remember that cathode corresponds to cation (acceptor) and anode corresponds to anion (donor). The cathode can be negative, as when the cell is electrolytic (where electrical energy provided to the cell is being used for decomposing chemical compounds), or positive, as when the cell is galvanic (where chemical reactions are used for generating electrical energy). The cathode supplies electrons to the positively charged cations which flow to it from the electrolyte (even if the cell is galvanic, i.e., when the cathode is positive and therefore would be expected to repel the positively charged cations; this is because the electrode potential relative to the electrolyte solution is different for the anode and cathode metal/electrolyte systems in a galvanic cell). The cathodic current, in electrochemistry, is the flow of electrons from the cathode interface to a species in solution. The anodic current is the flow of electrons into the anode from a species in solution. Electrolytic cell In an electrolytic cell, the cathode is where the negative polarity is applied to drive the cell. Common results of reduction at the cathode are hydrogen gas or pure metal from metal ions. When discussing the relative reducing power of two redox agents, the couple for generating the more reducing species is said to be more "cathodic" with respect to the more easily reduced reagent. Galvanic cell In a galvanic cell, the cathode is where the positive pole is connected to allow the circuit to be completed: as the anode of the galvanic cell gives off electrons, they return from the circuit into the cell through the cathode. Electroplating metal cathode (electrolysis) When metal ions are reduced from ionic solution, they form a pure metal surface on the cathode. Items to be plated with pure metal are attached to and become part of the cathode in the electrolytic solution.
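The galvanic-cell convention can be illustrated with standard reduction potentials: of two half-couples, the one with the higher potential is reduced and therefore forms the cathode. A minimal sketch; the potentials below are textbook values for the copper and zinc couples of a Daniell cell, assumed here for illustration rather than taken from the text:

```python
# Standard reduction potentials in volts (textbook values, assumed):
STANDARD_POTENTIALS_V = {
    "Cu2+/Cu": +0.34,
    "Zn2+/Zn": -0.76,
}

def galvanic_cathode(couple_a, couple_b):
    """Return (cathode_couple, cell_emf_volts) for a galvanic cell.

    The couple with the higher reduction potential is reduced, i.e. it is
    the (positive) cathode; the other couple is oxidized at the anode.
    """
    cathode = max((couple_a, couple_b), key=STANDARD_POTENTIALS_V.get)
    anode = couple_a if cathode == couple_b else couple_b
    emf = STANDARD_POTENTIALS_V[cathode] - STANDARD_POTENTIALS_V[anode]
    return cathode, emf

cathode, emf = galvanic_cathode("Cu2+/Cu", "Zn2+/Zn")
print(cathode, f"{emf:.2f} V")  # copper couple is the cathode, EMF ~1.10 V
```

This matches the text: in the spontaneous (galvanic) direction the copper electrode is the positive cathode, and reversing the cell electrolytically makes it the anode instead.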
In electronics Vacuum tubes In a vacuum tube or electronic vacuum system, the cathode is a metal surface which emits free electrons into the evacuated space. Since the electrons are attracted to the positive nuclei of the metal atoms, they normally stay inside the metal and require energy to leave it; this is called the work function of the metal. Cathodes are induced to emit electrons by several mechanisms: Thermionic emission: The cathode can be heated. The increased thermal motion of the metal atoms "knocks" electrons out of the surface, an effect called thermionic emission. This technique is used in most vacuum tubes. Field electron emission: A strong electric field can be applied to the surface by placing an electrode with a high positive voltage near the cathode. The positively charged electrode attracts the electrons, causing some electrons to leave the cathode's surface.
This process is used in cold cathodes in some electron microscopes and in microelectronics fabrication. Secondary emission: An electron, atom or molecule colliding with the surface of the cathode with enough energy can knock electrons out of the surface. These electrons are called secondary electrons. This mechanism is used in gas-discharge lamps such as neon lamps. Photoelectric emission: Electrons can also be emitted from the electrodes of certain metals when light of frequency greater than the threshold frequency falls on them. This effect is called photoelectric emission, and the electrons produced are called photoelectrons. This effect is used in phototubes and image intensifier tubes. Cathodes can be divided into two types: Hot cathode A hot cathode is a cathode that is heated by a filament to produce electrons by thermionic emission. The filament is a thin wire of a refractory metal like tungsten heated red-hot by an electric current passing through it. Before the advent of transistors in the 1960s, virtually all electronic equipment used hot-cathode vacuum tubes. Today hot cathodes are used in vacuum tubes in radio transmitters and microwave ovens, to produce the electron beams in older cathode ray tube (CRT) type televisions and computer monitors, in X-ray generators, electron microscopes, and fluorescent tubes. There are two types of hot cathodes: Directly heated cathode: In this type, the filament itself is the cathode and emits the electrons directly. Directly heated cathodes were used in the first vacuum tubes, but today they are only used in fluorescent tubes, some large transmitting vacuum tubes, and all X-ray tubes. Indirectly heated cathode: In this type, the filament is not the cathode but rather heats the cathode which then emits electrons. Indirectly heated cathodes are used in most devices today.
For example, in most vacuum tubes the cathode is a nickel tube with the filament inside it, and the heat from the filament causes the outside surface of the tube to emit electrons. The filament of an indirectly heated cathode is usually called the heater. The main reason for using an indirectly heated cathode is to isolate the rest of the vacuum tube from the electric potential across the filament. Many vacuum tubes use alternating current to heat the filament. In a tube in which the filament itself was the cathode, the alternating electric field from the filament surface would affect the movement of the electrons and introduce hum into the tube's output.
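The thermionic emission described above is commonly modeled by the Richardson–Dushman equation, which relates the emitted current density to the cathode temperature and the work function. A sketch, assuming typical tungsten-like constants (an assumption of this note, not values from the text):

```python
import math

# Richardson–Dushman equation for thermionic emission current density:
#   J = A * T**2 * exp(-W / (k_B * T))
# The emission constant A and work function W are tungsten-like values
# assumed for illustration.

K_B_EV = 8.617333e-5    # Boltzmann constant, eV/K
A_AMP_CM2_K2 = 60.0     # emission constant, A / (cm^2 K^2)
WORK_FUNCTION_EV = 4.5  # approximate work function of tungsten, eV

def emission_current_density(temp_k):
    """Thermionic emission current density in A/cm^2."""
    return A_AMP_CM2_K2 * temp_k**2 * math.exp(-WORK_FUNCTION_EV / (K_B_EV * temp_k))

for t in (2000.0, 2500.0):
    print(f"T = {t:.0f} K -> J ~ {emission_current_density(t):.3g} A/cm^2")
```

The exponential dependence on temperature is why hot cathodes must be run red- or white-hot: under these assumptions, raising the filament from 2000 K to 2500 K increases the emitted current density by more than two orders of magnitude.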
Chrominance (chroma or C for short) is the signal used in video systems to convey the color information of the picture, separately from the accompanying luma signal (or Y′ for short). Chrominance is usually represented as two color-difference components: U = B′ − Y′ (blue − luma) and V = R′ − Y′ (red − luma). Each of these difference components may have scale factors and offsets applied to it, as specified by the applicable video standard. Television standards In analog television, chrominance is encoded into a video signal using a subcarrier frequency. Depending on the video standard, the chrominance subcarrier may be either quadrature-amplitude-modulated (NTSC and PAL) or frequency-modulated (SECAM). In the PAL system, the color subcarrier is 4.43 MHz above the video carrier, while in the NTSC system it is 3.58 MHz above the video carrier. The NTSC and PAL standards are the most commonly used, although there are other video standards that employ different subcarrier frequencies.
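The colour-difference definitions U = B′ − Y′ and V = R′ − Y′ can be sketched directly. The luma weights below are the BT.601 coefficients (an assumption; other standards use different weights), and the per-standard scale factors and offsets mentioned above are omitted:

```python
# Form luma and the two colour-difference components from gamma-corrected
# R'G'B' values in [0, 1]. Luma weights are BT.601 coefficients (assumed).

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma Y'
    u = b - y                              # U = B' - Y'
    v = r - y                              # V = R' - Y'
    return y, u, v

print(rgb_to_yuv(0.5, 0.5, 0.5))  # neutral grey: U and V are ~0
```

A neutral grey has no colour information, so both difference components vanish; only non-grey colours produce nonzero chrominance.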
In composite video signals, the U and V signals modulate a color subcarrier signal, and the result is referred to as the chrominance signal; the phase and amplitude of this modulated chrominance signal correspond approximately to the hue and saturation of the color. In digital-video and still-image color spaces such as Y′CbCr, the luma and chrominance components are digital sample values. Separating RGB color signals into luma and chrominance allows the bandwidth of each to be determined separately. Typically, the chrominance bandwidth is reduced in analog composite video by reducing the bandwidth of a modulated color subcarrier, and in digital systems by chroma subsampling. History The idea of transmitting a color television signal with distinct luma and chrominance components originated with Georges Valensi, who patented the idea in 1938. Valensi's patent application described: The use of two channels, one transmitting the predominating color (signal T), and the other the mean brilliance (signal t) output from a single television transmitter to be received not only by color television receivers provided with the necessary more expensive equipment, but also by the ordinary type of television receiver which is more numerous and less expensive and which reproduces the pictures in black and white only. Previous schemes for color television systems, which were incompatible with existing monochrome receivers, transmitted RGB signals in various ways.
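The statement that the phase and amplitude of the modulated subcarrier carry hue and saturation follows from a trigonometric identity: modulating U and V in quadrature is the same as sending a single sinusoid whose amplitude is the chroma magnitude and whose phase encodes the hue angle. A sketch under that textbook identity (not a full receiver model):

```python
import math

# Quadrature modulation of U and V onto a colour subcarrier, as in NTSC/PAL:
#   C(t) = U * sin(wt) + V * cos(wt) = amp * sin(wt + phase)
# where amp = hypot(U, V) (saturation) and phase = atan2(V, U) (hue angle).

def chroma_sample(u, v, wt):
    """One sample of the modulated chrominance at subcarrier phase wt."""
    return u * math.sin(wt) + v * math.cos(wt)

def amplitude_phase(u, v):
    """Recover (saturation-like amplitude, hue-like phase) from (U, V)."""
    return math.hypot(u, v), math.atan2(v, u)

u, v = 0.3, -0.2
amp, phase = amplitude_phase(u, v)
wt = 1.234
# The two forms agree sample for sample:
assert abs(chroma_sample(u, v, wt) - amp * math.sin(wt + phase)) < 1e-12
print(f"saturation ~ {amp:.3f}, hue angle ~ {math.degrees(phase):.1f} deg")
```

This is why a vectorscope can display hue as an angle and saturation as a radius when fed a composite chroma signal.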
media Chirality (mathematics), the property of a figure not being identical to its mirror image Chirality (physics), when a phenomenon is not identical to its mirror image Chirality (journal), an academic journal dealing with chiral chemistry Chirality (manga), a 4-volume yuri manga series written and illustrated by author Satoshi Urushihara Chirality (album), a 2014 solo piano
A college campus includes libraries, lecture halls, residence halls, student centers or dining halls, and park-like settings. A modern campus is a collection of buildings and grounds that belong to a given institution, either academic or non-academic. Examples include the Googleplex and the Apple Campus. Etymology The word derives from a Latin word for "field" and was first used to describe the large field adjacent to Nassau Hall of the College of New Jersey (now Princeton University) in 1774. The field separated Princeton from the small nearby town. Some other American colleges later adopted the word to describe individual fields at their own institutions, but "campus" did not yet describe the whole university property. A school might have one space called a campus, another called a field, and still another called a yard. History The tradition of a campus began with the medieval European universities where the students and teachers lived and worked together in a cloistered environment. The notion of the importance of the setting to academic life later migrated to America, and early colonial educational institutions were based on the Scottish and English collegiate system. The campus evolved from the cloistered model in Europe to a diverse set of independent styles in the United States. Early colonial colleges were all built in proprietary styles, with some contained in single buildings, such as the campus of Princeton University, or arranged in a version of the cloister reflecting American values, such as Harvard's. Both the campus designs
the Cham by a Chinese in 1171. The Khmer also had double-bow crossbows mounted on elephants, which Michel Jacq-Hergoualc’h suggests were elements of Cham mercenaries in Jayavarman VII's army. Ancient Greece The earliest crossbow-like weapons in Europe probably emerged around the late 5th century BC when the gastraphetes, an ancient Greek crossbow, appeared. The device was described by the Greek author Heron of Alexandria in his Belopoeica ("On Catapult-making"), which draws on an earlier account by his compatriot, the engineer Ctesibius (fl. 285–222 BC). According to Heron, the gastraphetes was the forerunner of the later catapult, which places its invention some unknown time prior to 399 BC. The gastraphetes was a crossbow mounted on a stock divided into a lower and an upper section. The lower was a case fixed to the bow, while the upper was a slider with the same dimensions as the case. The name means "belly-bow": the concave withdrawal rest at one end of the stock was placed against the operator's stomach, which he pressed to draw back the slider before attaching a string to the trigger and loading the bolt; the weapon could thus store more energy than regular Greek bows. It was used in 397 BC at the Siege of Motya, a key Carthaginian stronghold in Sicily. Other arrow-shooting machines, such as the larger ballista and the smaller scorpio, existed from around 338 BC, but these were torsion catapults and are not considered crossbows. Arrow-shooting machines (katapeltai) are briefly mentioned by Aeneas Tacticus in his treatise on siegecraft written around 350 BC. An Athenian inventory from 330–329 BC includes catapult bolts with heads and flights. Arrow-shooting machines in action are reported from Philip II's siege of Perinthos in Thrace in 340 BC.
At the same time, Greek fortifications began to feature high towers with shuttered windows in the top, presumably to house anti-personnel arrow shooters, as in Aigosthena. Ancient Rome The late 4th-century author Vegetius provides the only contemporary account of ancient Roman crossbows. In his De Re Militaris, he describes arcubalistarii (crossbowmen) working together with archers and artillerymen. However, it is disputed whether arcuballistas were crossbows or torsion-powered weapons. The idea that the arcuballista was a crossbow rests on the fact that Vegetius refers to it and the manuballista, which was torsion-powered, separately. Therefore, if the arcuballista was not like the manuballista, it may have been a crossbow. The etymology is not clear, and the definitions of both terms are obscure. According to Vegetius, these were well-known devices, and hence he did not describe them in depth. Arrian's earlier Ars Tactica, written around 136 AD, does mention 'missiles shot not from a bow but from a machine' and that this machine was used on horseback while in full gallop. It is presumed that this was a crossbow. The only pictorial evidence of Roman arcuballistas comes from sculptural reliefs in Roman Gaul depicting them in hunting scenes. These are aesthetically similar to both the Greek and Chinese crossbows, but it is not clear what kind of release mechanism they used; archaeological evidence suggests they may have been based on the rolling nut mechanism later seen in medieval Europe. Medieval Europe References to the crossbow are essentially absent in Europe from the 5th century until the 10th century. There is, however, a depiction of a crossbow as a hunting weapon on four Pictish stones from early medieval Scotland (6th to 9th centuries): St. Vigeans no. 1, Glenferness, Shandwick, and Meigle. The crossbow reappeared in 947 as a French weapon during the siege of Senlis and again in 984 at the siege of Verdun.
Crossbows were used at the battle of Hastings in 1066, and by the 12th century they had become common battlefield weapons. The earliest extant European crossbow remains to date were found at Lake Paladru and have been dated to the 11th century. The crossbow superseded hand bows in many European armies during the 12th century, except in England, where the longbow was more popular. Later crossbows (sometimes referred to as arbalests), utilizing all-steel prods, were able to achieve power close to (and sometimes superior to) that of longbows, but were more expensive to produce and slower to reload because they required the aid of mechanical devices such as the cranequin or windlass to draw back their extremely heavy bows. Usually these could only shoot two bolts per minute versus twelve or more by a skilled archer, often necessitating the use of a pavise to protect the operator from enemy fire. Along with polearm weapons made from farming equipment, the crossbow was also a weapon of choice for insurgent peasants such as the Taborites. Genoese crossbowmen were famous mercenaries hired throughout medieval Europe, and the crossbow also played an important role in anti-personnel defense of ships. Crossbows were eventually replaced in warfare by gunpowder weapons. Early hand cannons had slower rates of fire and much worse accuracy than contemporary crossbows, but the arquebus (which proliferated in the mid to late 15th century) matched their rate of fire while being far more powerful. The Battle of Cerignola in 1503 was largely won by Spain through the use of matchlock arquebuses, marking the first time a major battle was won through the use of hand-held firearms. Later, similar competing tactics would feature harquebusiers or musketeers in formation with pikemen, pitted against cavalry firing pistols or carbines.
While the military crossbow had largely been supplanted by firearms on the battlefield by 1525, the sporting crossbow in various forms remained a popular hunting weapon in Europe until the eighteenth century. Crossbows saw irregular use throughout the rest of the 16th century; for example, Maria Pita's husband was killed by a crossbowman of the English Armada in 1589. Islamic world There are no references to crossbows in Islamic texts earlier than the 14th century. Arabs in general were averse to the crossbow and considered it a foreign weapon. They called it qaus al-rijl (foot-drawn bow), qaus al-zanbūrak (bolt bow) and qaus al-faranjīyah (Frankish bow). Although Muslims did have crossbows, there seems to have been a split between eastern and western types. Muslims in Spain used the typical European trigger, while eastern Muslim crossbows had a more complex trigger mechanism. Mamluk cavalry used crossbows. Elsewhere In Western Africa and Central Africa, crossbows served as a scouting weapon and for hunting, with African slaves bringing this technology to natives in America. In the US South, the crossbow was used for hunting and warfare when firearms or gunpowder were unavailable because of economic hardship or isolation. In the north of North America, light hunting crossbows were traditionally used by the Inuit. These are technologically similar to the African-derived crossbows, but have a different route of influence. Spanish conquistadors continued to use crossbows in the Americas long after they were replaced in European battlefields by firearms. Only in the 1570s did firearms become completely dominant among the Spanish in the Americas. The French and the British used a Sauterelle (French for grasshopper) in World War I. It was lighter and more portable than the Leach Trench Catapult, but less powerful. It weighed and could throw an F1 grenade or Mills bomb.
The Sauterelle replaced the Leach Catapult in British service and was in turn replaced in 1916 by the 2-inch Medium Trench Mortar and the Stokes mortar. Modern use Hunting, leisure and science Crossbows are used for shooting sports and bowhunting in modern archery, and for taking blubber biopsy samples in scientific research. In some countries, such as Canada or the United Kingdom, they may be less heavily regulated than firearms, and thus more popular for hunting; some jurisdictions have bow and/or crossbow only seasons. Modern military and paramilitary use In modern times, crossbows are no longer used for war, but there are still some applications. For example, in the Americas, the Peruvian army (Ejército) equips some soldiers with crossbows and rope, to establish a zip-line in difficult terrain. In Brazil the CIGS (Jungle Warfare Training Center) also trains soldiers in the use of crossbows. In the United States, SAA International Ltd manufactures a crossbow-launched version of the U.S. Army type-classified Launched Grapnel Hook (LGH), among other mine countermeasure solutions designed for the Middle Eastern theatre. It has been successfully evaluated in Cambodia and Bosnia. It is used to probe for and detonate tripwire-initiated mines and booby traps at up to . The concept is similar to the LGH device originally only fired from a rifle, as a plastic retrieval line is attached. Reusable up to 20 times, the line can be reeled back in without exposing oneself. The device is of particular use in tactical situations where noise discipline is important. In Europe, Barnett International sold crossbows to Serbian forces which, according to The Guardian, were later used "in ambushes and as a counter-sniper weapon" against the Kosovo Liberation Army during the Kosovo War in the areas of Pec and Djakovica, southwest of Kosovo.
Whitehall launched an investigation, though the Department of Trade and Industry established that, not being "on the military list", crossbows were not covered by such export regulations. Paul Beaver of Jane's Defence Publications commented that, "They are not only a silent killer, they also have a psychological effect". On 15 February 2008, Serbian Minister of Defence Dragan Sutanovac was pictured testing a Barnett crossbow during a public exercise of the Serbian Army's Special Forces in Nis, south of the capital Belgrade. Special forces in both Greece and Turkey also continue to employ the crossbow. Spain's Green Berets still use the crossbow as well. In Asia, some Chinese armed forces use crossbows, including the special force Snow Leopard Commando Unit of the People's Armed Police and the People's Liberation Army. One justification for this comes in the crossbow's ability to stop persons carrying explosives without risk of causing detonation. During the Xinjiang riots of July 2009, crossbows were used alongside modern military hardware to quell protests. The Indian Navy's Marine Commando Force were equipped until the late 1980s with crossbows supplied with cyanide-tipped bolts, as an alternative to suppressed handguns. Comparison to conventional bows With a crossbow, archers could release a draw force far in excess of what they could have handled with a bow. Furthermore, the crossbow could hold the tension | an arbalist (after the arbalest, a European crossbow variant used during the 12th century).
Although crossbows and bows use the same launch principle, the difference is that an archer must maintain a bow's draw manually by pinching the bowstring with the fingers, pulling it back with arm and back muscles and then holding that same form in order to aim (which strains the body and demands significant physical strength and stamina); while a crossbow utilizes a locking mechanism to maintain the draw, limiting the shooter's exertion to only pulling the string into lock and then releasing the shot by depressing a lever/trigger. This not only enables a crossbowman to handle a stronger draw weight, but also to hold it for longer with significantly less physical strain, thus potentially achieving better precision. Historically, crossbows played a significant role in the warfare of East Asia and Europe. The earliest known crossbows were invented in the first millennium BC, not later than the 7th century BC in ancient China, and not later than the 1st century AD in Greece (as the gastraphetes). Crossbows brought about a major shift in the role of projectile weaponry in wars, such as during Qin's unification wars and later the Han campaigns against northern nomads and western states. The medieval European crossbow was called by many names, including "crossbow" itself; most of these names derived from the word ballista, an ancient Greek torsion siege engine similar in appearance but different in design principle. The traditional bow and arrow had long been a specialized weapon that required considerable training, physical strength, and expertise to operate with any degree of practical efficiency. Many cultures treated archers as a separate and superior warrior caste, even though they were usually drawn from the common class, as their archery skill-set was essentially trained and strengthened from early childhood (similar to many cavalry-oriented cultures) and was impossible to reproduce outside a pre-established cultural tradition, which many cultures lacked.
In contrast, the crossbow was the first ranged weapon to be simple, cheap and physically undemanding enough to be operated by large numbers of untrained conscript soldiers, thus enabling virtually any military body to field a potent force of crossbowmen with little expense beyond the cost of the weapons themselves. In modern times, firearms have largely supplanted bows and crossbows as weapons of warfare. However, crossbows still remain widely used for competitive shooting sports and hunting, or for relatively silent shooting. It is possible to turn at least some store-bought bows into a crossbow; this is done by marrying a stock-and-trigger system to the bow. Terminology A crossbowman or crossbow-maker is sometimes called an arbalista, arbalist or arbalest. The latter two are also used to refer to the crossbow itself. Arrow, bolt and quarrel are all suitable terms for crossbow projectiles. The lath, also called the prod, is the bow of the crossbow. According to W.F. Peterson, the term prod came into usage in the 19th century as a result of mistranslating rodd in a 16th-century list of crossbow effects. The stock is the wooden body on which the bow is mounted, although the medieval term tiller is also used. The lock refers to the release mechanism, including the string, sears, trigger lever, and housing. Construction A crossbow is essentially a bow mounted on an elongated frame (called a tiller or stock) with a built-in mechanism that holds the drawn bow string, as well as a trigger mechanism that allows the string to be released. Chinese vertical trigger lock The Chinese trigger was a complex mechanism typically composed of three cast bronze pieces housed inside a hollow bronze enclosure. The entire mechanism was then dropped into a carved slot within the tiller and secured together by two bronze rods.
The string catch (nut) is shaped like a "J" because it usually has a tall erect rear spine that protrudes above the housing, which serves the function of both a cocking lever (by pushing the drawn string onto it) and a primitive rear sight. It is held stationary against tension by the second piece, which is shaped like a flattened "C" and acts as the sear. The sear cannot move as it is trapped by the third piece, i.e. the actual trigger blade, which hangs vertically below the enclosure and catches the sear via a notch. The two bearing surfaces between the three trigger pieces each offer a mechanical advantage, which allows for handling significant draw weights with a much smaller pull weight. During shooting, the user holds the crossbow at eye level by a vertical handle and aims along the arrow using the sighting spine for elevation, similar to how a modern rifleman shoots with iron sights. When the trigger blade is pulled, its notch disengages from the sear and allows the latter to drop downwards, which in turn frees up the nut to pivot forward and release the bowstring. European rolling nut lock The earliest European designs featured a transverse slot in the top surface of the frame, down into which the string was placed. To shoot this design, a vertical rod is thrust up through a hole in the bottom of the notch, forcing the string out. This rod is usually attached perpendicular to a rear-facing lever called a tickler. A later design implemented a rolling cylindrical pawl called a nut to retain the string. This nut has a perpendicular centre slot for the bolt, and an intersecting axial slot for the string, along with a lower face or slot against which the internal trigger sits. They often also have some form of strengthening internal sear or trigger face, usually of metal. These roller nuts were either free-floating in their close-fitting hole across the stock, tied in with a binding of sinew or other strong cording, or mounted on a metal axle or pins.
Removable or integral plates of wood, ivory, or metal on the sides of the stock kept the nut in place laterally. Nuts were made of antler, bone, or metal. Bows could be kept taut and ready to shoot for some time with little physical strain, allowing crossbowmen to aim better without fatiguing. Bow Chinese crossbow bows were made of composite material from the start. European crossbows from the 10th to 12th centuries used wood for the bow, also called the prod or lath, which tended to be ash or yew. Composite bows started appearing in Europe during the 13th century and could be made from layers of different material, often wood, horn, and sinew glued together and bound with animal tendon. These composite bows made of several layers are much stronger and more efficient in releasing energy than simple wooden bows. As steel became more widely available in Europe around the 14th century, steel prods came into use. Traditionally, the prod was often lashed to the stock with rope, whipcord, or other strong cording. This cording is called the bridle. Spanning mechanism The Chinese used winches for large crossbows mounted on fortifications or wagons, known as "bedded crossbows" (床弩). Winches may have been used for handheld crossbows during the Han dynasty (202 BC–9 AD, 25–220 AD), but there is only one known depiction of it. The 11th-century Chinese military text Wujing Zongyao mentions types of crossbows using winch mechanisms, but it is not known if these were actually handheld crossbows or mounted crossbows. Another drawing method involved the shooters sitting on the ground and using the combined strength of leg, waist, back and arm muscles to help span much heavier crossbows, which were aptly called "waist-spanned crossbows" (腰張弩). During the Medieval period, both Chinese and European crossbows used stirrups as well as belt hooks. In the 13th century, European crossbows started using winches, and from the |
ratios for the development of SJS or TEN in people who carry the allele can be in the double, triple or even quadruple digits, depending on the population studied. HLA-B*1502 occurs almost exclusively in people with ancestry across broad areas of Asia, but has a very low or absent frequency in European, Japanese, Korean and African populations. However, the HLA-A*31:01 allele has been shown to be a strong predictor of both mild and severe adverse reactions, such as the DRESS form of severe cutaneous reactions, to carbamazepine among Japanese, Chinese, Korean, and Europeans. It is suggested that carbamazepine acts as a potent antigen that binds to the antigen-presenting region of HLA-B*1502, triggering a lasting activation signal in immature CD8+ T cells, thus resulting in widespread cytotoxic reactions like SJS/TEN. Interactions Carbamazepine has a potential for drug interactions. Drugs that decrease the breakdown of carbamazepine or otherwise increase its levels include erythromycin, cimetidine, propoxyphene, and calcium channel blockers. Grapefruit juice raises the bioavailability of carbamazepine by inhibiting the enzyme CYP3A4 in the gut wall and in the liver. Lower levels of carbamazepine are seen when it is administered with phenobarbital, phenytoin, or primidone, which can result in breakthrough seizure activity. Valproic acid and valnoctamide both inhibit microsomal epoxide hydrolase (mEH), the enzyme responsible for the breakdown of the active metabolite carbamazepine-10,11-epoxide into inactive metabolites. By inhibiting mEH, valproic acid and valnoctamide cause a build-up of the active metabolite, prolonging the effects of carbamazepine and delaying its excretion. Carbamazepine, as an inducer of cytochrome P450 enzymes, may increase clearance of many drugs, decreasing their concentration in the blood to subtherapeutic levels and reducing their desired effects.
Drugs that are more rapidly metabolized with carbamazepine include warfarin, lamotrigine, phenytoin, theophylline, valproic acid, many benzodiazepines, and methadone. Carbamazepine also increases the metabolism of the hormones in birth control pills and can reduce their effectiveness, potentially leading to unexpected pregnancies. Pharmacology Mechanism of action Carbamazepine is a sodium channel blocker. It binds preferentially to voltage-gated sodium channels in their inactive conformation, which prevents repetitive and sustained firing of an action potential. Carbamazepine has effects on serotonin systems, but the relevance to its antiseizure effects is uncertain. There is evidence that it is a serotonin releasing agent and possibly even a serotonin reuptake inhibitor. Pharmacokinetics Carbamazepine is relatively slowly but practically completely absorbed after administration by mouth. Highest concentrations in the blood plasma are reached after 4 to 24 hours depending on the dosage form. Slow-release tablets result in about 15% lower absorption and 25% lower peak plasma concentrations than ordinary tablets, as well as in less fluctuation of the concentration, but not in significantly lower minimum concentrations. 20 to 30% of the substance circulates in the form of carbamazepine itself; the rest are metabolites. 70 to 80% is bound to plasma proteins. Concentrations in the breast milk are 25 | improves remission) when compared to phenytoin and valproate, the choice of medications should be considered for each person individually, as further research is needed to determine which medication is most helpful for people with new-onset seizures. In the United States, the FDA-approved medical uses are epilepsy (including partial seizures, generalized tonic-clonic seizures and mixed seizures), trigeminal neuralgia, and manic and mixed episodes of bipolar I disorder. Carbamazepine is the only FDA-approved drug for the treatment of trigeminal neuralgia.
The drug is also claimed to be effective for ADHD. As of 2014, a controlled-release formulation was available, for which there is tentative evidence showing fewer side effects and unclear evidence with regard to whether there is a difference in efficacy. Adverse effects In the US, the label for carbamazepine contains warnings concerning: effects on the body's production of red blood cells, white blood cells, and platelets (rarely, major effects of aplastic anemia and agranulocytosis are reported, and more commonly there are minor changes such as decreased white blood cell or platelet counts, but these do not progress to more serious problems); increased risks of suicide; increased risks of hyponatremia and SIADH; risk of seizures if the person stops taking the drug abruptly; and risks to the fetus in women who are pregnant, specifically congenital malformations like spina bifida, and developmental disorders. Common adverse effects may include drowsiness, dizziness, headaches and migraines, motor coordination impairment, nausea, vomiting, and/or constipation. Alcohol use while taking carbamazepine may lead to enhanced depression of the central nervous system. Less common side effects may include increased risk of seizures in people with mixed seizure disorders, abnormal heart rhythms, and blurry or double vision. Also, rare case reports of an auditory side effect have been made, whereby patients perceive sounds about a semitone lower than previously; this unusual side effect is not noticed by most people and disappears after the person stops taking carbamazepine. Pharmacogenetics Serious skin reactions such as Stevens–Johnson syndrome (SJS) or toxic epidermal necrolysis (TEN) due to carbamazepine therapy are more common in people with a particular human leukocyte antigen gene variant (allele), HLA-B*1502.
Odds ratios for the development of SJS or TEN in people who carry the allele can be in the double, triple or even quadruple digits, depending on the population studied. HLA-B*1502 occurs almost exclusively in people with ancestry across broad areas of Asia, but has a very low or absent frequency in European, Japanese, Korean and African populations. However, the HLA-A*31:01 allele has been shown to be a strong predictor of both mild and severe adverse reactions, such as the DRESS form of severe cutaneous reactions, to carbamazepine among Japanese, Chinese, Korean, and Europeans. It is suggested that carbamazepine acts as a potent antigen that binds to the antigen-presenting region of HLA-B*1502, triggering a lasting activation signal in immature CD8+ T cells, thus resulting in widespread cytotoxic reactions like SJS/TEN. Interactions Carbamazepine has a potential for drug interactions. Drugs that decrease the breakdown of carbamazepine or otherwise increase its levels include erythromycin, cimetidine, propoxyphene, and calcium channel blockers. Grapefruit juice raises the bioavailability of carbamazepine by inhibiting the enzyme CYP3A4 in the gut wall and in the liver. Lower levels of carbamazepine are seen when it is administered with phenobarbital, phenytoin, or primidone, which can result in breakthrough seizure activity. Valproic acid and valnoctamide both inhibit microsomal epoxide hydrolase (mEH), the enzyme responsible for the breakdown of the active metabolite carbamazepine-10,11-epoxide into inactive metabolites. By inhibiting mEH, valproic acid |
four-letter abbreviation that may stand for: California Coalition for Immigration Reform, a California political advocacy group for immigration reduction Campaign for Comprehensive Immigration Reform, a Washington, DC organization for immigrant rights Canadian Centre for Investigative Reporting, produces thoroughly researched reporting in the public interest Centre for Counseling Innovation and Research (CCIR), at | reporting in the public interest Centre for Counseling Innovation and Research (CCIR), at Kish Island, Tehran, Mashhad Comité Consultatif International pour la Radio, a forerunner of the ITU-R CCIR 601, the former name of a broadcasting standard promulgated by the CCIR CCIR-tones, a selective calling system used in some radio communications systems in some |
beliefs. The reference to "co-essential with the Father" was directed at Arianism; "co-essential with us" is directed at Apollinarianism; "Two Natures unconfusedly, unchangeably" refutes Eutychianism; and "indivisibly, inseparably" and "Theotokos" are against Nestorianism. Oriental Orthodox dissent The Chalcedonian Definition was written amid controversy between the Western and Eastern churches over the meaning of the Incarnation (see Christology). The Western church readily accepted the creed, but some Eastern churches did not. Political disturbances prevented the Armenian bishops from attending. Even though Chalcedon reaffirmed the Third Council's condemnation of Nestorius, the Non-Chalcedonians always suspected that the Chalcedonian Definition tended towards Nestorianism. This was in part because of the restoration of a number of bishops deposed at the Second Council of Ephesus, bishops who had previously indicated what appeared to be support of Nestorian positions. The Coptic Church of Alexandria dissented, holding to Cyril of Alexandria's preferred formula for the oneness of Christ's nature in the incarnation of God the Word as "out of two natures". Cyril's language is not consistent and he may have countenanced the view that it is possible to contemplate in theory | Non-Chalcedonian. Context The Council of Chalcedon was summoned to consider the Christological question in light of the "one-nature" view of Christ proposed by Eutyches, archimandrite at Constantinople, which prevailed at the Second Council of Ephesus in 449, sometimes referred to as the "Robber Synod". The Council first solemnly ratified the Nicene Creed adopted in 325 and that creed as amended by the First Council of Constantinople in 381. It also confirmed the authority of two synodical letters of Cyril of Alexandria and the letter of Pope Leo I to Flavian of Constantinople. 
Content The full text of the definition reaffirms the decisions of the Council of Ephesus, the pre-eminence of the Creed of Nicaea (325) and the further definitions of the Council of Constantinople (381). In one of the translations into English, the key section, emphasizing the double nature of Christ (human and divine), runs: The Definition implicitly addressed a number of popular heretical beliefs. The reference to "co-essential with the Father" was directed at Arianism; "co-essential with us" is directed at Apollinarianism; "Two Natures unconfusedly, unchangeably" refutes Eutychianism; and "indivisibly, inseparably" and "Theotokos" are against Nestorianism. Oriental Orthodox dissent |
to our understanding of the physical world, in that they describe which processes can or cannot occur in nature. For example, the conservation law of energy states that the total quantity of energy in an isolated system does not change, though it may change form. In general, the total quantity of the property governed by that law remains unchanged during physical processes. With respect to classical physics, conservation laws include conservation of energy, mass (or matter), linear momentum, angular momentum, and electric charge. With respect to particle physics, particles cannot be created or destroyed except in pairs, where one is ordinary and the other is an antiparticle. With respect to symmetries and invariance principles, three special conservation laws have been described, associated with inversion or reversal of space, time, and charge. Conservation laws are considered to be fundamental laws of nature, with broad application in physics, as well as in other fields such as chemistry, biology, geology, and engineering. Most conservation laws are exact, or absolute, in the sense that they apply to all possible processes. Some conservation laws are partial, in that they hold for some processes but not for others. One particularly important result concerning conservation laws is Noether's theorem, which states that there is a one-to-one correspondence between each one of them and a differentiable symmetry of nature. For example, the conservation of energy follows from the time-invariance of physical systems, and the conservation of angular momentum arises from the fact that physical systems behave the same regardless of how they are oriented in space. Exact laws A partial listing of physical conservation equations due to symmetry that are said to be exact laws, or more precisely have never been proven to be violated: Approximate laws There are also approximate conservation laws.
These are approximately true in particular situations, such as low speeds, short time scales, or certain interactions. Conservation of mechanical energy; conservation of rest mass; conservation of baryon number (see chiral anomaly and sphaleron); conservation of lepton number (in the Standard Model); conservation of flavor (violated by the weak interaction); conservation of parity (violated by the weak interaction); invariance under charge conjugation; invariance under time reversal; and CP symmetry, the combination of charge conjugation and parity (equivalent to time reversal if CPT holds). Global and local conservation laws The total amount of some conserved quantity in the universe could remain unchanged if an equal amount were to appear at one point A and simultaneously disappear from another separate point B. For example, an amount of energy could appear on Earth without changing the total amount in the Universe if the same amount of energy were to disappear from some other region of the Universe. This weak form of "global" conservation is really not a conservation law because it is not Lorentz invariant, so phenomena like the above do not occur in nature. Due to special relativity, if the appearance of the energy at A and disappearance of the energy at B are simultaneous in one inertial reference frame, they will | have been described, associated with inversion or reversal of space, time, and charge. Conservation laws are considered to be fundamental laws of nature, with broad application in physics, as well as in other fields such as chemistry, biology, geology, and engineering. Most conservation laws are exact, or absolute, in the sense that they apply to all possible processes. Some conservation laws are partial, in that they hold for some processes but not for others. One particularly important result concerning conservation laws is Noether's theorem, which states that there is a one-to-one correspondence between each one of them and a differentiable symmetry of nature.
For example, the conservation of energy follows from the time-invariance of physical systems, and the conservation of angular momentum arises from the fact that physical systems behave the same regardless of how they are oriented in space. Exact laws A partial listing of physical conservation equations due to symmetry that are said to be exact laws, or more precisely have never been proven to be violated: Approximate laws There are also approximate conservation laws. These are approximately true in particular situations, such as low speeds, short time scales, or certain interactions. Conservation of mechanical energy; conservation of rest mass; conservation of baryon number (see chiral anomaly and sphaleron); conservation of lepton number (in the Standard Model); conservation of flavor (violated by the weak interaction); conservation of parity (violated by the weak interaction); invariance under charge conjugation; invariance under time reversal; and CP symmetry, the combination of charge conjugation and parity (equivalent to time reversal if CPT holds). Global and local conservation laws The total amount of some conserved quantity in the universe could remain unchanged if an equal amount were to appear at one point A and simultaneously disappear from another separate point B. For example, an amount of energy could appear on Earth without changing the total amount in the Universe if the same amount of energy were to disappear from some other region of the Universe. This weak form of "global" conservation is really not a conservation law because it is not Lorentz invariant, so phenomena like the above do not occur in nature. Due to special relativity, if the appearance of the energy at A and disappearance of the energy at B are simultaneous in one inertial reference frame, they will not be simultaneous in other inertial reference frames moving with respect to the first.
In a moving frame one will occur before the other; the energy at A will appear either before or after the energy at B disappears. In both cases, during the interval energy will not be conserved. A stronger form of conservation law requires that, for the amount of a conserved quantity at a point to change, there must be a flow, or flux, of the quantity into or out of the point. For example, the amount of electric charge at a point is never found to change without an electric current into or out of the point that carries the difference in charge. Since it only involves continuous local changes, this stronger type of conservation law is Lorentz invariant; a quantity conserved in one reference frame is conserved in all moving reference frames. This is called a local conservation law. Local conservation also implies global conservation: the total amount of the conserved quantity in the Universe remains constant. All of the conservation laws listed above are local conservation laws. A local conservation law is expressed mathematically by a continuity equation, which states that the change in the quantity in a volume is equal to the total net "flux" of the quantity through the surface of the volume. The following sections discuss continuity equations in general. Differential forms In continuum mechanics, the most general form of an exact conservation law is given by a continuity equation. For example, conservation of electric charge q is ∂ρ/∂t = −∇⋅j, where ∇⋅ is the divergence operator, ρ is the density of q (amount per unit volume), j is the flux of q (amount crossing a unit area in unit time), and t is time.
If we assume that the motion u of the charge is a continuous function of position and time, then j = ρu, so that ∂ρ/∂t = −∇⋅(ρu). In one space dimension this can be put into the form of a homogeneous first-order quasilinear hyperbolic equation: y_t + A(y) y_x = 0, where the dependent variable y is called the density of a conserved quantity, A(y) is called the current Jacobian, and the subscript notation for partial derivatives has been employed. The more general inhomogeneous case, y_t + A(y) y_x = s(y,x,t), is not a conservation equation but the general kind of balance equation describing a dissipative system. The dependent variable y is called a nonconserved quantity, and the inhomogeneous term s(y,x,t) is the source,
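The local form of charge conservation described above can be checked numerically. The following sketch (an illustration added here, not part of the original article) advances a one-dimensional density under the continuity equation ∂ρ/∂t = −∇⋅j with j = ρu on a periodic grid; because charge leaves a cell only by flowing into a neighbour, the discrete fluxes cancel pairwise and the total charge is conserved.

```python
import numpy as np

# Illustrative sketch: a 1-D finite-volume update for the continuity
# equation dρ/dt = -∇⋅j with j = ρu on a periodic domain. Charge leaves a
# cell only through a face flux into a neighbouring cell, so the update is
# a local conservation law and the global total stays constant.

def step(rho, u, dx, dt):
    """Advance the density one time step with upwind fluxes (u > 0)."""
    j = rho * u               # flux through each cell's right face
    inflow = np.roll(j, 1)    # flux entering each cell from its left neighbour
    return rho + (dt / dx) * (inflow - j)

rho = np.array([0.0, 1.0, 2.0, 1.0, 0.0])  # initial charge density per cell
total_before = rho.sum()
for _ in range(100):
    rho = step(rho, u=0.5, dx=1.0, dt=0.5)

# Local fluxes cancel pairwise, so the global total is unchanged.
print(abs(rho.sum() - total_before) < 1e-9)  # prints: True
```

The same cancellation argument is what makes a local conservation law imply the global one: summing the per-cell updates telescopes the fluxes, leaving only boundary terms, which vanish on a closed (here, periodic) domain.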
otherwise unconnected railway lines. Andrew Chord, a comic book character who is the former mentor of the New Warriors Chord Overstreet, American actor and musician Canadian Hydrogen Observatory and Radio-transient Detector (CHORD), a proposed successor to the CHIME radio telescope The Chord (painting), a c.1715 painting by Antoine Watteau Chord may also refer to: Mouse chording or a chorded keyboard, where multiple buttons are held down simultaneously to produce a specific action The Chords may refer to: The Chords (British band), 1970s British mod revival band The Chords (American band), 1950s American doo-wop group Chords may refer to: Chords | in the direction of the normal airflow. The term chord was selected due to the curved nature of the wing's surface Chord (peer-to-peer), a peer-to-peer protocol and algorithm for distributed hash tables (DHT) Chord (concurrency), a concurrency construct in some object-oriented programming languages In British railway terminology, a chord can refer to a short curve of track connecting two otherwise unconnected railway lines. Andrew Chord, a comic book character who is the former mentor of the New Warriors Chord Overstreet, American actor and musician Canadian Hydrogen Observatory and Radio-transient Detector (CHORD), a proposed successor to the CHIME radio telescope The Chord (painting), a c.1715 painting by Antoine Watteau Chord may also refer to: Mouse chording or a chorded keyboard, where multiple buttons are held down simultaneously to produce a specific action The Chords may refer |
how!"), Greek tailor Euripides Eumenades ("You rip-a these, you mend-a these"), cloakroom attendant Mahatma Coate ("My hat, my coat"), seat cushion tester Mike Easter (my keister) and many, many others, usually concluding with Erasmus B. Dragon ("Her ass must be draggin'"), whose job title varied, but who was often said to be head of the show's working mothers' support group. They sometimes advised that "our chief counsel from the law firm of Dewey, Cheetham, & Howe is Hugh Louis Dewey, known to a group of people in Harvard Square as Huey Louie Dewey." Huey, Louie, and Dewey were the juvenile nephews being raised by Donald Duck in Walt Disney's Comics and Stories. Guest accommodations were provided by The Horseshoe Road Inn ("the horse you rode in"). At the end of the show, Ray warns the audience, "Don't drive like my brother!" to which Tom replies, "And don't drive like my brother!" The original tag line was "Don't drive like a knucklehead!" There were variations such as, "Don't drive like my brother ..." "And don't drive like his brother!" and "Don't drive like my sister ..." "And don't drive like my sister!" The tagline was heard in the Pixar film Cars, in which Tom and Ray voiced anthropomorphized vehicles (Rusty and Dusty Rust-eze, respectively a 1963 Dodge Dart and 1963 Dodge A100 van, as Lightning McQueen's racing sponsors) with personalities similar to their own on-air personae. Tom notoriously once owned a "convertible, green with large areas of rust!" Dodge Dart, known jokingly on the program by the faux-elegant name "Dartre". History In 1977, radio station WBUR-FM in Boston scheduled a panel of local car mechanics to discuss car repairs on one of its programs, but only Tom Magliozzi showed up. He did so well that he was asked to return as a guest, and he invited his younger brother Ray (who was actually more of a car repair expert) to join him. The brothers were soon asked to host their own radio show on WBUR, which they continued to do every week. 
In 1986, NPR decided to distribute their show nationally. In 1989, the brothers started a newspaper column, Click and Clack Talk Cars, which, like the radio show, mixed serious advice with humor. King Features distributes the column. Ray Magliozzi continues to write the column, retitled Car Talk, after his brother's death in 2014, knowing he would have wanted the advice and humor to continue. In 1992, Car Talk won a Peabody Award, with the citation reading: "Each week, master mechanics Tom and Ray Magliozzi provide useful information about preserving and protecting our cars. But the real core of this program is what it tells us about human mechanics ... The insight and laughter provided by Messrs. Magliozzi, in conjunction with their producer Doug Berman, provide a weekly mental tune-up for a vast and ever-growing public radio audience." In 2005, Tom and Ray Magliozzi founded the Car Talk Vehicle Donation Program, "as a way to give back to the stations that were our friends and partners for decades — and whose programs we listen to every day." Since the Car Talk Vehicle Donation Program was founded, over 40,000 vehicles have been donated to support local NPR stations and programs, raising over $40 million. Approximately 70% of the proceeds generated go directly toward funding local NPR affiliates and programs. In May 2007, the program, which previously had been available digitally only as a paid subscription from Audible.com, became a free podcast distributed by NPR, after a two-month test period during which only a "call of the week" was available via podcast. As of 2012, it had 3.3 million listeners each week, on about 660 stations. On June 8, 2012, the brothers announced that they would no longer broadcast new episodes as of October. Executive producer Doug Berman said the best material from 25 years of past shows would be used to put together "repurposed" shows for NPR to broadcast.
Berman estimated that the archives contained enough material for eight years before anything would have to be repeated. Ray Magliozzi, however, would occasionally record new taglines and sponsor announcements that were aired at the end of the show. The show was inducted into the National Radio Hall of Fame in 2014. Ray Magliozzi hosted a special Car Talk memorial episode for his brother Tom after Tom died in November 2014. However, Ray continued to write their syndicated newspaper column, saying that his brother would want him to. The Best of Car Talk episodes ended their weekly broadcast on NPR on September 30, 2017, although past episodes would remain available online and via podcast. Some 120 of the 400 stations intended to continue airing the show. NPR announced that one option for the time slot would be its new news-talk program It's Been a Minute. On June 11, 2021, it was announced that radio distribution of Car Talk would officially end on October 1, 2021, and that NPR would begin distribution of a twice-weekly podcast that would be 35–40 minutes in length and include early versions of every show, in sequential order. Hosts The Magliozzis were long-time auto mechanics. Ray Magliozzi has a bachelor of science degree in humanities and science from MIT, while Tom had a bachelor of science degree in economics from MIT, an MBA from Northeastern University, and a DBA from the Boston University School of Management. The Magliozzis operated a do-it-yourself garage together in the 1970s, which became more of a conventional repair shop in the 1980s. Ray continued to have a hand in the day-to-day operations of the shop for years, while his brother Tom semi-retired, often joking on Car Talk about his distaste for doing "actual work". The show's offices were located near their shop at the corner of JFK Street and Brattle Street in Harvard Square, marked as "Dewey, Cheetham & Howe", the imaginary law firm to which they referred on-air.
DC&H doubled as the business name of Tappet Brothers Associates, the corporation established to manage the business end of Car Talk. Initially a joke, the company was incorporated after the show expanded from a single station to national syndication. The two were commencement speakers at MIT in 1999. Executive producer Doug Berman said in 2012, "The guys are culturally right up there with Mark Twain and the Marx Brothers. They will stand the test of time. People will still be enjoying them years from now. They're that good." Tom Magliozzi died on November 3, 2014, at age 77, due to complications from Alzheimer's disease. Adaptations The show was the inspiration for the short-lived The George Wendt Show, which briefly aired on CBS in the 1994–95 season as a mid-season replacement. In July 2007, PBS announced that it had green-lit an animated adaptation of Car Talk, to air in prime time in 2008. The show, titled Click and Clack's As the Wrench Turns, was based on the adventures of the fictional "Click and Clack" brothers' garage at "Car Talk Plaza". The ten episodes aired in July and August 2008. Car Talk: The Musical!!! was written and directed by Wesley ... On one occasion, the show featured Martha Stewart as an in-studio guest, whom the Magliozzis twice referred to as "Margaret" during the segment. Celebrities and public figures were featured as "callers" as well, including Geena Davis, Ashley Judd, Morley Safer, Gordon Elliott, former Major League Baseball pitcher Bill Lee, and astronaut John M. Grunsfeld. Space program calls Astronaut and engineer John Grunsfeld called into the show during Space Shuttle mission STS-81 in January 1997, in which Atlantis docked with the Mir space station. In this call, he complained about the performance of his serial-numbered, Rockwell-manufactured "government van". To wit, it would run very loud and rough for about two minutes, quieter and smoother for another six and a half, and then the engine would stop with a jolt.
He went on to state that the brakes of the vehicle, when applied, would glow red-hot, and that the vehicle's odometer displayed "about 60 million miles". This created some consternation for the hosts, until they noticed that the audio of Grunsfeld's voice, relayed from Mir via a TDRS satellite, sounded similar to that of Tom Hanks in the then-recent film Apollo 13, after which they realized the call was from space and the government van in question was, in fact, the Space Shuttle. In addition to the on-orbit call, the brothers once received a call asking for advice on winterizing an electric car. When they asked what kind of car, the caller stated it was a "kit car", a $400 million "kit car". It was a joke call from NASA's Jet Propulsion Laboratory concerning the preparation of the Mars Opportunity rover for the oncoming Martian winter, during which temperatures drop to several hundred degrees below freezing. Click and Clack have also been featured in editorial cartoons, including one where a befuddled NASA engineer called them to ask how to fix the Space Shuttle. Humor Humor and wisecracking pervaded the program. Tom and Ray were known for their self-deprecating humor, often joking about the supposedly poor quality of their advice and the show in general. They also commented at the end of each show: "Well, it's happened again—you've wasted another perfectly good hour listening to Car Talk." At some point in almost every show, usually when giving the address for the Puzzler answers or fan mail, Ray mentioned Cambridge, Massachusetts (where the show originated), at which point Tom reverently interjected with a tone of civic pride, "Our fair city". Ray invariably mocked "'Cambridge, MA', the United States Postal Service's two-letter abbreviation for 'Massachusetts'", by pronouncing the "MA" as a word.
Preceding each break in the show, one of the hosts led up to the network identification with a humorous take on a disgusted reaction of some usually famous person to hearing that identification. The full line went along the pattern of, for example, "And even though Roger Clemens stabs his radio with a syringe whenever he hears us say it, this is NPR: National Public Radio" (later just "... this is NPR"). At one point in the show, often after the break, Ray usually stated: "Support for this show is provided by," followed by an absurd fundraiser. The ending credits of the show started with thanks to the colorfully nicknamed actual staffers: producer Doug "the subway fugitive, not a slave to fashion, bongo boy frogman" Berman; "John 'Bugsy' Lawlor, just back from the ..." every week a different eating event with rhyming foodstuff names; David "Calves of Belleville" Greene; Catherine "Frau Blücher" Fenollosa, whose name caused a horse to neigh and gallop (an allusion to a running gag in the movie Young Frankenstein); and Carly "High Voltage" Nix, among others. Following the real staff was a lengthy list of pun-filled fictional staffers and sponsors such as statistician Marge Innovera ("margin of error"), customer care representative Haywood Jabuzoff ("Hey, would ya buzz off"), meteorologist Claudio Vernight ("cloudy overnight"), optometric firm C. F. Eye Care ("see if I care"), Russian chauffeur Picov Andropov ("pick up and drop off"), Leo Tolstoy biographer Warren Peace ("War and Peace"), hygiene officer and chief of the Tokyo office Oteka Shawa ("oh, take a shower"), Swedish snowboard instructor Soren Derkeister ("sore in the keister"), and law firm Dewey, Cheetham & Howe ("Do we cheat 'em? And how!"), and many, many others.
reasons in the actual text of the Canon that the episcopacy of these cities had been granted their status was the importance of these cities as major cities of the empire of the time. Consequently, Pope Leo declared canon 28 null and void. Confession of Chalcedon The Confession of Chalcedon provides a clear statement on the two natures of Christ, human and divine: The full text of the definition reaffirms the decisions of the Council of Ephesus and the pre-eminence of the Creed of Nicea (325). It also canonises as authoritative two of Cyril of Alexandria's letters and the Tome of Leo written against Eutyches and sent to Archbishop Flavian of Constantinople in 449. Canons The work of the council was completed by a series of 30 disciplinary canons, the Ancient Epitomes of which are: The canons of every Synod of the holy Fathers shall be observed. Whoso buys or sells an ordination, down to a Prosmonarius, shall be in danger of losing his grade. Such shall also be the case with go-betweens, if they be clerics they shall be cut off from their rank, if laymen or monks, they shall be anathematized. Those who assume the care of secular houses should be corrected, unless perchance the law called them to the administration of those not yet come of age, from which there is no exemption. Unless further their Bishop permits them to take care of orphans and widows. Domestic oratories and monasteries are not to be erected contrary to the judgment of the bishop. Every monk must be subject to his bishop, and must not leave his house except at his suggestion. A slave, however, can not enter the monastic life without the consent of his master. Those who go from city to city shall be subject to the canon law on the subject. In Martyries and Monasteries ordinations are strictly forbidden. Should any one be ordained therein, his ordination shall be reputed of no effect. If any cleric or monk arrogantly affects the military or any other dignity, let him be cursed. 
Any clergyman in an almshouse or monastery must submit himself to the authority of the bishop of the city. But he who rebels against this let him pay the penalty. Litigious clerics shall be punished according to canon, if they despise the episcopal and resort to the secular tribunal. When a cleric has a contention with a bishop let him wait till the synod sits, and if a bishop have a contention with his metropolitan let him carry the case to Constantinople. No cleric shall be recorded on the clergy-list of the churches of two cities. But if he shall have strayed forth, let him be returned to his former place. But if he has been transferred, let him have no share in the affairs of his former church. Let the poor who stand in need of help make their journey with letters pacificatory and not commendatory: for letters commendatory should only be given to those who are open to suspicion. One province shall not be cut into two. Whoever shall do this shall be cast out of the episcopate. Such cities as are cut off by imperial rescript shall enjoy only the honour of having a bishop settled in them: but all the rights pertaining to the true metropolis shall be preserved. No cleric shall be received to communion in another city without a letter commendatory. A Cantor or Lector alien to the sound faith, if being then married, he shall have begotten children let him bring them to communion, if they had there been baptized. But if they had not yet been baptized they shall not be baptized afterwards by the heretics. No person shall be ordained deaconess except she be forty years of age. If she shall dishonour her ministry by contracting a marriage, let her be anathema. Monks or nuns shall not contract marriage, and if they do so let them be excommunicated. Village and rural parishes if they have been possessed for thirty years, they shall so continue. But if within that time, the matter shall be subject to adjudication. 
But if by the command of the Emperor a city be renewed, the order of ecclesiastical parishes shall follow the civil and public forms. Clerics and Monks, if they shall have dared to hold conventicles and to conspire against the bishop, shall be cast out of their rank. Twice each year the Synod shall be held wherever the bishop of the Metropolis shall designate, and all matters of pressing interest shall be determined. A clergyman of one city shall not be given a cure in another. But if he has been driven from his native place and shall go into another he shall be without blame. If any bishop receives clergymen from without his diocese he shall be excommunicated as well as the cleric he receives. A cleric or layman making charges rashly against his bishop shall not be received. Whoever seizes the goods of his deceased bishop shall be cast forth from his rank. Clerics or monks who spend much time at Constantinople contrary to the will of their bishop, and stir up seditions, shall be cast out of the city. A monastery erected with the consent of the bishop shall be immovable. And whatever pertains to it shall not be alienated. Whoever shall take upon him to do otherwise, shall not be held guiltless. Let the ordination of bishops be within three months: necessity however may make the time longer. But if anyone shall ordain counter to this decree, he shall be liable to punishment. The revenue shall remain with the œconomus. The œconomus in all churches must be chosen from the clergy. And the bishop who neglects to do this is not without blame. If a clergyman elope with a woman, let him be expelled from the Church. If a layman, let him be anathema. The same shall be the lot of any that assist him. The bishop of New Rome (Constantinople) shall enjoy the same privileges as the bishop of Old Rome, on account of the removal of the Empire. 
For this reason the [metropolitans] of Pontus, of Asia, and of Thrace, as well as the Barbarian bishops, shall be ordained by the bishop of Constantinople. He is sacrilegious who degrades a bishop to the rank of a presbyter. For he that is guilty of crime is unworthy of the priesthood. But he that was deposed without cause, let him be [still] bishop. It is the custom of the Egyptians that none subscribe without the permission of their Archbishop. Wherefore they are not to be blamed who did not subscribe the Epistle of the holy Leo until an Archbishop had been appointed for them. Canon 28 grants Constantinople privileges equal to those of Rome because Constantinople is the New Rome, as renewed by canon 36 of the Quinisext Council. Pope Leo declared canon 28 null and void, approving only those canons of the council pertaining to faith. Initially, the Council indicated its understanding that Pope Leo's ratification was necessary for the canon to be binding, writing, "we have made still another enactment which we have deemed necessary for the maintenance of good order and discipline, and we are persuaded that your Holiness will approve and confirm our decree.... We are confident you will shed upon the Church of Constantinople a ray of that Apostolic splendor which you possess, for you have ever cherished this church, and you are not at all niggardly in imparting your riches to your children.
Hence we have made bold to confirm the privileges of the afore-mentioned city (tharresantes ekurosamen) as if your holiness had taken the initiative, for we know how tenderly you love your children, and we feel that in honoring the child we have honored its parent.... We have informed you of everything with a view of proving our sincerity, and of obtaining for our labors your confirmation and consent." Following Leo's rejection of the canon, Bishop Anatolius of Constantinople conceded, "Even so, the whole force of confirmation of the acts was reserved for the authority of Your Blessedness. Therefore, let Your Holiness know for certain that I did nothing to further the matter, knowing always that I held myself bound to avoid the lusts of pride and covetousness." However, the Canon has since been viewed as valid by the Eastern Orthodox Church. According to some ancient Greek collections, canons 29 and 30 are attributed to the council: canon 29, which states that an unworthy bishop cannot be demoted but can be removed, is an extract from the minutes of the 19th session; canon 30, which grants the Egyptians time to consider their rejection of Leo's Tome, is an extract from the minutes of the fourth session. In all likelihood an official record of the proceedings was made either during the council itself or shortly afterwards. The assembled bishops informed the pope that a copy of all the "Acta" would be transmitted to him; in March, 453, Pope Leo commissioned Julian of Cos, then at Constantinople, to make a collection of all the Acts and translate them into Latin. Most of the documents, chiefly the minutes of the sessions, were written in Greek; others, e.g. the imperial letters, were issued in both languages; others, again, e.g. the papal letters, were written in Latin. Eventually nearly all of them were translated into both languages. 
The status of the sees of Constantinople and Jerusalem The status of Jerusalem The metropolitan of Jerusalem was given independence from the metropolitan of Antioch and from any other higher-ranking bishop, given what is now known as autocephaly, in the council's seventh session, whose "Decree on the Jurisdiction of Jerusalem and Antioch" contains: "the bishop of Jerusalem, or rather the most holy Church which is under him, shall have under his own power the three Palestines". This led to Jerusalem becoming a patriarchate, one of the five patriarchates known as the pentarchy, when the title of "patriarch" was created in 531 by Justinian. The Oxford Dictionary of the Christian Church, s.v. patriarch (ecclesiastical), also calls it "a title dating from the 6th century, for the bishops of the five great sees of Christendom". Merriam-Webster's Encyclopedia of World Religions says: "Five patriarchates, collectively called the pentarchy, were the first to be recognized by the legislation of the emperor Justinian (reigned 527–565)". The status of Constantinople In a canon of disputed validity, the Council of Chalcedon also elevated the See of Constantinople to a position "second in eminence and power to the Bishop of Rome". The Council of Nicaea in 325 had noted that the Sees of Alexandria, Antioch, and Rome should have primacy over other, lesser dioceses. At the time, the See of Constantinople was not yet of ecclesiastical prominence, but its proximity to the Imperial court gave rise to its importance. The Council of Constantinople in 381 modified the situation somewhat by placing Constantinople second in honor, above Alexandria and Antioch, stating in Canon III that "the bishop of Constantinople... shall have the prerogative of honor after the bishop of Rome; because Constantinople is New Rome".
In the early 5th century, this status was challenged by the bishops of Alexandria, but the Council of Chalcedon confirmed in Canon XXVIII: In making their case, the council fathers argued that tradition had accorded "honor" to the see of older Rome because it was the first imperial city. Accordingly, "moved by the same purposes" the fathers "apportioned equal prerogatives to the most holy see of new Rome" because "the city which is honored by the imperial power and senate and enjoying privileges equaling older imperial Rome should also be elevated to her level in ecclesiastical affairs and take second place after her". The framework for allocating ecclesiastical authority advocated by the council fathers mirrored the allocation of imperial authority in the later period of the Roman Empire. The Eastern position could be characterized as political in nature, as opposed to a doctrinal view. In practice, all Christians East and West addressed the papacy as the See of Peter and Paul or the Apostolic See rather than the See of the Imperial Capital. Rome understands this to indicate that its precedence has always come from its direct lineage from the apostles Peter and Paul rather than its association with Imperial authority. After the passage of Canon 28, Rome filed a protest against the reduction of honor given to Antioch and Alexandria. However, fearing that withholding Rome's approval would be interpreted as a rejection of the entire council, in 453 the pope confirmed the council's canons while declaring the 28th null and void. (This position would later change, and the canon was accepted in 1215 at the Fourth Council of the Lateran.) Consequences: Chalcedonian Schism The near-immediate result of the council was a major schism. The bishops who were uneasy with the language of Pope Leo's Tome repudiated the council, saying that the acceptance of two physes was tantamount to Nestorianism.
Dioscorus of Alexandria advocated miaphysitism and had dominated the Council of Ephesus. Churches that rejected Chalcedon in favor of Ephesus broke off from the rest of the Eastern Church in a schism, the most significant among these being the Church of Alexandria, today known as the Coptic Orthodox Church. The rise of so-called Monophysitism in the East (as it was branded by the West) was led by the Copts of Egypt. This must be regarded as the outward expression of the growing nationalist trends in that province against the gradual intensification of Byzantine imperialism, soon to reach its consummation during the reign of Emperor Justinian. A significant effect on the Orthodox Christians in Egypt was a series of persecutions by the Roman (later Byzantine) empire, forcing followers of the Eastern Orthodox Church to claim allegiance to Leo's Tome, or Chalcedon. This led to the martyrdom, persecution and death of thousands of Egyptian saints and bishops until the Arab Conquest of Egypt. As a result, the Council of Chalcedon is referred to as "Chalcedon, the Ominous" among Coptic Egyptians, given how it led to Christians persecuting other Christians for the first time in history. Coptic Orthodox Christians continue to distinguish themselves from followers of Chalcedon to this day. Although the theological differences are seen as limited (if not non-existent), it is politics, the subsequent persecutions, and the power struggles of a rising Roman Empire that may have led to the Great Schism, or at least contributed significantly to amplifying it through the centuries. Justinian I attempted to bring those monks who still rejected the decision of the Council of Chalcedon into communion with the greater church. The exact time of this event is unknown, but it is believed to have been between 535 and 548. St Abraham of Farshut was summoned to Constantinople and he chose to bring with him four monks.
Upon arrival, Justinian summoned them and informed them that they would either accept the decision of the council or lose their positions. Abraham refused to entertain the idea. Theodora tried to persuade Justinian to change his mind, seemingly to no avail. Abraham himself stated in a letter to his monks that he preferred to remain in exile rather than subscribe to a faith contrary to that of Athanasius. They were not alone, and the non-Chalcedonian churches compose Oriental Orthodoxy, with the Church of Alexandria as their primus inter pares. Only in recent years has a degree of rapprochement between Chalcedonian Christians and the Oriental Orthodox been seen. Oriental Orthodox view Several Oriental Orthodox Church historians have viewed the Council as a dispute with the Church of Rome over precedence among the various patriarchal sees. Coptic sources, both in Coptic and in Arabic, suggest that questions of political and ecclesiastical authority exaggerated differences between the two professions of faith. The Copts consistently repudiate the Western identification of Alexandrine Christianity with the Eutychianism which originated in Constantinople and which they have always regarded as a flagrant heresy (monophysitism), since it declared the complete absorption of Christ's manhood in his single divine nature, whereas the Copts clearly upheld the doctrine of the two natures, divine and human, mystically united in one (miaphysitism) without confusion, corruption, or change. As a strictly traditional church, its religious leaders have sought biblical justification for this interpretation of the Nicean Creed and the Cyrilian formula, but meanwhile have restricted the substance of their variance to interpretation. Liturgical commemorations The Eastern Orthodox Church commemorates the "Holy Fathers of the 4th Ecumenical Council, who assembled in Chalcedon" on the Sunday on or after July 13; however, in some places (e.g.
Russia) that date is instead a feast of the Fathers of the First Six Ecumenical Councils. For both of the above, complete propers have been composed and are found in the Menaion. For the former, "The Office of the 630 Holy and God-bearing Fathers of the 4th ... Summoned against the Monophysites Eftyches and Dioskoros ..." was composed in the middle of the 14th century by Patriarch Philotheus I of Constantinople. This contains numerous hymns exposing the council's teaching, commemorating its leaders whom it praises and whose prayers it implores, and naming its opponents pejoratively, e.g., "Come let us clearly reject the errors of ... but praise in divine songs the fourth council of pious fathers." For the latter, the propers are titled "We Commemorate Six Holy Ecumenical Councils".
At the time, the See of Constantinople was not yet of ecclesiastical prominence, but its proximity to the Imperial court gave rise to its importance. The Council of Constantinople in 381 modified the situation somewhat by placing Constantinople second in honor, above Alexandria and Antioch, stating in Canon III that "the bishop of Constantinople... shall have the prerogative of honor after the bishop of Rome; because Constantinople is New Rome". In the early 5th century, this status was challenged by the bishops of Alexandria, but the Council of Chalcedon confirmed it in Canon XXVIII. In making their case, the council fathers argued that tradition had accorded "honor" to the see of older Rome because it was the first imperial city. Accordingly, "moved by the same purposes" the fathers "apportioned equal prerogatives to the most holy see of new Rome" because "the city which is honored by the imperial power and senate and enjoying privileges equaling older imperial Rome should also be elevated to her level in ecclesiastical affairs and take second place after her". The framework for allocating ecclesiastical authority advocated by the council fathers mirrored the allocation of imperial authority in the later period of the Roman Empire. The Eastern position could be characterized as being political in nature, as opposed to a doctrinal view. In practice, all Christians East and West addressed the papacy as the See of Peter and Paul or the Apostolic See rather than the See of the Imperial Capital. Rome understands this to indicate that its precedence has always come from its direct lineage from the apostles Peter and Paul rather than its association with Imperial authority. After the passage of Canon 28, Rome filed a protest against the reduction of honor given to Antioch and Alexandria. 
However, fearing that withholding Rome's approval would be interpreted as a rejection of the entire council, in 453 the pope confirmed the council's canons while declaring the 28th null and void. (This position would change and later be accepted in 1215 at the Fourth Council of the Lateran.) Consequences: Chalcedonian Schism The near-immediate result of the council was a major schism. The bishops who were uneasy with the language of Pope Leo's Tome repudiated the council, saying that the acceptance of two physeis (natures) was tantamount to Nestorianism. Dioscorus of Alexandria advocated miaphysitism and had dominated the Council of Ephesus. Churches that rejected Chalcedon in favor of Ephesus broke off from the rest of the Eastern Church in a schism, the most significant among these being the Church of Alexandria, today known as the Coptic Orthodox Church. The rise of so-called Monophysitism in the East (as it was branded by the West) was led by the Copts of Egypt. This must be regarded as the outward expression of the growing nationalist trends in that province against the gradual intensification of Byzantine imperialism, soon to reach its consummation during the reign of Emperor Justinian. A significant effect on the Orthodox Christians in Egypt was a series of persecutions by the Roman (later Byzantine) Empire, which forced followers of the Eastern Orthodox Church to claim allegiance to Leo's Tome and Chalcedon. This led to the martyrdom, persecution, and death of thousands of Egyptian saints and bishops until the Arab conquest of Egypt. As a result, the Council of Chalcedon is referred to as "Chalcedon, the Ominous" among Coptic Egyptians, given that it led to Christians persecuting other Christians for the first time in history. Coptic Orthodox Christians continue to distinguish themselves from followers of Chalcedon to this day. 
Although the theological differences are seen as limited (if not non-existent), it was politics, the subsequent persecutions, and the power struggles of a rising Roman Empire that may have led to the Great Schism, or at least contributed significantly to amplifying it through the centuries. Justinian I attempted to bring those monks who still rejected the decision of the Council of Chalcedon into communion with the greater church. The exact time of this event is unknown, but it is believed to have been between 535 and 548. St Abraham of Farshut was summoned to Constantinople and he chose to bring with him four monks. Upon arrival, Justinian summoned them and informed them that they would either accept the decision of the council or lose their positions. Abraham refused to entertain the idea. Theodora tried to persuade Justinian to change his mind, seemingly to no avail. Abraham himself stated in a letter to his monks that he preferred to remain in exile rather than subscribe to a faith contrary to that of Athanasius. They were not alone, and the non-Chalcedonian churches compose Oriental Orthodoxy, with the Church of Alexandria as their primus inter pares. Only in recent years has a degree of rapprochement between Chalcedonian Christians and the Oriental Orthodox been seen. Oriental Orthodox view Several Oriental Orthodox Church historians have viewed the Council as a dispute with the Church of Rome over precedence among the various patriarchal sees. Coptic sources, both in Coptic and in Arabic, suggest that questions of political and ecclesiastical authority exaggerated differences between the two professions of faith. The Copts consistently repudiate the Western identification of Alexandrine Christianity with the Eutychianism which originated in Constantinople and which they have always regarded as a flagrant heresy (monophysitism) since it declared the complete absorption of Christ's manhood in 
of the provinces, has also never hosted a CFL game. League play Canadian football is played at several levels in Canada; the top league is the professional nine-team Canadian Football League (CFL). The CFL regular season begins in June, and playoffs for the Grey Cup are completed by late November. In cities with outdoor stadiums such as Edmonton, Winnipeg, Calgary, and Regina, low temperatures and icy field conditions can seriously affect the outcome of a game. Amateur football is governed by Football Canada. At the university level, 27 teams play in four conferences under the auspices of U Sports; the U Sports champion is awarded the Vanier Cup. Junior football is played by many after high school before joining the university ranks. There are 18 junior teams in three divisions in the Canadian Junior Football League competing for the Canadian Bowl. The Quebec Junior Football League includes teams from Ontario and Quebec who battle for the Manson Cup. Semi-professional leagues have grown in popularity in recent years, with the Alberta Football League becoming especially popular. The Northern Football Conference, formed in Ontario in 1954, has also surged in popularity among former college players who do not continue to professional football. The Ontario champion plays against the Alberta champion for the "National Championship". The Canadian Major Football League is the governing body for the semi-professional game. Women's football has gained attention in recent years in Canada. The first Canadian women's league to begin operations was the Maritime Women's Football League in 2004. The largest women's league is the Western Women's Canadian Football League. The field The Canadian football field is long and wide, within which the goal areas are deep, and the goal lines are apart. Weighted pylons are placed on the inside corner of the intersections of the goal lines and end lines. Including the end zones, the total area of the field is . 
At each goal line is a set of goalposts, which consist of two uprights joined by a crossbar above the goal line. The goalposts may be H-shaped (both posts fixed in the ground) although in the higher-calibre competitions the tuning-fork design (supported by a single curved post behind the goal line, so that each post starts above the ground) is preferred. The sides of the field are marked by white sidelines, the goal line is marked in white or yellow, and white lines are drawn laterally across the field every from the goal line. These lateral lines are called "yard lines" and often marked with the distance in yards from and an arrow pointed toward the nearest goal line. Prior to the early 1980s, arrows were not used and all yard lines (in both multiples of 5 and 10) were usually marked with the distance to the goal line, including the goal line itself which was marked with either a "0" or "00"; in most stadiums today, only the yard markers in multiples of 10 are marked with numbers, with the goal line sometimes being marked with a "G". The centre (55-yard) line usually is marked with a "C" (or, more rarely, with a "55"). "Hash marks" are painted in white, parallel to the yardage lines, at intervals, from the sidelines. On fields that have a surrounding running track, such as Molson Stadium and many universities, the end zones are often cut off in the corners to accommodate the track. Until 1986, the end zones were deep, giving the field an overall length of , and a correspondingly larger cutoff could be required at the corners. The first field to feature the shorter 20-yard end zones was Vancouver's BC Place (home of the BC Lions), which opened in 1983. This was particularly common among U.S.-based teams during the CFL's American expansion, where few American stadiums were able to accommodate the much longer and noticeably wider CFL field. The end zones in Toronto's BMO Field are only 18 yards instead of 20 yards. 
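For orientation, the overall geometry described above can be sketched numerically. This is an illustrative calculation using the standard modern CFL dimensions (110-yard field of play between the goal lines, 20-yard-deep end zones, 65-yard width); the variable names are invented and the figures are assumptions drawn from the commonly published field specification, not from the text itself:

```python
# Sketch of Canadian football field geometry (assumed standard CFL values).
FIELD_OF_PLAY_YD = 110   # distance between the goal lines
END_ZONE_YD = 20         # depth of each end zone (25 yd before 1986)
WIDTH_YD = 65            # sideline to sideline

total_length_yd = FIELD_OF_PLAY_YD + 2 * END_ZONE_YD   # end line to end line
total_area_sq_yd = total_length_yd * WIDTH_YD          # including end zones

print(total_length_yd)    # 150
print(total_area_sq_yd)   # 9750
```

With the pre-1986 25-yard end zones, the same arithmetic gives a 160-yard overall length, which is why tracks and U.S. stadiums forced cut-off corners.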
Gameplay Teams advance across the field through the execution of quick, distinct plays, which involve the possession of a brown, prolate spheroid ball with ends tapered to a point. The ball has two one-inch-wide white stripes. Start of play At the beginning of a match, an official tosses a coin and allows the captain of the visiting team to call heads or tails. The captain of the team winning the coin toss is given the option of having first choice, or of deferring first choice to the other captain. The captain making first choice may either choose a) to kick off or receive the kick at the beginning of the half, or b) which direction of the field to play in. The remaining choice is given to the opposing captain. Before the resumption of play in the second half, the captain that did not have first choice in the first half is given first choice. Teams usually choose to defer, so it is typical for the team that wins the coin toss to kick to begin the first half and receive to begin the second. Play begins at the start of each half with one team place-kicking the ball from its own 35-yard line. Both teams then attempt to catch the ball. The player who recovers the ball may run while holding the ball, or throw the ball laterally to a teammate. Stoppage of play Play stops when the ball carrier's knee, elbow, or any other body part aside from the feet and hands is forced to the ground (a tackle); when a forward pass is not caught on the fly (during a scrimmage); when a touchdown (see below) or a field goal is scored; when the ball leaves the playing area by any means (being carried, thrown, or fumbled out of bounds); or when the ball carrier is in a standing position but can no longer move forwards (called forward progress). If no score has been made, the next play starts from scrimmage. Scrimmage Before scrimmage, an official places the ball at the spot where it was when play stopped, but no nearer than 24 yards from the sideline or 1 yard from the goal line. 
The line parallel to the goal line passing through the ball (line from sideline to sideline for the length of the ball) is referred to as the line of scrimmage. This line is similar to "no-man's land"; players must stay on their respective sides of this line until the play has begun again. For a scrimmage to be valid the team in possession of the football must have seven players, excluding the quarterback, within one yard of the line of scrimmage. The defending team must stay a yard or more back from the line of scrimmage. On the field at the beginning of a play are two teams of 12 (and not 11 as in American football). The team in possession of the ball is the offence and the team defending is referred to as the defence. Play begins with a backwards pass through the legs (the snap) by a member of the offensive team, to another member of the offensive team. This is usually the quarterback or punter, but a "direct snap" to a running back is also not uncommon. If the quarterback or punter receives the ball, he may then do any of the following: run with the ball, attempting to run farther down field (gaining yardage). The ball-carrier may run in any direction he sees fit (including backwards). drop-kick the ball, dropping it onto the ground and kicking it on the bounce. (This play is now quite rare in both Canadian and American football.) pass the ball laterally or backwards to a teammate. This play is known as a lateral, and may come at any time on the play. A pass which has any amount of forward momentum is a forward pass (see below); forward passes are subject to many restrictions which do not apply to laterals. hand-off—hand the ball off to a teammate, typically a halfback or the fullback. punt the ball; dropping it in the air and kicking it before it touches the ground. 
When the ball is punted, only opposing players (the receiving team), the kicker, and anyone behind the kicker when he punted the ball are able to touch the ball, or even go within five yards of the ball until it is touched by an eligible player (the no-yards rule, which is applied to all kicking plays). place the ball on the ground for a place kick. throw a forward pass, where the ball is thrown to a receiver located farther down field (closer to the opponent's goal) than the thrower is. Forward passes are subject to the following restrictions: They must be made from behind the line of scrimmage Only one forward pass may be made on a play The pass must be made in the direction of an eligible receiver or pass 10 yards after the line of scrimmage Each play constitutes a down. The offence must advance the ball at least ten yards towards the opponents' goal line within three downs or forfeit the ball to their opponents. Once ten yards have been gained the offence gains a new set of three downs (rather than the four downs given in American football). Downs do not accumulate. If the offensive team completes 10 yards on their first play, they lose the other two downs and are granted another set of three. If a team fails to gain ten yards in two downs they usually punt the ball on third down or try to kick a field goal (see below), depending on their position on the field. The team may, however, use its third down in an attempt to advance the ball and gain a cumulative 10 yards. 
If one team kicks the ball, the other team has the right to recover the ball and attempt a return. If a kicked ball goes out of bounds, or the kicking team scores a single or field goal as a result of the kick, the other team likewise gets possession. If the offence fails to make ten yards in three plays, the defence takes over on downs. If the offence attempts a forward pass and it is intercepted by the defence, the defence takes possession immediately (and may try to advance the ball on the play). Note that incomplete forward passes (those which go out of bounds, or which touch the ground without being first cleanly caught by a player) result in the end of the play, and are not returnable by either team. If the offence fumbles (a ball carrier drops the football, or has it dislodged by an opponent, or if the intended player fails to catch a lateral pass or a snap from centre, or a kick attempt is blocked by an opponent), the ball may be recovered (and advanced) by either team. If a fumbled ball goes out of bounds, the team whose player last touched it is awarded possession at the spot where it went out of bounds. A fumble by the offence in their own end zone, which goes out of bounds, results in a safety. When the first half ends, the team which kicked to start the first half will receive a kickoff to start the second half. After the three-minute warning near the end of each half, the offence can lose possession for a time count violation (failure to legally put the ball into play within the 20-second duration of the play clock). However, this can only occur if three specific criteria are met: The offence committed a time count violation on its last attempted scrimmage play. This prior violation took place on third down. The referee deemed said violation to be deliberate, and warned the offence that it had to legally place the ball into play within the 20-second clock or lose possession. 
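The three criteria for losing possession on a time count violation can be collapsed into a single check. A minimal sketch (the function and parameter names are invented for illustration; this is not taken from any official rulebook code):

```python
def loses_possession_for_time_count(prior_play_was_time_count: bool,
                                    prior_violation_down: int,
                                    referee_warned_deliberate: bool) -> bool:
    """True only when all three conditions described above hold:
    the previous scrimmage play was also a time count violation,
    that violation came on third down, and the referee judged it
    deliberate and warned the offence."""
    return (prior_play_was_time_count
            and prior_violation_down == 3
            and referee_warned_deliberate)

# A first offence, or a repeat violation on first or second down,
# draws a lesser penalty rather than loss of possession.
print(loses_possession_for_time_count(True, 3, True))   # True
print(loses_possession_for_time_count(True, 2, True))   # False
```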
Such a loss of possession is statistically treated as the defence taking over on downs. Rules of contact There are many rules governing contact in this type of football. The only player on the field who may be legally tackled is the player currently in possession of the football (the ball carrier). On a passing play a receiver, that is to say, an offensive player sent down the field to receive a pass, may not be interfered with (have his motion impeded, be blocked, etc.) unless he is within five yards of the line of scrimmage. Prior to a pass that goes beyond the line of scrimmage, a defender may not be impeded more than one yard past that line. Otherwise, any player may block another player's passage, so long as he does not hold or trip the player he intends to block. The kicker may not be contacted after the kick but before his kicking leg returns to the ground (this rule is not enforced upon a player who has blocked a kick). The quarterback may not be hit or tackled after throwing the ball, nor, prior to that point, may he be hit below the knees or above the shoulders while in the pocket (i.e. behind the offensive line). Kicking Canadian football distinguishes four ways of kicking the ball: Place kick Kicking a ball held on the ground by a teammate, or, on a kickoff, optionally placed on a tee (two different tees are used for kickoffs and convert/field goal attempts). Drop kick Kicking a ball after bouncing it on the ground. Although rarely used today, it has the same status in scoring as a place kick. This play is part of the game's rugby heritage, and was largely made obsolete when the ball with pointed ends was adopted. Unlike the American game, Canadian rules allow a drop kick to be attempted at any time by any player, but the move is very rare. Punt Kicking the ball after it has been released from the kicker's hand and before it hits the ground. Punts may not score a field goal, even if one should travel through the uprights. 
As with drop kicks, players may punt at any time. Dribbled ball A dribbled ball is one that has been kicked while not in possession of a player, for example, a loose ball following a fumble, a blocked kick, a kickoff, or a kick from scrimmage. The kicker of the dribbled ball and any player onside when the ball was kicked may legally recover the ball. On any kicking play, all onside players (the kicker, and teammates behind the kicker at the time of the kick) may recover and advance the ball. Players on the kicking team who are not onside may not approach within five yards of the ball until it has been touched by the receiving team, or by an onside teammate. Scoring The methods of scoring are: Touchdown Achieved when the ball is in possession of a player in the opponent's end zone, or when the ball in the possession of a player crosses or touches the plane of the opponent's goal-line, worth 6 points (5 points until 1956). A touchdown in Canadian football is often referred to as a "major score" or simply a "major". Conversion (or convert) After a touchdown, the team that scored gets one scrimmage play to attempt to add one or two more points. If they make what would normally be a field goal, they score one point (a "point-after"); what would normally be a touchdown scores two points (a "two-point conversion"). In amateur games, this scrimmage is taken at the opponents' 5-yard line. The CFL formerly ran all conversion attempts from the 5-yard line as well (for a 12-yard kick), but starting in 2015 the line of scrimmage for one-point kick attempts became the 25-yard line (for a 32-yard kick), while two-point attempts are scrimmaged at the 3-yard line. No matter what happens on the convert attempt, play then continues with a kickoff (see below). 
Field goal Scored by a drop kick or place kick (except on a kickoff) when the ball, after being kicked and without again touching the ground, goes over the cross bar and between the goal posts (or between lines extended from the top of the goal posts) of the opponent's goal, worth three points. If the ball hits the upright above the cross-bar before going through, it is not considered a dead ball, and the points are scored. (Rule 5, Sect 4, Art 4(d)) If the field goal is missed, but the ball is not returnable after crossing the dead-ball-line, then it constitutes a rouge (see below). Safety Scored when the ball becomes dead in the possession of a team in its own goal area, or when the ball touches or crosses the dead-line, or side-line-in-goal, and touches the ground, a player, or some object beyond these lines as a result of the team scored against making a play. It is worth two points. This is different from a single (see below) in that the team scored against begins with possession of the ball. The most common safety is on a third down punt from the end zone, in which the kicker decides not to punt and keeps the ball in his team's own goal area. The ball is then turned over to the receiving team (who gained the two points), by way of a kickoff from the 25-yard line or scrimmaging from the line on their side of the field. Single (rouge) Scored when the ball becomes dead in the possession of a team in its own goal area, or when the ball touches or crosses the dead-line, or side-line-in-goal, and touches the ground, a player, or some object beyond these lines as a result of the ball having been kicked from the field of play into the goal area by the scoring team. It is worth one point. This is different from a safety (see above) in that the team scored against receives possession of the ball after the score. Officially, the single is called a rouge (French for "red") but is often referred to as a single. 
The exact derivation of the term is unknown, but it has been thought that in early Canadian football, the scoring of a single was signalled with a red flag. A rouge is also a method of scoring in the Eton field game, which dates from at least 1815. Resumption of play Resumption of play following a score is conducted under procedures which vary with the type of score. Following a touchdown and convert attempt (successful or not), play resumes with the scoring team kicking off from its own 35-yard line (45-yard line in amateur leagues). Following a field goal, the non-scoring team may choose for play to resume either with a kickoff as above, or by scrimmaging the ball from its own 35-yard line. Following a safety, the scoring team may choose for play to resume in either of the above ways, or it may choose to kick off from its own 35-yard line. Following a single/rouge, play resumes with the non-scoring team scrimmaging from its own 35-yard line, unless the single is awarded on a missed field goal, in which case the non-scoring team scrimmages from either the 35-yard line or the yard line from which the field goal was attempted, whichever is greater. Game timing The game consists of two 30-minute halves, each of which is divided into two 15-minute quarters. The clock counts down from 15:00 in each quarter. Timing rules change when there are three minutes remaining in a half. A short break interval of 2 minutes occurs after the end of each quarter (a longer break of 15 minutes at halftime), and the two teams then change goals. In the first 27 minutes of a half, the clock stops when: points are scored, the ball goes out of bounds, a forward pass is incomplete, the ball is dead and a penalty flag has been thrown, the ball is dead and teams are making substitutions (e.g., possession has changed, punting situation, short yardage situation), the ball is dead and a player is injured, or the ball is dead and a captain or a coach calls a time-out. 
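The clock-stoppage conditions listed above, together with the stricter stop-on-every-dead-ball rule inside the three-minute warning, can be sketched as a simple predicate. The event names below are invented for illustration; this is a sketch of the description in the text, not the official rulebook logic:

```python
# Events that stop the clock during the first 27 minutes of a half,
# per the list above (names invented for illustration).
STOPPAGE_EVENTS = {
    "score",            # points are scored
    "out_of_bounds",    # ball leaves the playing area
    "incomplete_pass",  # forward pass falls incomplete
    "penalty_flag",     # ball dead with a flag thrown
    "substitution",     # teams substituting (e.g. change of possession)
    "injury",           # player injured
    "time_out",         # captain or coach calls a time-out
}

def clock_stops(event: str, minutes_left_in_half: float) -> bool:
    # After the three-minute warning, the clock stops whenever the
    # ball becomes dead, regardless of why the play ended.
    if minutes_left_in_half <= 3:
        return True
    return event in STOPPAGE_EVENTS

print(clock_stops("tackle_in_bounds", 10))  # False: clock keeps running
print(clock_stops("tackle_in_bounds", 2))   # True: inside three minutes
```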
The clock starts again when the referee determines the ball is ready for scrimmage, except for team time-outs (where the clock starts at the snap), after a time count foul (at the snap) and kickoffs (where the clock starts not at the kick but when the ball is first touched after the kick). In the last three minutes of a half, the clock stops whenever the ball becomes dead. On kickoffs, the clock starts when the ball is first touched after the kick. On scrimmages, when it starts depends on what ended the previous play. The clock starts when the ball is ready for scrimmage except that it starts on the snap when on the previous play the ball was kicked off, the ball was punted, the ball changed possession, the ball went out of bounds, there were points scored, there was an incomplete forward pass, there was a penalty applied (not declined), or there was a team time-out. During the last three minutes of a half, the penalty for failure to place the ball in play within the 20-second play clock, known as a "time count violation" (this foul is known as "delay of game" in American football), is dramatically different from during the first 27 minutes. Instead of the penalty being 5 yards with the down repeated, the base penalty (except 
If a kicked ball goes out of bounds, or the kicking team scores a single or field goal as a result of the kick, the other team likewise gets possession. If the offence fails to make ten yards in three plays, the defence takes over on downs. If the offence attempts a forward pass and it is intercepted by the defence; the defence takes possession immediately (and may try to advance the ball on the play). Note that incomplete forward passes (those which go out of bounds, or which touch the ground without being first cleanly caught by a player) result in the end of the play, and are not returnable by either team. If the offence fumbles (a ball carrier drops the football, or has it dislodged by an opponent, or if the intended player fails to catch a lateral pass or a snap from centre, or a kick attempt is blocked by an opponent), the ball may be recovered (and advanced) by either team. If a fumbled ball goes out of bounds, the team whose player last touched it is awarded possession at the spot where it went out of bounds. A fumble by the offence in their own end zone, which goes out of bounds, results in a safety. When the first half ends, the team which kicked to start the first half will receive a kickoff to start the second half. After the three-minute warning near the end of each half, the offence can lose possession for a time count violation (failure to legally put the ball into play within the 20-second duration of the play clock). However, this can only occur if three specific criteria are met: The offence committed a time count violation on its last attempted scrimmage play. This prior violation took place on third down. The referee deemed said violation to be deliberate, and warned the offence that it had to legally place the ball into play within the 20-second clock or lose possession. Such a loss of possession is statistically treated as the defence taking over on downs. Rules of contact There are many rules to contact in this type of football. 
The only player on the field who may be legally tackled is the player currently in possession of the football (the ball carrier). On a passing play, a receiver, that is to say, an offensive player sent down the field to receive a pass, may not be interfered with (have his motion impeded, be blocked, etc.) unless he is within five yards of the line of scrimmage. Prior to a pass that goes beyond the line of scrimmage, a defender may not be impeded more than one yard past that line. Otherwise, any player may block another player's passage, so long as he does not hold or trip the player he intends to block. The kicker may not be contacted after the kick but before his kicking leg returns to the ground (this rule is not enforced upon a player who has blocked a kick). The quarterback may not be hit or tackled after throwing the ball; prior to that point, he may not be hit below the knees or above the shoulders while in the pocket (i.e., behind the offensive line). Kicking Canadian football distinguishes four ways of kicking the ball: Place kick Kicking a ball held on the ground by a teammate, or, on a kickoff, optionally placed on a tee (two different tees are used for kickoffs and convert/field goal attempts). Drop kick Kicking a ball after bouncing it on the ground. Although rarely used today, it has the same status in scoring as a place kick. This play is part of the game's rugby heritage, and was largely made obsolete when the ball with pointed ends was adopted. Unlike the American game, Canadian rules allow a drop kick to be attempted at any time by any player, but the move is very rare. Punt Kicking the ball after it has been released from the kicker's hand and before it hits the ground. Punts may not score a field goal, even if one should travel through the uprights. As with drop kicks, players may punt at any time.
Dribbled ball A dribbled ball is one that has been kicked while not in possession of a player, for example, a loose ball following a fumble, a blocked kick, a kickoff, or a kick from scrimmage. The kicker of the dribbled ball and any player onside when the ball was kicked may legally recover the ball. On any kicking play, all onside players (the kicker, and teammates behind the kicker at the time of the kick) may recover and advance the ball. Players on the kicking team who are not onside may not approach within five yards of the ball until it has been touched by the receiving team, or by an onside teammate. Scoring The methods of scoring are: Touchdown Achieved when the ball is in possession of a player in the opponent's end zone, or when the ball in the possession of a player crosses or touches the plane of the opponent's goal-line, worth 6 points (5 points until 1956). A touchdown in Canadian football is often referred to as a "major score" or simply a "major". Conversion (or convert) After a touchdown, the team that scored gets one scrimmage play to attempt to add one or two more points. If they make what would normally be a field goal, they score one point (a "point-after"); what would normally be a touchdown scores two points (a "two-point conversion"). In amateur games, this scrimmage is taken at the opponents' 5-yard line. The CFL formerly ran all conversion attempts from the 5-yard line as well (for a 12-yard kick), but starting in 2015 the line of scrimmage for one-point kick attempts became the 25-yard line (for a 32-yard kick), while two-point attempts are scrimmaged at the 3-yard line. No matter what happens on the convert attempt, play then continues with a kickoff (see below). 
Field goal Scored by a drop kick or place kick (except on a kickoff) when the ball, after being kicked and without again touching the ground, goes over the cross bar and between the goal posts (or between lines extended from the top of the goal posts) of the opponent's goal, worth three points. If the ball hits the upright above the cross-bar before going through, it is not considered a dead ball, and the points are scored. (Rule 5, Sect 4, Art 4(d)) If the field goal is missed, but the ball is not returnable after crossing the dead-ball-line, then it constitutes a rouge (see below). Safety Scored when the ball becomes dead in the possession of a team in its own goal area, or when the ball touches or crosses the dead-line, or side-line-in-goal and touches the ground, a player, or some object beyond these lines as a result of the team scored against making a play. It is worth two points. This is different from a single (see below) in that the team scored against begins with possession of the ball. The most common safety is on a third down punt from the end zone, in which the kicker decides not to punt and keeps the ball in his team's own goal area. The ball is then turned over to the receiving team (who gained the two points), by way of a kickoff from the 25-yard line or scrimmaging from the line on their side of the field. Single (rouge) Scored when the ball becomes dead in the possession of a team in its own goal area, or when the ball touches or crosses the dead-line, or side-line-in-goal, and touches the ground, a player, or some object beyond these lines as a result of the ball having been kicked from the field of play into the goal area by the scoring team. It is worth one point. This is different from a Safety (see above) in that team scored against receives possession of the ball after the score. Officially, the single is called a rouge (French for "red") but is often referred to as a single. 
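The point values of the scoring plays described above can be summarized in a small lookup table. A minimal sketch (the play names are illustrative labels, not official CFL terminology):

```python
# Point values for each scoring play in Canadian football, as described
# above (touchdowns have been worth 6 points since 1956).
SCORING_VALUES = {
    "touchdown": 6,
    "field_goal": 3,
    "safety": 2,
    "two_point_conversion": 2,
    "one_point_convert": 1,
    "single": 1,  # officially a rouge
}

def score_total(plays):
    """Sum the points for a sequence of scoring plays."""
    return sum(SCORING_VALUES[play] for play in plays)

# A touchdown with a two-point conversion, plus a field goal and a rouge:
print(score_total(["touchdown", "two_point_conversion", "field_goal", "single"]))  # 12
```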
The exact derivation of the term is unknown, but it has been thought that in early Canadian football, the scoring of a single was signalled with a red flag. A rouge is also a method of scoring in the Eton field game, which dates from at least 1815. Resumption of play Resumption of play following a score is conducted under procedures which vary with the type of score. Following a touchdown and convert attempt (successful or not), play resumes with the scoring team kicking off from its own 35-yard line (45-yard line in amateur leagues). Following a field goal, the non-scoring team may choose for play to resume either with a kickoff as above, or by scrimmaging the ball from its own 35-yard line. Following a safety, the scoring team may choose for play to resume in either of the above ways, or it may choose to kick off from its own 35-yard line. Following a single/rouge, play resumes with the non-scoring team scrimmaging from its own 35-yard line, unless the single is awarded on a missed field goal, in which case the non-scoring team scrimmages from either the 35-yard line or the yard line from which the field goal was attempted, whichever is greater. Game timing The game consists of two 30-minute halves, each of which is divided into two 15-minute quarters. The clock counts down from 15:00 in each quarter. Timing rules change when there are three minutes remaining in a half. A short break interval of 2 minutes occurs after the end of each quarter (a longer break of 15 minutes at halftime), and the two teams then change goals. In the first 27 minutes of a half, the clock stops when: points are scored, the ball goes out of bounds, a forward pass is incomplete, the ball is dead and a penalty flag has been thrown, the ball is dead and teams are making substitutions (e.g., possession has changed, punting situation, short yardage situation), the ball is dead and a player is injured, or the ball is dead and a captain or a coach calls a time-out. 
The clock starts again when the referee determines the ball is ready for scrimmage, except for team time-outs (where the clock starts at the snap), after a time count foul (at the snap) and kickoffs (where the clock starts not at the kick but when the ball is first touched after the kick). In the last three minutes of a half, the clock stops whenever the ball becomes dead. On kickoffs, the clock starts when the ball is first touched after the kick. On scrimmages, when it starts depends on what ended the previous play. The clock starts when the ball is ready for scrimmage except that it starts on the snap when on the previous play the ball was kicked off, the ball was punted, the ball changed possession, the ball went out of bounds, there were points scored, there was an incomplete forward pass, there was a penalty applied (not declined), or there was a team time-out. During the last three minutes of a half, the penalty for failure to place the ball in play within the 20-second play clock, known as a "time count violation" (this foul is known as "delay of game" in American football), is dramatically different from during the first 27 minutes. Instead of the penalty being 5 yards with the down repeated, the base penalty (except during convert attempts) becomes loss of down on first or second down, and 10 yards on third down with the down repeated. In addition, as noted previously, the referee can give possession to the defence for repeated deliberate time count violations on third down. The clock does not run during convert attempts in the last three minutes of a half. If the 15 minutes of a quarter expire while the ball is live, the quarter is extended until the ball becomes dead. If a quarter's time expires while the ball is dead, the quarter is extended for one more scrimmage. A quarter cannot end while a penalty is pending: after the penalty yardage is applied, the quarter is extended one scrimmage. 
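The scrimmage clock-start rule for the last three minutes of a half, as listed above, reduces to a single set-membership test. A minimal sketch (the event names are illustrative, not official rulebook terms):

```python
# Events on the previous play that force the clock to start on the snap
# rather than when the ball is ready for scrimmage, per the rule for the
# last three minutes of a half described above.
SNAP_START_EVENTS = {
    "kickoff",
    "punt",
    "change_of_possession",
    "out_of_bounds",
    "points_scored",
    "incomplete_pass",
    "penalty_applied",   # a declined penalty does not count
    "team_time_out",
}

def clock_start(previous_play_events):
    """Return when the clock starts for the next scrimmage play."""
    if SNAP_START_EVENTS & set(previous_play_events):
        return "on the snap"
    return "when the ball is ready for scrimmage"

print(clock_start(["incomplete_pass"]))   # on the snap
print(clock_start(["tackle_in_bounds"]))  # when the ball is ready for scrimmage
```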
Note that the non-penalized team has the option to decline any penalty it considers disadvantageous, so a losing team cannot indefinitely prolong a game by repeatedly committing infractions. Overtime In the CFL, if the game is tied at the end of regulation play, then each team is given an equal number of offensive possessions to break the tie. A coin toss is held to determine which team will take possession first; the first team scrimmages the ball at the opponent's 35-yard line and conducts a series of downs until it scores or loses possession. Starting with the 2010 season, a team that scores a touchdown is required to attempt a two-point conversion. The other team then scrimmages the ball at the opponent's 35-yard line and has the same opportunity to score. After the teams have completed their possessions, if one team is ahead, then it is declared the winner; otherwise, the two teams each get another chance to score, scrimmaging from the other 35-yard line. After this second round, if there is still no winner, during the regular season the game ends as a tie. In a playoff game, the teams continue to attempt to score from alternating 35-yard lines, until one team is leading after both have had an equal number of possessions. In U Sports football, for the Uteck Bowl, Mitchell Bowl, and Vanier Cup, the same overtime procedure is followed until there is a winner. Officials and fouls Officials are responsible for enforcing game rules and monitoring the clock. All officials carry a whistle and wear black-and-white striped shirts and black caps except for the referee, whose cap is white. Each carries a weighted orange flag that is thrown to the ground to signal that a foul has been called. An official who spots multiple fouls will throw their cap as a secondary signal.
The seven officials (of a standard seven-man crew; lower levels of play up to the university level use fewer officials) on the field are each tasked with a different set of responsibilities: The referee is positioned behind and to the side of the offensive backs. The referee is charged with oversight and control of the game and is the authority on the score, the down number, and any rule interpretations in discussions among the other officials. The referee announces all penalties and discusses the infraction with the offending team's captain, monitors for illegal hits against the quarterback, makes requests for first-down measurements, and notifies the head coach whenever a player is ejected. The referee positions themselves to the passing arm side of the quarterback. In most games, the referee is responsible for spotting the football prior to a play from scrimmage. The umpire is positioned in the defensive backfield. The umpire watches play along the line of scrimmage to make sure that no more than 12 offensive players are on the field before the snap. The umpire monitors contact between offensive and defensive linemen and calls most of the holding penalties. The umpire records the number of timeouts taken and the winner of the coin toss and the game score, assists the referee in situations involving possession of the ball close to the line of scrimmage, determines whether player equipment is legal, and dries wet balls prior to the snap if a game is played in rain. The back judge is positioned deep in the defensive backfield, behind the umpire. The back judge ensures that the defensive team has no more than 12 players on the field and determines whether catches are legal, whether field goal or extra point attempts are good, and whether a pass interference violation occurred. The back judge is also responsible for the play clock, the time between each play, when a visible play clock is not used. 
The head linesman/down judge is positioned on one end of the line of scrimmage. The head linesman watches for any line-of-scrimmage and holding violations and assists the line judge with illegal procedure calls. The head linesman also rules on out-of-bounds calls that happen on their side of the field, oversees the chain crew and marks the forward progress of a runner when a play has been whistled dead. The side judge is positioned 20 yards downfield of the head linesman. The side judge mainly duplicates the functions of the field judge. On field goal and extra point attempts, the side judge is positioned lateral to the umpire. The line judge is positioned on the end of the line of scrimmage, opposite the head linesman. They supervise player substitutions, the line of scrimmage during punts, and game timing. The line judge notifies the referee when time has expired at the end of a quarter and notifies the head coach of the home team when five minutes remain for halftime. In the CFL, the line judge also alerts the referee when three minutes remain in the half. If the clock malfunctions or becomes inoperable, the line judge becomes the official timekeeper. The field judge is positioned 20 yards downfield from the line judge. The field judge monitors and controls the play clock, counts the number of defensive players on the field and watches for offensive pass interference and holding violations by offensive players. The field judge also makes decisions regarding catches, recoveries and the ball spot when a player goes out of bounds. On field goal and extra-point attempts, the field judge is stationed under the upright opposite the back judge. Another set of officials, the chain crew, is responsible for moving the chains. The chains, consisting of two large sticks with a 10-yard-long chain between them, are used to measure for a first down. 
The chain crew stays on the sidelines during the game, but if requested by the officials they will briefly bring the chains on to the field to measure. A typical chain crew will have at least three people: two members of the chain crew each hold one of the two sticks, while a third holds the down marker. The down marker, a large stick with a dial on it, is flipped after each play to indicate the current down and is typically moved to the approximate spot of the ball. The chain crew system has been used for over 100 years and is considered to be an accurate measure of distance, rarely subject to criticism from either side. Severe weather In the CFL, a game must be delayed if lightning strikes within of the stadium or for other severe weather conditions, or if dangerous weather is anticipated. In the regular season, if play has not resumed after 1 hour and at least half of the third quarter has been completed, the score stands as final; this happened for the first time on August 9, 2019, when a Saskatchewan–Montreal game was stopped late in the third quarter. If the stoppage is earlier in the game, or if it is a playoff or Grey Cup game, play may be stopped for up to 3 hours and then resume. After 3 hours of stoppage, play is terminated at least for the day. A playoff or Grey Cup game must then be resumed the following day at the point where it left off. In the regular season, if a game is stopped for 3 hours and one team is leading by at least a certain amount, then that team is awarded a win. The size of lead required is 21, 17, or 13 points depending on whether the stoppage is in the first, second, or third quarter respectively. If neither team is leading by that much and they are not scheduled to play again in the season, the game is declared a tie.
If a regular-season game is stopped for 3 hours and neither team is leading by the required amount to be awarded a win, but the two teams are scheduled to play again later in the season, then the stopped game is decided by a "two-possession shootout" procedure held before the later game is started. The procedure is generally similar to overtime in the CFL, with two major exceptions: each team must play exactly two possessions regardless of what happens; and while the score from the stopped game is not added to the shootout score, it is used instead to determine the yard line where each team starts its possessions, so the team that was leading still has an advantage. Positions The positions in Canadian football have evolved throughout the years, and are not officially defined in the rules. However, there are still several standard positions, as outlined below. Offence The offence must have at least seven players lined up along the line of scrimmage on every play. The players on either end (usually the wide receivers) are eligible to receive forward passes, and may be in motion along the line of scrimmage prior to the snap. The other players on the line of scrimmage (usually the offensive linemen) are ineligible to receive forward passes.
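The weather-shortening rules above amount to a small decision table: the required lead depends on the quarter of the stoppage, and the fallback depends on whether a rematch is scheduled. A rough sketch:

```python
# Minimum lead required for a weather-shortened regular-season CFL game
# (stopped for 3 hours) to be awarded as a win, keyed by the quarter in
# which play was stopped, per the rules described above.
LEAD_REQUIRED = {1: 21, 2: 17, 3: 13}

def weather_result(quarter, lead, rematch_scheduled):
    """Outcome of a regular-season game stopped for 3 hours.

    quarter: quarter in which play was stopped (1-3).
    lead: absolute score difference at the stoppage.
    rematch_scheduled: whether the teams meet again this season.
    """
    if lead >= LEAD_REQUIRED[quarter]:
        return "win awarded to leading team"
    if rematch_scheduled:
        return "two-possession shootout before the later game"
    return "declared a tie"

print(weather_result(2, 18, False))  # win awarded to leading team
print(weather_result(3, 10, True))   # two-possession shootout before the later game
```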
Days 21 to 29 are written with the character Niàn () before the characters one through nine; Niànsān (), for example, is the 23rd day of the month. Day 30 (as applicable) is written as the numeral Sānshí (). History books use days of the month numbered with the 60 stem-branches. Because astronomical observation determines month length, dates on the calendar correspond to moon phases. The first day of each month is the new moon. On the seventh or eighth day of each month, the first-quarter moon is visible in the afternoon and early evening. On the 15th or 16th day of each month, the full moon is visible all night. On the 22nd or 23rd day of each month, the last-quarter moon is visible late at night and in the morning. Since the beginning of the month is determined by when the new moon occurs, other countries using this calendar use their own time standards to calculate it; this results in deviations. The first new moon in 1968 was at 16:29 UTC on 29 January. Since North Vietnam used UTC+07:00 to calculate their Vietnamese calendar and South Vietnam used UTC+08:00 (Beijing time) to calculate theirs, North Vietnam began the Tết holiday on 29 January at 23:29 while South Vietnam began it on 30 January at 00:15. The time difference allowed asynchronous attacks in the Tet Offensive. Names of months Lunar months were originally named according to natural phenomena. Current naming conventions use numbers as the month names. Every month is also associated with one of the twelve Earthly Branches. Gregorian dates are approximate and should be used with caution. Many years have intercalary months. Chinese lunar date conventions Though the numbered month names are often used for the corresponding month number in the Gregorian calendar, it is important to realize that the numbered month names are not interchangeable with the Gregorian months when talking about lunar dates.
Incorrect: The Dragon Boat Festival falls on 5 May in the Lunar Calendar, whereas the Double Ninth Festival, Lantern Festival, and Qixi Festival fall on 9 September, 15 January, and 7 July in the Lunar Calendar, respectively. Correct: The Dragon Boat Festival falls on Wǔyuè 5th (or, 5th day of the fifth month) in the Lunar Calendar, whereas the Double Ninth Festival, Lantern Festival, and Qixi Festival fall on Jiǔyuè 9th (or, 9th day of the ninth month), Zhēngyuè 15th (or, 15th day of the first month), and Qīyuè 7th (or, 7th day of the seventh month) in the Lunar Calendar, respectively. Alternate Chinese Zodiac correction: The Dragon Boat Festival falls on Horse Month 5th on the Lunar Calendar, whereas the Double Ninth Festival, Lantern Festival, and Qixi Festival fall on Dog Month 9th, Tiger Month 15th, and Monkey Month 7th on the Lunar Calendar, respectively. One may also determine the heavenly stem and earthly branch corresponding to a particular day in the month, and those corresponding to its month and its year, to determine the Four Pillars of Destiny associated with it; the most convenient publication to consult for this is the Tung Shing, also referred to as the Chinese Almanac of the year, or the Huangli, which contains the essential information concerning Chinese astrology. Days rotate through a sexagenary cycle marked by coordination between heavenly stems and earthly branches, hence the reference to the Four Pillars of Destiny as "Bazi", or "Birth Time Eight Characters", with each pillar consisting of a character for its corresponding heavenly stem and another for its earthly branch. Since Huangli days are sexagenary, their order is quite independent of their numeric order in each month, and of their numeric order within a week (referred to as True Animals in relation to the Chinese zodiac).
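As a rough illustration of the sexagenary day cycle just described, the sketch below advances stems and branches together by modular arithmetic from an assumed reference day. The reference date and its cycle index are placeholders, not a verified historical epoch; a real almanac anchors the cycle to a known stem-branch day.

```python
# The 60-day stem-branch cycle combines 10 heavenly stems with 12 earthly
# branches; because both advance together each day, only 60 of the 120
# pairings occur. Given any reference date with a known cycle index,
# every other day's pair follows by modular arithmetic.
from datetime import date

STEMS = ["jia", "yi", "bing", "ding", "wu", "ji", "geng", "xin", "ren", "gui"]
BRANCHES = ["zi", "chou", "yin", "mao", "chen", "si",
            "wu", "wei", "shen", "you", "xu", "hai"]

REFERENCE_DATE = date(2000, 1, 1)   # assumption: placeholder reference day
REFERENCE_INDEX = 0                 # placeholder cycle index; 0 = jia-zi

def day_pillar(d):
    """Stem-branch pair for date d, relative to the assumed reference."""
    offset = (d - REFERENCE_DATE).days
    index = (REFERENCE_INDEX + offset) % 60
    return STEMS[index % 10], BRANCHES[index % 12]

print(day_pillar(REFERENCE_DATE))    # ('jia', 'zi')
print(day_pillar(date(2000, 3, 1)))  # 60 days later: ('jia', 'zi') again
```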
Arriving at the Four Pillars of Destiny for a particular date therefore requires painstaking calculation, which is rarely more convenient than simply consulting the Huangli by looking up the date's Gregorian equivalent. Solar term The solar year (), the time between winter solstices, is divided into 24 solar terms known as jié qì (節氣). Each term is a 15° portion of the ecliptic. These solar terms mark both Western and Chinese seasons, as well as equinoxes, solstices, and other Chinese events. The even solar terms (marked with "Z", for ) are considered the major terms, while the odd solar terms (marked with "J", for ) are deemed minor. The solar terms qīng míng (清明) on 5 April and dōng zhì (冬至) on 22 December are both celebrated events in China. Solar year The calendar solar year, known as the suì (), begins at the December solstice and proceeds through the 24 solar terms. Because the speed of the Sun's apparent motion along the ecliptic is variable, the time between major solar terms is not fixed. This variation results in different solar year lengths. There are generally 11 or 12 complete months, plus two incomplete months around the winter solstice, in a solar year. The complete months are numbered from 0 to 10, and the incomplete months are considered the 11th month. If there are 12 complete months in the solar year, it is known as a leap solar year, or leap suì. Because of these inconsistencies in the length of the solar year, different versions of the traditional calendar might have different average solar year lengths. For example, one solar year of the 1st century BC Tàichū calendar is 365.25016 days. A solar year of the 13th-century Shòushí calendar is 365.2425 days, identical to the Gregorian calendar. The additional 0.00766 day from the Tàichū calendar leads to a one-day shift every 130.5 years. Pairs of solar terms are climate terms, or solar months.
The first solar term is "pre-climate" (), and the second is "mid-climate" (). If there are 12 complete months within a solar year, the first month without a mid-climate is the leap, or intercalary, month. In other words, the first month that does not include a major solar term is the leap month. Leap months are numbered with rùn , the character for "intercalary", plus the name of the month they follow. In 2017, the intercalary month after month six was called Rùn Liùyuè, or "intercalary sixth month" () and written as 6i or 6+. The next intercalary month (in 2020, after month four) was called Rùn Sìyuè () and written 4i or 4+. Lunisolar year The lunisolar year begins with the first spring month, Zhēngyuè (), and ends with the last winter month, Làyuè (). All other months are named for their number in the month order. Years were traditionally numbered by the reign in ancient China, but this was abolished after the founding of the People's Republic of China in 1949. For example, the year from 8 February 2016 to 27 January 2017 was a Bǐngshēn year (). During the Tang dynasty, the Earthly Branches were used to mark the months from December 761 to May 762. Over this period, the year began with the winter solstice. Age reckoning In China, a person's official age is based on the Gregorian calendar. For traditional use, age is based on the Chinese Sui calendar. 100 days after birth, a child is considered one year old (nine months of gestation plus three), using ordinal numerals instead of counting from "zero" with cardinal numerals; after each Chinese New Year, one year is added to their traditional age. That is, age is the number of Chinese years in which they have lived. Because of the potential for confusion, infant ages are often given in months instead of years. After the Gregorian calendar's introduction in China, the traditional Chinese age was referred to as the "nominal age" () and the Gregorian age was known as the "real age" ().
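The age-reckoning rule above can be sketched as a simple count: a child is one in its first year, and gains a year at each Chinese New Year rather than on its birthday. A minimal sketch; the New Year dates vary by year and must be supplied, and the example dates below are illustrative.

```python
# Traditional (sui) age as described above: counted from one, incremented
# at each Chinese New Year lived through rather than on the birthday.
from datetime import date

def traditional_age(birth, today, new_year_dates):
    """Sui age: 1 at birth, plus 1 for every Chinese New Year lived through."""
    return 1 + sum(1 for ny in new_year_dates if birth < ny <= today)

# Hypothetical example: a child born just before a Chinese New Year is
# already "two" within days of birth.
new_years = [date(2023, 1, 22), date(2024, 2, 10)]
print(traditional_age(date(2023, 1, 20), date(2023, 2, 1), new_years))  # 2
```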
Year-numbering systems Eras In ancient China, years were numbered from a new emperor's assumption of the throne or an existing emperor's announcement of a new era name. The first recorded reign title was Jiànyuán (), from 140 BC; the last reign title was Xuāntǒng (), from AD 1908. The era system was abolished in 1912, after which the current or Republican era was used. Stem-branches The 60 stem-branches have been used to mark the date since the Shang dynasty (1600–1046 BC). Astrologers knew that the orbital period of Jupiter is about 4,332 days. Since 4332 is 12 × 361, Jupiter's orbital period was divided into 12 years () of 361 days each. The stem-branches system solved the era system's problem of unequal reign lengths. Continuous numbering Nomenclature similar to that of the Christian era has occasionally been used: Huángdì year (), starting at the beginning of the reign of the Yellow Emperor (year 1 at 2697 BC or 2698 BC) Yáo year (), starting at the beginning of the reign of Emperor Yao (year 1 at 2156 BC) Gònghé year (), starting at the beginning of the Gonghe Regency (year 1 at 841 BC) Confucius year (), starting at the birth year of Confucius (year 1 at 551 BC) Unity year (), starting at the beginning of the reign of Qin Shi Huang (year 1 at 221 BC) No reference date is universally accepted. The most popular is the Gregorian calendar (). On 2 January 1912, Sun Yat-sen announced changes to the official calendar and era. 1 January was 14 Shíyīyuè of Huángdì year 4609, assuming a year 1 of 2698 BC. The change was adopted by many overseas Chinese communities, such as San Francisco's Chinatown. During the 17th century, the Jesuits tried to determine the epochal year of the Han calendar.
In his Sinicae historiae decas prima (published in Munich in 1658), Martino Martini (1614–1661) dated the ascension of the Yellow Emperor to 2697 BC and began the Chinese calendar with the reign of Fuxi (which, according to Martini, began in 2952 BC). Philippe Couplet's 1686 Chronological table of Chinese monarchs (Tabula chronologica monarchiae sinicae) gave the same date for the Yellow Emperor. The Jesuits' dates provoked interest in Europe, where they were used for comparison with Biblical chronology. Modern Chinese chronology has generally accepted Martini's dates, except that it usually places the reign of the Yellow Emperor at 2698 BC and omits his predecessors Fuxi and Shennong as "too legendary to include". Publications began using the estimated birth date of the Yellow Emperor as the first year of the Han calendar in 1903, with newspapers and magazines proposing different dates. The province of Jiangsu counted 1905 as the year 4396 (using a year 1 of 2491 BC), and the newspaper Ming Pao () reckoned 1905 as 4603 (using a year 1 of 2698 BC). Liu Shipei (, 1884–1919) created the Yellow Emperor Calendar, with year 1 as the birth of the emperor (which he determined as 2711 BC). There is no evidence that this calendar was used before the 20th century. Liu calculated that the 1900 international expedition sent by the Eight-Nation Alliance to suppress the Boxer Rebellion entered Beijing in the 4611th year of the Yellow Emperor. Chinese New Year The date of the Chinese New Year accords with the patterns of the lunisolar calendar and hence varies from year to year. However, two general rules govern the date. Firstly, Chinese New Year falls on the second new moon following the December solstice. If there is a leap month after the eleventh or twelfth month, then Chinese New Year falls on the third new moon after the December solstice.
Alternatively, Chinese New Year will fall on the new moon that is closest to lì chūn, or the solar term that begins spring (typically falling on 4 February). However, this rule is not as reliable, since it can be difficult to determine which new moon is the closest in the case of an early or late Chinese New Year. Chinese New Year moves back by either 10, 11, or 12 days in most years. If it falls before 21 January, then it moves forward in the next year by either 18, 19, or 20 days. Phenology The plum-rains season (), the rainy season in late spring and early summer, begins on the first bǐng day after Mangzhong () and ends on the first wèi day after Xiaoshu (). The Three Fu () are three periods of hot weather, counted from the first gēng day
To fix this, traditional Chinese years have a 13-month year approximately once every three years. The 13-month version has the same alternation of long and short months, but adds a 30-day leap month () at the end of the year. Years with 12 months are called common years, and 13-month years are known as long years. Although most of the above rules were used until the Tang dynasty, different eras used different systems to keep lunar and solar years aligned. The synodic month of the Taichu calendar was days long. The 7th-century, Tang-dynasty Wùyín Yuán Calendar was the first to determine month length by synodic month instead of the cycling method. Since then, month lengths have primarily been determined by observation and prediction. The days of the month are always written with two characters and numbered beginning with 1. Days one to 10 are written with the day's numeral, preceded by the character Chū (); Chūyī () is the first day of the month, and Chūshí () the 10th. Days 11 to 20 are written as regular Chinese numerals; Shíwǔ () is the 15th day of the month, and Èrshí () the 20th.
Days 21 to 29 are written with the character Niàn () before the characters one through nine; Niànsān (), for example, is the 23rd day of the month. Day 30 (as applicable) is written as the numeral Sānshí (). History books use days of the month numbered with the 60 stem-branches: Because astronomical observation determines month length, dates on the calendar correspond to moon phases. The first day of each month is the new moon. On the seventh or eighth day of each month, the first-quarter moon is visible in the afternoon and early evening. On the 15th or 16th day of each month, the full moon is visible all night. On the 22nd or 23rd day of each month, the last-quarter moon is visible late at night and in the morning. Since the beginning of the month is determined by when the new moon occurs, other countries using this calendar use their own time standards to calculate it; this results in deviations. The first new moon in 1968 was at 16:29 UTC on 29 January. Since North Vietnam used UTC+07:00 to calculate their Vietnamese calendar and South Vietnam used UTC+08:00 (Beijing time) to calculate theirs, North Vietnam began the Tết holiday at 29 January at 23:29 while South Vietnam began it on 30 January at 00:15. The time difference allowed asynchronous attacks in the Tet Offensive. Names of months Lunar months were originally named according to natural phenomena. Current naming conventions use numbers as the month names. Every month is also associated with one of the twelve Earthly Branches. Gregorian dates are approximate and should be used with caution. Many years have intercalary months. Chinese lunar date conventions Though the numbered month names are often used for the corresponding month number in the Gregorian calendar, it is important to realize that the numbered month names are not interchangeable with the Gregorian months when talking about lunar dates. 
Incorrect: The Dragon Boat Festival falls on 5 May in the Lunar Calendar, whereas the Double Ninth Festival, Lantern Festival, and Qixi Festival fall on 9 September, 15 January, and 7 July in the Lunar Calendar, respectively. Correct: The Dragon Boat Festival falls on Wǔyuè 5th (or, 5th day of the fifth month) in the Lunar Calendar, whereas the Double Ninth Festival, Lantern Festival, and Qixi Festival fall on Jiǔyuè 9th (or, 9th day of the ninth month), Zhēngyuè 15th (or, 15th day of the first month), and Qīyuè 7th (or, 7th day of the seventh month) in the Lunar Calendar, respectively. Alternate Chinese Zodiac correction: The Dragon Boat Festival falls on Horse Month 5th on the Lunar Calendar, whereas the Double Ninth Festival, Lantern Festival, and Qixi Festival fall on Dog Month 9th, Tiger Month 15th, and Monkey Month 7th on the Lunar Calendar, respectively. One may even find the heavenly stem and earthly branch corresponding to a particular day in the month, to its month, and to its year, in order to determine the Four Pillars of Destiny associated with it. The most convenient publication to consult for this is the Tung Shing, also referred to as the Chinese Almanac of the year or the Huangli, which contains the essential information concerning Chinese astrology. Days rotate through a sexagenary cycle marked by coordination between heavenly stems and earthly branches; hence the Four Pillars of Destiny are also referred to as "Bazi", or "Birth Time Eight Characters", with each pillar consisting of a character for its corresponding heavenly stem and another for its earthly branch. Since Huangli days are sexagenaric, their order is quite independent of their numeric order in each month, and of their numeric order within a week (referred to as True Animals in relation to the Chinese zodiac).
Arriving at the Four Pillars of Destiny for a particular date therefore requires painstaking calculation, which rarely outpaces the convenience of simply consulting the Huangli by looking up its Gregorian date. Solar term The solar year (), the time between winter solstices, is divided into 24 solar terms known as jié qì (節氣). Each term is a 15° portion of the ecliptic. These solar terms mark both Western and Chinese seasons, as well as equinoxes, solstices, and other Chinese events. The even solar terms (marked with "Z", for ) are considered the major terms, while the odd solar terms (marked with "J", for ) are deemed minor. The solar terms qīng míng (清明) on 5 April and dōng zhì (冬至) on 22 December are both celebrated events in China. Solar year The calendar solar year, known as the suì (), begins at the December solstice and proceeds through the 24 solar terms. Because the speed of the Sun's apparent motion along the ecliptic is variable, the time between major solar terms is not fixed. This variation in time between major solar terms results in different solar year lengths. There are generally 11 or 12 complete months, plus two incomplete months around the winter solstice, in a solar year. The complete months are numbered from 0 to 10, and the incomplete months are considered the 11th month. If there are 12 complete months in the solar year, it is known as a leap solar year, or leap suì. Due to the inconsistencies in the length of the solar year, different versions of the traditional calendar might have different average solar year lengths. For example, one solar year of the 1st century BC Tàichū calendar is (365.25016) days. A solar year of the 13th-century Shòushí calendar is (365.2425) days, identical to the Gregorian calendar. The additional 0.00766 day of the Tàichū calendar leads to a one-day shift every 130.5 years. Pairs of solar terms are climate terms, or solar months.
The first solar term is "pre-climate" (), and the second is "mid-climate" (). If there are 12 complete months within a solar year, the first month without a mid-climate is the leap, or intercalary, month. In other words, the first month that doesn't include a major solar term is the leap month. Leap months are numbered with rùn (), the character for "intercalary", plus the name of the month they follow. In 2017, the intercalary month after month six was called Rùn Liùyuè, or "intercalary sixth month" (), and written as 6i or 6+. The next intercalary month (in 2020, after month four) will be called Rùn Sìyuè () and written 4i or 4+. Lunisolar year The lunisolar year begins with the first spring month, Zhēngyuè (), and ends with the last winter month, Làyuè (). All other months are named for their number in the month order. Years were traditionally numbered by the reign in ancient China, but this was abolished after the founding of the People's Republic of China in 1949. For example, the year from 8 February 2016 to 27 January 2017 was a Bǐngshēn year () of . During the Tang dynasty, the Earthly Branches were used to mark the months from December 761 to May 762. Over this period, the year began with the winter solstice. Age reckoning In China, a person's official age is based on the Gregorian calendar. For traditional use, age is based on the Chinese sui calendar. A child is considered one year old 100 days after birth (nine months of gestation plus three), using ordinal numerals (instead of "zero" using cardinal numerals); after each Chinese New Year, one year is added to their traditional age. That is, age is the number of Chinese years in which they have lived. Because of the potential for confusion, infant ages are often given in months instead of years. After the Gregorian calendar's introduction in China, the Chinese traditional age was referred to as the "nominal age" () and the Gregorian age was known as the "real age" ().
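The age-reckoning rule above — a person's traditional age is the number of Chinese calendar years in which they have lived, increasing by one at each Chinese New Year — can be sketched in Python. This is a minimal sketch: the New Year dates must be supplied from a published calendar (the two used below are the 2016 and 2017 dates given earlier in the text), and the 100-day convention for the first year is ignored for simplicity.

```python
from datetime import date

def traditional_age(birth: date, today: date, new_year_dates: list[date]) -> int:
    """Sui age: the number of Chinese calendar years lived in, i.e. one
    for the year of birth plus one for every Chinese New Year that has
    passed since birth."""
    if today < birth:
        raise ValueError("today must not precede birth")
    new_years_passed = sum(1 for d in new_year_dates if birth < d <= today)
    return 1 + new_years_passed

# Chinese New Year fell on 8 February 2016 and 28 January 2017.
cny = [date(2016, 2, 8), date(2017, 1, 28)]
print(traditional_age(date(2016, 2, 1), date(2016, 2, 1), cny))  # 1: first Chinese year
print(traditional_age(date(2016, 2, 1), date(2016, 3, 1), cny))  # 2: one New Year passed
print(traditional_age(date(2016, 2, 1), date(2017, 2, 1), cny))  # 3: two New Years passed
```

A child born a week before Chinese New Year is thus already two sui a week after birth, which is why infant ages are often given in months instead.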
Year-numbering systems Eras In ancient China, years were numbered from a new emperor's assumption of the throne or an existing emperor's announcement of a new era name. The first recorded reign title was Jiànyuán (), from 140 BC; the last reign title was Xuāntǒng (), from AD 1908. The era system was abolished in 1912, after which the current or Republican era was used. Stem-branches The 60 stem-branches have been used to mark the date since the Shang dynasty (1600–1046 BC). Astrologers knew that the orbital period of Jupiter is about 4,332 days. Since 4332 is 12 × 361, Jupiter's orbital period was divided into 12 years () of 361 days each. The stem-branches system solved the era system's problem of unequal reign lengths. Continuous numbering Nomenclature similar to that of the Christian era has occasionally been used: Huángdì year (), starting at the beginning of the reign of the Yellow Emperor (year 1 at 2697 BC or 2698 BC; year or at AD) Yáo year (), starting at the beginning of the reign of Emperor Yao (year 1 at 2156 BC; year at AD) Gònghé year (), starting at the beginning of the Gonghe Regency (year 1 at 841 BC; year at AD) Confucius year (), starting at the birth year of Confucius (year 1 at 551 BC; year at AD) Unity year (), starting at the beginning of the reign of Qin Shi Huang (year 1 at 221 BC; year at AD) No reference date is universally accepted. The most popular is the Gregorian calendar (). On 2 January 1912, Sun Yat-sen announced changes to the official calendar and era. 1 January was 14 Shíyīyuè 4609 Huángdì year, assuming a year 1 of 2698 BC, which makes at AD. The change was adopted by many overseas Chinese communities, such as San Francisco's Chinatown. During the 17th century, the Jesuits tried to determine the epochal year of the Han calendar. 
In his Sinicae historiae decas prima (published in Munich in 1658), Martino Martini (1614–1661) dated the ascension of the Yellow Emperor to 2697 BC and began the Chinese calendar with the reign of Fuxi (which, according to Martini, began in 2952 BC). Philippe Couplet's 1686 Chronological table of Chinese monarchs (Tabula chronologica monarchiae sinicae) gave the same date for the Yellow Emperor. The Jesuits' dates provoked interest in Europe, where they were used for comparison with Biblical chronology. Modern Chinese chronology has generally accepted Martini's dates, except that it usually places the reign of the Yellow Emperor at 2698 BC and omits his predecessors Fuxi and Shennong as "too legendary to include". Publications began using the estimated birth date of the Yellow Emperor as the first year of the Han calendar in 1903, with newspapers and magazines proposing different dates. The province of Jiangsu counted 1905 as the year 4396 (using a year 1 of 2491 BC, and implying that is AD), and the newspaper Ming Pao () reckoned 1905 as 4603 (using a year 1 of 2698 BC, and implying that is AD). Liu Shipei (, 1884–1919) created the Yellow Emperor Calendar, with year 1 as the birth of the emperor (which he determined as 2711 BC, implying that is AD). There is no evidence that this calendar was used before the 20th century. Liu calculated that the 1900 international expedition sent by the Eight-Nation Alliance to suppress the Boxer Rebellion entered Beijing in the 4611th year of the Yellow Emperor. Chinese New Year The date of the Chinese New Year follows the patterns of the lunisolar calendar and hence varies from year to year. However, two general rules govern the date. Firstly, Chinese New Year falls on the second new moon following the December solstice. If there is a leap month after the eleventh or twelfth month, then Chinese New Year falls on the third new moon after the December solstice.
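The new-moon rule just described is a simple selection once the astronomical events are known. The sketch below shows only that selection logic; the new-moon dates must come from ephemeris-grade tables, and the dates used here are illustrative placeholders rather than authoritative values.

```python
from datetime import date

def chinese_new_year(solstice: date, new_moons: list[date],
                     leap_after_month_11_or_12: bool) -> date:
    """Second new moon after the December solstice, or the third when a
    leap month follows the eleventh or twelfth month."""
    following = sorted(d for d in new_moons if d > solstice)
    return following[2] if leap_after_month_11_or_12 else following[1]

# Illustrative dates only; real use requires astronomical new-moon times.
moons = [date(2022, 11, 23), date(2022, 12, 23),
         date(2023, 1, 21), date(2023, 2, 20)]
print(chinese_new_year(date(2022, 12, 21), moons, False))  # 2023-01-21
```

Note that, as the text explains for the 1968 Tết dates, the time zone used to convert new-moon instants into calendar dates can shift the result by a day.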
Alternatively, Chinese New Year will fall on the new moon that is closest to lì chūn, or the solar term that begins spring (typically 4 February). However, this rule is not as reliable, since it can be difficult to determine which new moon is the closest in the case of an early or late Chinese New Year. It has been found that Chinese New Year moves back by either 10, 11, or 12 days in some years. If it falls before 21 January, then it moves forward in the next year by either 18, 19, or 20 days. Phenology The plum-rains season (), the rainy season in late spring and early summer, begins on the first bǐng day after Mangzhong () and ends on the first wèi day after Xiaoshu (). The Three Fu () are three periods of hot weather, counted from the first gēng day after the summer solstice. The first fu () is 10 days long. The mid-fu () is 10 or 20 days long. The last fu () begins on the first gēng day after the beginning of autumn and lasts 10 days. The Shujiu cold days () are the 81 days after the winter solstice (divided into nine sets of nine days), and are considered the coldest days of the year. Each nine-day unit is known by its order in the set, followed by "nine" (). Common holidays based on the Chinese (lunisolar) calendar There are several traditional and religious holidays shared by communities throughout the world that use the Chinese (lunisolar) calendar: Holidays with the same day and same month The Chinese New Year (known as the Spring Festival/春節 in China) is on the first day of the first month and was traditionally called the Yuan Dan (元旦) or Zheng Ri (正日). In Vietnam, it is known as Tết Nguyên Đán () and in Korea, it is known as 설날. Traditionally it was the most important holiday of the year. It is an official holiday in China, Hong Kong, Macau, Taiwan, Vietnam, Korea, the Philippines, Malaysia, Singapore, and Indonesia.
It is also a public holiday in Thailand's Narathiwat, Pattani, Yala, and Satun provinces and is an official public school holiday in New York City. The Double Third Festival is on the third day of the third month and in Korea is known as 삼짇날 (samjinnal). The Dragon Boat Festival, or the Duanwu Festival (端午節), is on the fifth day of the fifth month and is an official holiday in China, Hong Kong, Macau, and Taiwan. It is also celebrated in Vietnam, where it is known as Tết Đoan Ngọ (節端午), and in Korea, where it is known as 단오 (端午) (Dano) or 수릿날 (戌衣日/水瀨日) (surinal) (both Hanja are used, as they are homonyms). The Qixi Festival (七夕節) is celebrated in the evening of the seventh day of the seventh month. It is also celebrated in Vietnam, where it is known as Thất tịch (七夕), and in Korea, where it is known as 칠석 (七夕) (chilseok). The Double Ninth Festival (重陽節) is celebrated on the ninth day of the ninth month. It is also celebrated in Vietnam, where it is known as Tết Trùng Cửu (節重九), and in Korea, where it is known as 중양절 (jungyangjeol). Full moon holidays (holidays on the fifteenth day) The Lantern Festival is celebrated on the fifteenth day of the first month and was traditionally called the Yuan Xiao (元宵) or Shang Yuan Festival (上元節). In Vietnam, it is known as Tết Thượng Nguyên (節上元) and in Korea, it is known as 대보름 (大보름) Daeboreum (or the Great Full Moon). The Zhong Yuan Festival is celebrated on the fifteenth day of the seventh month. In Vietnam, it is celebrated as Tết Trung Nguyên (中元節) or Lễ Vu Lan (禮盂蘭) and in Korea it is known as 백중 (百中/百種) Baekjong or 망혼일 (亡魂日) Manghongil (Deceased Spirit Day) or 중원 (中元) Jungwon. The Mid-Autumn Festival is celebrated on the fifteenth day of the eighth month. In Vietnam, it is celebrated as Tết Trung Thu (節中秋) and in Korea it is known as 추석 (秋夕) Chuseok. The Xia Yuan Festival is celebrated on the fifteenth day
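The sexagenary stem-branch year cycle described in the year-numbering section above reduces to two modular lookups. A minimal sketch: 1984 is used as the anchor because it was a Jiǎzǐ year opening a 60-year cycle, and the result names the Chinese year that begins at that Gregorian year's New Year (dates before that New Year belong to the previous stem-branch year).

```python
# Pinyin names of the 10 heavenly stems and 12 earthly branches.
STEMS = ["Jiǎ", "Yǐ", "Bǐng", "Dīng", "Wù", "Jǐ", "Gēng", "Xīn", "Rén", "Guǐ"]
BRANCHES = ["Zǐ", "Chǒu", "Yín", "Mǎo", "Chén", "Sì",
            "Wǔ", "Wèi", "Shēn", "Yǒu", "Xū", "Hài"]

def stem_branch(year: int) -> str:
    """Stem-branch name of the Chinese year that starts within the given
    Gregorian year, anchored on 1984 (a Jiǎzǐ year)."""
    offset = year - 1984
    return STEMS[offset % 10] + BRANCHES[offset % 12].lower()

print(stem_branch(1984))  # Jiǎzǐ
print(stem_branch(2016))  # Bǐngshēn, matching the example given in the text
```

Because lcm(10, 12) = 60, the combination repeats exactly every 60 years, which is why the stem-branch system disambiguates dates that reign-era numbering could not.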
Employee training Many firms have also implemented training programs to teach employees how to recognize and effectively create strong customer-brand relationships. For example, Harley Davidson sent its employees on the road with customers, who were motorcycle enthusiasts, to help solidify relationships. Other employees have also been trained in social psychology and the social sciences to help bolster strong customer relationships. Customer service representatives must be educated to value customer relationships and trained to understand existing customer profiles. Even the finance and legal departments should understand how to manage and build relationships with customers. In practice Call centers Contact centre CRM providers are popular for small and mid-market businesses. These systems codify the interactions between the company and customers by using analytics and key performance indicators to give the users information on where to focus their marketing and customer service. This allows agents to have access to a caller's history to provide personalized customer communication. The intention is to maximize average revenue per user, decrease churn rate and decrease idle and unproductive contact with the customers. Growing in popularity is the idea of gamifying, or using game design elements and game principles in a non-game environment such as customer service environments. The gamification of customer service environments includes providing elements found in games like rewards and bonus points to customer service representatives as a method of feedback for a job well done.
Gamification tools can motivate agents by tapping into their desire for rewards, recognition, achievements, and competition. Contact-center automation Contact-center automation (CCA), the practice of having an integrated system that coordinates contacts between an organization and the public, is designed to reduce the repetitive and tedious parts of a contact center agent's job. Automation achieves this with pre-recorded audio messages that help customers solve their own problems. For example, an automated contact center may be able to re-route a customer through a series of commands asking him or her to select a certain number to speak with a particular contact center agent who specializes in the field in which the customer has a question. Software tools can also integrate with the agent's desktop tools to handle customer questions and requests. This also saves time on behalf of the employees. Social media Social CRM involves the use of social media and technology to engage and learn from consumers. Because the public, especially young people, are increasingly using social networking sites, companies use these sites to draw attention to their products, services and brands, with the aim of building up customer relationships to increase demand. With the increase in the use of social media platforms, integrating CRM with the help of social media can potentially be a quicker and more cost-friendly process. Some CRM systems integrate social media sites like Twitter, LinkedIn, and Facebook to track and communicate with customers. These customers also share their own opinions and experiences with a company's products and services, giving these firms more insight. These firms can thus both share their own opinions and track those of their customers. Enterprise feedback management software platforms combine internal survey data with trends identified through social media to allow businesses to make more accurate decisions on which products to supply.
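The combination of internal survey data with social-media trends can be illustrated with a toy sketch. Everything here is invented for illustration — the product names, scores, mention counts, and the 70/30 weighting are assumptions, not the behavior of any particular enterprise feedback platform.

```python
# Invented illustrative data: average survey rating (1-5) and counts of
# positive social-media mentions per product.
survey_scores = {"widget-a": 4.2, "widget-b": 3.1, "widget-c": 4.8}
social_mentions = {"widget-a": 120, "widget-b": 340, "widget-c": 45}

def blended_score(product: str, survey_weight: float = 0.7) -> float:
    """Blend the normalized survey and social signals into one score."""
    survey = survey_scores[product] / 5.0
    social = social_mentions[product] / max(social_mentions.values())
    return survey_weight * survey + (1.0 - survey_weight) * social

# Rank products by the blended score, best first.
ranked = sorted(survey_scores, key=blended_score, reverse=True)
print(ranked)  # ['widget-b', 'widget-c', 'widget-a']
```

Even this toy version shows the trade-off such platforms manage: heavy social buzz can outrank a better-surveyed product, so the weighting between the two signals is itself a business decision.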
Location-based services CRM systems can also include technologies that create geographic marketing campaigns. The systems take in information based on a customer's physical location and sometimes integrate it with popular location-based GPS applications. It can be used for networking or contact management as well, to help increase sales based on location. Business-to-business transactions Despite the general notion that CRM systems were created for customer-centric businesses, they can also be applied to B2B environments to streamline and improve customer management conditions. For the best level of CRM operation in a B2B environment, the software must be personalized and delivered at individual levels. The main differences between business-to-consumer (B2C) and business-to-business CRM systems concern aspects like sizing of contact databases and length of relationships. Market trends Social networking At the 2010 Gartner CRM Summit, challenges such as capturing data from social-networking traffic (for example, Twitter and Facebook page addresses and other online social networking sites) were discussed, and solutions were proposed to help bring in more clientele. The era of the "social customer" refers to the use of social media by customers. Mobile Some CRM systems are equipped with mobile capabilities, making information accessible to remote sales staff. Cloud computing and SaaS Many CRM vendors offer subscription-based web tools (cloud computing) and SaaS. Salesforce.com was the first company to provide enterprise applications through a web browser, and has maintained its leadership position. Traditional providers moved into the cloud-based market via acquisitions of smaller providers: Oracle purchased RightNow in October 2011, and Taleo and Eloqua in 2012; and SAP acquired SuccessFactors in December 2011.
Sales and sales force automation Sales forces also play an important role in CRM, as maximizing sales effectiveness and increasing sales productivity are driving forces behind the adoption of CRM software. Some of the top CRM trends identified in 2021 include focusing on customer service automation such as chatbots, hyper-personalization based on customer data and insights, and the use of unified CRM systems. CRM vendors support sales productivity with different products, such as tools that measure the effectiveness of ads that appear in 3D video games. Pharmaceutical companies were some of the first investors in sales force automation (SFA), and some are on their third- or fourth-generation implementations. However, until recently, the deployments did not extend beyond SFA, limiting their scope and interest to Gartner analysts. Vendor relationship management Another related development is vendor relationship management (VRM), which provides tools and services that allow customers to manage their individual relationships with vendors. VRM development has grown out of efforts by ProjectVRM at Harvard's Berkman Center for Internet & Society and Identity Commons' Internet Identity Workshops, as well as by a growing number of startups and established companies. VRM was the subject of a cover story in the May 2010 issue of CRM Magazine. Customer success Another trend worth noting is the rise of Customer Success as a discipline within companies. More and more companies establish Customer Success teams as separate from the traditional Sales team and task them with managing existing customer relations. This trend fuels demand for additional capabilities for a more holistic understanding of customer health, which is a limitation for many existing vendors in the space. As a result, a growing number of new entrants enter the market while existing vendors add capabilities in this area to their suites.
AI and predictive analytics In 2017, artificial intelligence and predictive analytics were identified as the newest trends in CRM. Criticism Companies face large challenges when trying to implement CRM systems. Consumer companies frequently manage their customer relationships haphazardly and unprofitably. They may not effectively or adequately use their connections with customers. CRM applications can improve customer satisfaction for several different reasons. Firstly, firms can customize their offerings for each customer. By accumulating information across customer interactions and processing this information to discover hidden patterns, CRM applications help firms customize their offerings to suit the individual tastes of their customers. This customization enhances the perceived quality of products and services from a customer's viewpoint, and because the perceived quality is a determinant of customer satisfaction, it follows that CRM applications indirectly affect customer satisfaction. CRM applications also enable firms to provide timely, accurate processing of customer orders and requests and the ongoing management of customer accounts. For example, Piccoli and Applegate discuss how Wyndham uses IT tools to deliver a consistent service experience across its various properties to a customer. Both an improved ability to customize and reduced variability of the consumption experience enhance perceived quality, which in turn positively affects customer satisfaction. Furthermore, CRM applications also help firms manage customer relationships more effectively across the stages of relationship initiation, maintenance, and termination. Customer benefits With customer relationship management systems, customers are served better in day-to-day processes. With more reliable information, their demand for self-service from companies will decrease. If there is less need to interact with the company for different problems, the customer satisfaction level increases.
These central benefits of CRM will be connected hypothetically to the three kinds of equity — relationship, value, and brand — and in the end to customer equity. Eight benefits were recognized as value drivers: enhanced ability to target profitable customers; integrated assistance across channels; enhanced sales force efficiency and effectiveness; improved pricing; customized products and services; improved customer service efficiency and effectiveness; individualized marketing messages, also called campaigns; and connecting customers and all channels on a single platform. In 2012, a review of the previous studies selected those benefits which are most significant to customer satisfaction and summarized them into the following cases: Improve customer services: In general, customers have questions, concerns, or requests. CRM services give a company the ability to produce, allocate, and manage requests made by customers. For example, call centre software, which helps to connect a customer to the manager or person who can best assist them with their existing problem, is one of the CRM abilities that can be implemented to increase efficiency. Increased personalized service or one-to-one service: Personalizing customer service or one-to-one service enables companies to improve their understanding and knowledge of their customers, and also to gain better knowledge of their customers' preferences, requirements and demands. Responsive to customers' needs: Customers' situations and needs can be understood by firms focusing on customer needs and requirements. Customer segmentation: In CRM, segmentation is used to categorize customers, according to some similarity such as industry, job or some other characteristics, into similar groups. These characteristics can be one or more attributes. Segmentation can be defined as subdividing customers based on an already known good discriminator.
Improve customization of marketing: Customization of marketing means that the firm or organization adapts and changes its services or products to present a different and unique product or service to each customer. Organizations use customization to ensure that customer needs and requirements are met. Companies can invest in information from customers and then customize their products or services to maintain customer interests. Multichannel integration: Multichannel integration shows the point of co-creation of customer value in CRM. On the other hand, a company's ability to perform multichannel integration successfully is heavily dependent on the organization's ability to gather customer information from all channels and incorporate it with other related information. Time saving: CRM lets companies interact with customers more frequently, with personalized messages and communication that can be produced rapidly and matched on a timely basis; ultimately they can better understand their customers and therefore anticipate their needs. Improve customer knowledge: Firms can make and improve products and services through information gained from tracking customer behaviour (e.g. via website tracking) and from customer tastes and needs. CRM could contribute to a competitive advantage by improving a firm's ability to collect customer information and customize products and services according to customer needs. Examples Research has found a 5% increase in customer retention boosts lifetime customer profits by 50% on average across multiple industries, as well as a boost of up to 90% within specific industries such as insurance. Companies that have mastered customer relationship strategies have the most successful CRM programs. For example, MBNA Europe has had a 75% annual profit growth since 1995. The firm heavily invests in screening potential cardholders.
Once proper clients are identified, the firm retains 97% of its profitable customers. They implement CRM by marketing the right products to the right customers. The firm's customers' card usage is 52% above the industry norm, and the average expenditure is 30% more per transaction. Also 10% of their account holders ask for more information on cross-sale products. Amazon has also seen great success through its customer proposition. The firm implemented personal greetings, collaborative filtering, and more for the customer. It also used CRM training for its employees and saw up to 80% of customers become repeat customers. Customer profile A customer profile is a detailed description of any particular classification of customer which is created to represent the typical users of a product or service. Customer profiling is a method of understanding customers in terms of demographics, behaviour and lifestyle. It is used to help make customer-focused decisions without confusing the scope of the project with personal opinion. Overall profiling is gathering information that sums up consumption habits so far and projects them into the future so that they can be grouped for marketing and advertising purposes. Customer or consumer profiles are the essences of the data that is collected alongside core data (name, address, company) and processed through customer analytics methods, essentially a type of profiling. The three basic methods of customer profiling are the psychographic approach, the consumer typology approach, and the consumer characteristics approach. These customer profiling methods help a business design itself around its customers and make better customer-centered decisions. Improving CRM within a firm Consultants argue that it is important for companies to establish strong CRM systems to improve their relational intelligence. According to this argument, a company must recognize that people have many different types of relationships with different brands.
One research study analyzed relationships between consumers in China, Germany, Spain, and the United States, with over 200 brands in 11 industries including airlines, cars, and media. This information is valuable as it provides demographic, behavioral, and value-based customer segmentation. These types of relationships can be both positive and negative. Some customers view themselves as friends of the brands, while others as enemies, and some are mixed with a love-hate relationship with the brand. Some relationships are distant, intimate, or anything in between. Analyzing the information Managers must understand the different reasons for the types of relationships, and provide the customer with what they are looking for. Companies can collect this information by using surveys, interviews, and more, with current customers. Companies must also improve the relational intelligence of their CRM systems. These days, companies store and receive huge amounts of data through emails, online chat sessions, phone calls, and more. Many companies do not properly make use of this great amount of data, however. All of these are signs of what types of relationships the customer wants with the firm, and therefore companies may consider investing more time and effort in building out their relational intelligence. Companies can use data mining technologies and web searches to understand relational signals. Social media such as social networking sites, blogs, and forums can also be used to collect and analyze information. Understanding the customer and capturing this data allows companies to convert customers' signals into information and knowledge that the firm can use to understand a potential customer's desired relations with a brand. Employee training Many firms have also implemented training programs to teach employees how to recognize and effectively create strong customer-brand relationships. 
For example, Harley Davidson sent its employees on the road with customers, who were motorcycle enthusiasts, to help solidify relationships. Other employees have also been trained in social psychology and the social sciences to help bolster strong customer relationships. Customer service representatives must be educated to value customer relationships and trained to understand existing customer profiles. Even the finance and legal departments should understand how to manage and build relationships with customers. In practice Call centers Contact centre CRM providers are popular for small and mid-market businesses. These systems codify the interactions between the company and customers by using analytics and key performance indicators to give the users information on where to focus their marketing and customer service. This allows agents to have access to a caller's history to provide personalized customer communication. The intention is to maximize average revenue per user, decrease churn rate and decrease idle and unproductive contact with the customers. Growing in popularity is the idea of gamifying, or using game design elements and game principles in a non-game environment such as customer service environments. The gamification of customer service environments includes providing elements found in games like rewards and bonus points to customer service representatives as a method of feedback for a job well done. Gamification tools can motivate agents by tapping into their desire for rewards, recognition, achievements, and competition. Contact-center automation Contact-center automation, CCA, the practice of having an integrated system that coordinates contacts between an organization and the public, is designed to reduce the repetitive and tedious parts of a contact center agent's job. Automation prevents this by having pre-recorded audio messages that help customers solve their problems. 
For example, an automated contact center may be able to re-route a customer through a series of commands asking the caller to select a certain number to speak with a particular contact center agent who specializes in the field in which the customer has a question. Software tools can also integrate with the agent's desktop tools to handle customer questions and requests. This also saves time on behalf of the employees. Social media Social CRM involves the use of social media and technology to engage and learn from consumers. Because the public, especially young people, are increasingly using social networking sites, companies use these sites to draw attention to their products, services and brands, with the aim of building up customer relationships to increase demand. With the increase in the use of social media platforms, integrating CRM with the help of social media can potentially be a quicker and more cost-effective process. Some CRM systems integrate social media sites like Twitter, LinkedIn, and Facebook to track and communicate with customers. These customers also share their own opinions and experiences with a company's products and services, giving these firms more insight. Therefore, these firms can both share their own opinions and also track the opinions of their customers. Enterprise feedback management software platforms combine internal survey data with trends identified through social media to allow businesses to make more accurate decisions on which products to supply. Location-based services CRM systems can also include technologies that create geographic marketing campaigns. The systems take in information based on a customer's physical location and sometimes integrate it with popular location-based GPS applications. They can also be used for networking or contact management to help increase sales based on location.
Business-to-business transactions Despite the general notion that CRM systems were created for customer-centric businesses, they can also
The game resembles Crown and anchor, but with numbered dice instead of symbols. Additional wagers that are commonly seen, and their associated odds, are set out in the table below. House advantage or edge Chuck-a-luck is a game of chance. On average, the players are expected to lose more than they win. The casino's advantage (house advantage or house edge) is greater than in most other casino games and can be much greater. For example, there are 216 (6 × 6 × 6) possible outcomes for a single throw of three dice. For a specific number, there are 75 possible outcomes in which exactly one die matches the number, 15 in which exactly two dice match, and one in which all three dice match. At odds of 1 to 1, 2 to 1 and 10 to 1 respectively for each of these types of outcome, the expected loss as a percentage of the stake wagered is: 1 - ((75/216) × 2 + (15/216) × 3 + (1/216) × 11) = 4.6% At worse odds of 1 to 1, 2 to 1 and 3 to 1, the expected loss as a percentage of the stake wagered is: 1 - ((75/216) × 2 + (15/216) × 3 + (1/216) × 4) = 7.9% If the odds are adjusted to 1 to 1, 3 to 1 and 5 to 1 respectively, the expected loss as a percentage is: 1 - ((75/216) × 2 + (15/216) × 4 + (1/216) × 6) = 0% Commercially organised gambling games almost always have a house advantage, which acts as a fee for the privilege of being allowed to play the game, so the last scenario would represent a payout system used for a home game, where players take turns in the role of banker/casino. Variants Chuck-a-luck is essentially identical to the traditional Vietnamese game Bau cua ca cop. A version of the Big Six wheel is loosely based on chuck-a-luck, with various combinations of three dice appearing in 54 slots on a spinning wheel. Because of the distribution of the combinations, the house advantage or edge for this wheel is greater than for chuck-a-luck. In popular culture There is a reference to chuck-a-luck in the Abbott and Costello film Hold That Ghost. In Fritz Lang's 1952 film Rancho Notorious, chuck-a-luck is the name of the ranch run by Altar Keane (played by Marlene Dietrich) where outlaws hide from the law.
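The house-edge arithmetic above is easy to verify programmatically. The sketch below (the function name is illustrative, not part of any gaming standard) counts the 216 outcomes by the number of matching dice and computes the expected loss for each payout schedule:

```python
from math import comb

def expected_loss(payouts):
    """Expected loss per unit staked for a single-number chuck-a-luck bet.

    payouts: odds paid for exactly (one, two, three) matching dice,
    e.g. (1, 2, 10) for 1-to-1, 2-to-1 and 10-to-1.
    A winning bet returns the stake plus the payout.
    """
    total = 6 ** 3  # 216 equally likely outcomes for three dice
    # Outcomes with exactly k dice showing the chosen number: C(3,k) * 5^(3-k)
    ways = [comb(3, k) * 5 ** (3 - k) for k in (1, 2, 3)]  # [75, 15, 1]
    # Expected return per unit staked: a win on k matches returns (1 + payout)
    ev = sum(w * (1 + p) for w, p in zip(ways, payouts)) / total
    return 1 - ev  # expected loss as a fraction of the stake

print(f"{expected_loss((1, 2, 10)):.3f}")  # typical casino odds
print(f"{expected_loss((1, 2, 3)):.3f}")   # worse odds
print(f"{expected_loss((1, 3, 5)):.3f}")   # fair (home-game) odds
```

Running it reproduces the 4.6%, 7.9%, and 0% figures quoted above (10/216, 17/216, and 0 as fractions).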
Diet Chipmunks have an omnivorous diet primarily consisting of seeds, nuts and other fruits, and buds. They also commonly eat grass, shoots, and many other forms of plant matter, as well as fungi, insects and other arthropods, small frogs, worms, and bird eggs. They will also occasionally eat newly hatched baby birds. Around humans, chipmunks can eat cultivated grains and vegetables, and other plants from farms and gardens, so they are sometimes considered pests. Chipmunks mostly forage on the ground, but they climb trees to obtain nuts such as hazelnuts and acorns. At the beginning of autumn, many species of chipmunk begin to stockpile nonperishable foods for winter. They mostly cache their foods in a larder in their burrows and remain in their nests until spring, unlike some other species which make multiple small caches of food. Cheek pouches allow chipmunks to carry food items to their burrows for either storage or consumption. Ecology and life history Eastern chipmunks, the largest of the chipmunks, mate in early spring and again in early summer, producing litters of four or five young twice each year. Western chipmunks breed only once a year. The young emerge from the burrow after about six weeks and strike out on their own within the next two weeks. These small mammals fulfill several important functions in forest ecosystems. Their activities harvesting and hoarding tree seeds play a crucial role in seedling establishment. They consume many different kinds of fungi, including those involved in symbiotic mycorrhizal associations with trees, and are an important vector for dispersal of the spores of subterranean sporocarps (truffles) which have co-evolved with these and other mycophagous mammals and thus lost the ability to disperse their spores through the air. Chipmunks construct extensive burrows which can be more than in length with several well-concealed entrances. The sleeping quarters are kept clear of shells, and feces are stored in refuse tunnels.
The eastern chipmunk hibernates in the winter, while western chipmunks do not, relying on the stores in their burrows. Chipmunks play an important role as prey for various predatory mammals and birds but are also opportunistic predators themselves, particularly with regard to bird eggs and nestlings, as in the case of eastern chipmunks and mountain bluebirds (Sialia currucoides). Chipmunks typically live about three years, although some have been observed living to nine years in captivity. Chipmunks are diurnal. In captivity, they are said to sleep for an average of about 15 hours a day. It is thought that mammals which can sleep in hiding, such as rodents and bats, tend to sleep longer than those

the twentieth century have placed the chipmunks into a single genus. However, studies of mitochondrial DNA show that the divergence between each of the three chipmunk groups is comparable to the genetic differences between Marmota and Spermophilus, so the three genera classifications have been adopted here. The common name originally may have been spelled "chitmunk", from the native Odawa (Ottawa) word jidmoonh, meaning "red squirrel" (cf. Ojibwe ajidamoo). The earliest form cited in the Oxford English Dictionary (from 1842) is "chipmonk", but "chipmunk" appears in several books from the 1820s and 1830s. Other early forms include "chipmuck" and "chipminck", and in the 1830s they were also referred to as "chip squirrels", probably in reference to the sound they make. In the mid-19th century, John James Audubon and his sons included a lithograph of the chipmunk in their Viviparous Quadrupeds of North America, calling it the "chipping squirrel [or] hackee". Chipmunks have also been referred to as "striped squirrels", "timber tigers", "minibears", and "ground squirrels" (although the name "ground squirrel" usually refers to other squirrels, such as those of the genus Spermophilus).
Koenig produced algorithmic composition programs which were a generalisation of his own serial composition practice. This differs somewhat from Xenakis's work, in which mathematical abstractions were used and explored for how far they could be taken musically. Koenig's software translated the calculation of mathematical equations into codes which represented musical notation. This could be converted into musical notation by hand and then performed by human players. His programs Project 1 and Project 2 are examples of this kind of software. Later, he extended the same kind of principles into the realm of synthesis, enabling the computer to produce the sound directly. SSP is an example of a program which performs this kind of function. All of these programs were produced by Koenig at the Institute of Sonology in Utrecht in the 1970s. In the 2000s, Andranik Tangian developed a computer algorithm to determine the time event structures for rhythmic canons and rhythmic fugues, which were then "manually" worked out into the harmonic compositions Eine kleine Mathmusik I and Eine kleine Mathmusik II, performed by computer. Computer-generated scores for performance by human players Computers have also been used in an attempt to imitate the music of great composers of the past, such as Mozart. A present-day exponent of this technique is David Cope, whose computer programs analyse works of other composers to produce new works in a similar style. Cope's best-known program is Emily Howell. Melomics, a research project from the University of Málaga (Spain), developed a computer composition cluster named Iamus, which composes complex, multi-instrument pieces for editing and performance. In 2012 Iamus composed a full album, appropriately named Iamus, which New Scientist described as "the first major work composed by a computer and performed by a full orchestra".
The group has also developed an API for developers to utilize the technology, and makes its music available on its website. Computer-aided algorithmic composition Computer-aided algorithmic composition (CAAC, pronounced "sea-ack") is the implementation and use of algorithmic composition techniques in software. This label is derived from the combination of two labels, each too vague for continued use. The label computer-aided composition lacks the specificity of using generative algorithms. Music produced with notation or sequencing software could easily be considered computer-aided composition. The label algorithmic composition is likewise too broad, particularly in that it does not specify the use of a computer. The term computer-aided, rather than computer-assisted, is used in the same manner as computer-aided design. Machine improvisation Machine improvisation uses computer algorithms to create improvisation on existing music materials. This is usually done by sophisticated recombination of musical phrases extracted from existing music, either live or pre-recorded. In order to achieve credible improvisation in a particular style, machine improvisation uses machine learning and pattern matching algorithms to analyze existing musical examples. The resulting patterns are then used to create new variations "in the style" of the original music, developing a notion of stylistic reinjection. This is different from other improvisation methods with computers that use algorithmic composition to generate new music without performing analysis of existing music examples. Statistical style modeling Style modeling implies building a computational representation of the musical surface that captures important stylistic features from data. Statistical approaches are used to capture the redundancies in terms of pattern dictionaries or repetitions, which are later recombined to generate new musical data.
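As a minimal illustration of the statistical approach just described (a generic sketch, not an implementation of any system named in this article), a first-order Markov model can be trained on an existing note sequence and then sampled to generate new material "in the style" of the original. The corpus and note names below are invented:

```python
import random
from collections import defaultdict

def train(corpus):
    """Count first-order transitions between successive notes."""
    model = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        model[a].append(b)
    return model

def generate(model, start, length, seed=0):
    """Random-walk the transition table to produce a new sequence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:          # dead end: no observed continuation
            break
        out.append(rng.choice(choices))
    return out

# A toy "style" to imitate (hypothetical note names)
corpus = ["C", "E", "G", "E", "C", "D", "E", "G", "C"]
model = train(corpus)
print(generate(model, "C", 8))
```

Every transition in the generated sequence was observed in the training material, which is the essence of recombining existing phrases to produce stylistically consistent variations.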
Style mixing can be realized by analysis of a database containing multiple musical examples in different styles. Machine improvisation builds upon a long musical tradition of statistical modeling that began with Hiller and Isaacson's Illiac Suite for String Quartet (1957) and Xenakis's uses of Markov chains and stochastic processes. Modern methods include the use of lossless data compression for incremental parsing, prediction suffix trees, string searching, and more. Style mixing is possible by blending models derived from several musical sources, with the first style mixing done by S. Dubnov in the piece NTrope Suite using a Jensen-Shannon joint source model. Later, the factor oracle algorithm (a finite state automaton constructed incrementally in linear time and space) was adopted for music by Assayag and Dubnov and became the basis for several systems that use stylistic re-injection. Implementations The first implementation of statistical style modeling was the LZify method in Open Music, followed by the Continuator system, developed by François Pachet at Sony CSL Paris in 2002, which implemented interactive machine improvisation by interpreting LZ incremental parsing in terms of Markov models and using it for real-time style modeling. A Matlab implementation of factor oracle machine improvisation is available as part of the Computer Audition toolbox, and there is also an NTCC implementation. OMax is a software environment developed at IRCAM. OMax uses OpenMusic and Max. It is based on research on stylistic modeling carried out by Gerard Assayag and Shlomo Dubnov and on research on improvisation with the computer by G. Assayag, M. Chemillier and G. Bloch (a.k.a. the OMax Brothers) in the Ircam Music Representations group. One of the problems in modeling audio signals with the factor oracle is the symbolization of features, from continuous values to a discrete alphabet.
This problem was solved in the Variable Markov Oracle (VMO), available as a Python implementation, which uses an information rate criterion to find the optimal or most informative representation. Live coding Live coding (sometimes known as 'interactive programming', 'on-the-fly programming', or 'just in time programming') is the name given to the process of writing software in real time as part of a performance. Recently it has been explored as a more rigorous alternative to laptop musicians who, live coders often feel, lack the charisma and pizzazz of musicians performing live. See also Acousmatic music Adaptive music Chiptune Comparison of audio synthesis environments Csound Digital audio workstation Digital synthesizer Electronic music Emily Howell Fast Fourier transform Human–computer interaction Laptronica List of music software Module file Music information retrieval Music Macro Language Music notation software Music sequencer New Interfaces for Musical Expression Physical modeling synthesis Programming (music) Sampling (music) Sound and music computing Tracker Vaporwave Video game music Vocaloid References Further reading Ariza, C. 2005. "Navigating the Landscape of Computer-Aided Algorithmic Composition Systems: A Definition, Seven Descriptors, and a Lexicon of Systems and Research." In Proceedings of the International Computer Music Conference. San Francisco: International Computer Music Association. 765–772. Ariza, C. 2005. An Open Design for Computer-Aided Algorithmic Music Composition:
the brain uses to denote a class of things in the world. That is to say, it is literally a symbol or group of symbols made from the physical material of the brain. Concepts are mental representations that allow us to draw appropriate inferences about the type of entities we encounter in our everyday lives. Concepts do not encompass all mental representations, but are merely a subset of them. The use of concepts is necessary to cognitive processes such as categorization, memory, decision making, learning, and inference. Concepts are thought to be stored in long-term cortical memory, in contrast to episodic memory of the particular objects and events which they abstract, which is stored in the hippocampus. Evidence for this separation comes from patients with hippocampal damage, such as patient HM. The abstraction from the day's hippocampal events and objects into cortical concepts is often considered to be the computation underlying (some stages of) sleep and dreaming. Many people (beginning with Aristotle) report memories of dreams which appear to mix the day's events with analogous or related historical concepts and memories, and suggest that they were being sorted or organised into more abstract concepts. ("Sort" is itself another word for concept, and "sorting" thus means to organise into concepts.) Concepts as abstract objects The semantic view of concepts suggests that concepts are abstract objects. In this view, concepts are abstract objects of a category existing outside any human mind, rather than mental representations. There is debate as to the relationship between concepts and natural language. However, it is necessary at least to begin by understanding that the concept "dog" is philosophically distinct from the things in the world grouped by this concept (the reference class or extension). Concepts that can be equated to a single word are called "lexical concepts".
The study of concepts and conceptual structure falls into the disciplines of linguistics, philosophy, psychology, and cognitive science. In the simplest terms, a concept is a name or label that regards or treats an abstraction as if it had concrete or material existence, such as a person, a place, or a thing. It may represent a natural object that exists in the real world like a tree, an animal, a stone, etc. It may also name an artificial (man-made) object like a chair, computer, house, etc. Abstract ideas and knowledge domains such as freedom, equality, science, happiness, etc., are also symbolized by concepts. It is important to realize that a concept is merely a symbol, a representation of the abstraction. The word is not to be mistaken for the thing. For example, the word "moon" (a concept) is not the large, bright, shape-changing object up in the sky, but only represents that celestial object. Concepts are created (named) to describe, explain and capture reality as it is known and understood. A priori concepts Kant maintained the view that human minds possess pure or a priori concepts. Instead of being abstracted from individual perceptions, like empirical concepts, they originate in the mind itself. He called these concepts categories, in the sense of the word that means predicate, attribute, characteristic, or quality. But these pure categories are predicates of things in general, not of a particular thing. According to Kant, there are twelve categories that constitute the understanding of phenomenal objects. Each category is that one predicate which is common to multiple empirical concepts. In order to explain how an a priori concept can relate to individual phenomena, in a manner analogous to an a posteriori concept, Kant employed the technical concept of the schema. He held that the account of the concept as an abstraction of experience is only partly correct. 
He called those concepts that result from abstraction "a posteriori concepts" (meaning concepts that arise out of experience). An empirical or an a posteriori concept is a general representation (Vorstellung) or non-specific thought of that which is common to several specific perceived objects (Logic, I, 1., §1, Note 1). A concept is a common feature or characteristic. Kant investigated the way that empirical a posteriori concepts are created. Embodied content In cognitive linguistics, abstract concepts are transformations of concrete concepts derived from embodied experience. The mechanism of transformation is structural mapping, in which properties of two or more source domains are selectively mapped onto a blended space (Fauconnier & Turner, 1995; see conceptual blending). Metaphors are a common class of blends. This theory contrasts with the rationalist view that concepts are perceptions (or recollections, in Plato's term) of an independently existing world of ideas, in that it denies the existence of any such realm. It also contrasts with the empiricist view that concepts are abstract generalizations of individual experiences, because the contingent and bodily experience is preserved in a concept, and not abstracted away. While the perspective is compatible with Jamesian pragmatism, the notion of the transformation of embodied concepts through structural mapping makes a distinct contribution to the problem of concept formation. Realist universal concepts Platonist views of the mind construe concepts as abstract objects. Plato was the starkest proponent of the realist thesis of universal concepts. By his view, concepts (and ideas in general) are innate ideas that were instantiations of a transcendental world of pure forms that lay behind the veil of the physical world. In this way, universals were explained as transcendent objects. Needless to say, this form of realism was tied deeply with Plato's ontological projects.
This remark on Plato is not of merely historical interest. For example, the view that numbers are Platonic objects was revived by Kurt Gödel as a result of certain puzzles that he took to arise from the phenomenological accounts. Sense and reference Gottlob Frege, founder of the analytic tradition in philosophy, famously argued for the analysis of language in terms of sense and reference. For him, the sense of an expression in language describes a certain state of affairs in the world, namely, the way that some object is presented. Since many commentators view the notion of sense as identical to the notion of concept, and Frege regards senses as the linguistic representations of states of affairs in the world, it seems to follow that we may understand concepts as the manner in which we grasp the world. Accordingly, concepts (as senses) have an ontological status. Concepts in calculus According to Carl Benjamin Boyer, in the introduction to his The History of the Calculus and its Conceptual Development, concepts in calculus do not refer to perceptions. As long as the concepts are useful and mutually compatible, they are accepted on their own. For example, the concepts of the derivative and the integral are not considered to refer to spatial or temporal perceptions of the external world of experience. Neither are they related in any way to mysterious limits in which quantities are on the verge of nascence or evanescence, that is, coming into or going out of existence. The abstract concepts are now considered to be totally autonomous, even though they originated from the process of abstracting or taking away qualities from perceptions until only the common, essential attributes remained. Notable theories on the structure of concepts Classical theory The classical theory of concepts, also referred to as the empiricist theory of concepts, is the oldest theory about the structure of concepts (it can be traced back to Aristotle), and was prominently held until the 1970s. 
The classical theory of concepts says that concepts have a definitional structure. Adequate definitions of the kind required by this theory usually take the form of a list of features. These features must have two important qualities to provide a comprehensive definition. Features entailed by the definition of a concept must be both necessary and sufficient for membership in the class of things covered by a particular concept. A feature is considered necessary if every member of the denoted class has that feature. A set of features is considered sufficient if anything that has all of them belongs to the class. For example, the classic example bachelor is said to be defined by unmarried and man. An entity is a bachelor (by this definition) if and only if it is both unmarried and a man. To check whether something is a member of the class, you compare its qualities to the features in the definition. Another key part of this theory is that it obeys the law of the excluded middle, which means that there are no partial members of a class: you are either in or out. The classical theory persisted for so long unquestioned because it seemed intuitively correct and has great explanatory power. It can explain how concepts would be acquired, how we use them to categorize and how we use the structure of a concept to determine its referent class. In fact, for many years one of the major activities in philosophy was concept analysis. Concept analysis is the act of trying to articulate the necessary and sufficient conditions for the membership in the referent class
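The definitional structure described above maps directly onto a boolean membership test: the defining features are individually necessary and jointly sufficient, and by the law of the excluded middle membership is all-or-nothing. A minimal sketch, with illustrative feature names:

```python
def classical_member(entity_features, defining_features):
    """Classical-theory membership: every defining feature is necessary,
    and together they are sufficient -- no partial membership."""
    # Subset test: all defining features must be present in the entity
    return defining_features <= entity_features

# The classic example: bachelor = unmarried + man
bachelor = {"unmarried", "man"}

print(classical_member({"unmarried", "man", "tall"}, bachelor))  # True
print(classical_member({"man"}, bachelor))                       # False
```

The result is always strictly True or False, mirroring the theory's claim that there are no partial members of a class.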
While the perspective is compatible with Jamesian pragmatism, the notion of the transformation of embodied concepts through structural mapping makes a distinct contribution to the problem of concept formation. Realist universal concepts Platonist views of the mind construe concepts as abstract objects. Plato was the starkest proponent of the realist thesis of universal concepts. In his view, concepts (and ideas in general) are innate ideas that were instantiations of a transcendental world of pure forms that lay behind the veil of the physical world. In this way, universals were explained as transcendent objects. Needless to say, this form of realism was deeply tied to Plato's ontological projects. This remark on Plato is not of merely historical interest. For example, the view that numbers are Platonic objects was revived by Kurt Gödel as a result of certain puzzles that he took to arise from the phenomenological accounts. Sense and reference Gottlob Frege, founder of the analytic tradition in philosophy, famously argued for the analysis of language in terms of sense and reference. For him, the sense of an expression in language describes a certain state of affairs in the world, namely, the way that some object is presented. Since many commentators view the notion of sense as identical to the notion of concept, and Frege regards senses as the linguistic representations of states of affairs in the world, it seems to follow that we may understand concepts as the manner in which we grasp the world. Accordingly, concepts (as senses) have an ontological status. Concepts in calculus According to Carl Benjamin Boyer, in the introduction to his The History of the Calculus and its Conceptual Development, concepts in calculus do not refer to perceptions. As long as the concepts are useful and mutually compatible, they are accepted on their own.
For example, the concepts of the derivative and the integral are not considered to refer to spatial or temporal perceptions of the external world of experience. Neither are they related in any way to mysterious limits in which quantities are on the verge of nascence or evanescence, that is, coming into or going out of existence. The abstract concepts are now considered to be totally autonomous, even though they originated from the process of abstracting or taking away qualities from perceptions until only the common, essential attributes remained. Notable theories on the structure of concepts Classical theory The classical theory of concepts, also referred to as the empiricist theory of concepts, is the oldest theory about the structure of concepts (it can be traced back to Aristotle), and was prominently held until the 1970s. The classical theory of concepts says that concepts have a definitional structure. Adequate definitions of the kind required by this theory usually take the form of a list of features. These features must have two important qualities to provide a comprehensive definition. Features entailed by the definition of a concept must be both necessary and sufficient for membership in the class of things covered by a particular concept. A feature is considered necessary if every member of the denoted class has that feature. A set of features is considered jointly sufficient if anything possessing all of them thereby counts as a member of the class. The classic example is bachelor, said to be defined by the features unmarried and man. An entity is a bachelor (by this definition) if and only if it is both unmarried and a man. To check whether something is a member of the class, you compare its qualities to the features in the definition. Another key part of this theory is that it obeys the law of the excluded middle, meaning that there are no partial members of a class: something is either in or out.
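The definitional picture above can be sketched as a tiny program (a toy illustration, not drawn from the source; only the bachelor example's two features are taken from the text):

```python
# Toy illustration of the classical (definitional) theory of concepts:
# a concept is a set of individually necessary, jointly sufficient features,
# and membership is all-or-nothing (law of the excluded middle).

BACHELOR = {"unmarried", "man"}  # the classic two-feature definition

def is_member(entity_features, concept_definition):
    """An entity falls under the concept iff it has every defining feature."""
    return concept_definition <= entity_features  # subset test: all features present

print(is_member({"unmarried", "man", "tall"}, BACHELOR))  # True: has both features
print(is_member({"man"}, BACHELOR))                       # False: no partial membership
```

Note that the subset test returns a plain boolean: on this view there is no such thing as being a better or worse member of the class.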
The classical theory persisted unquestioned for so long because it seemed intuitively correct and had great explanatory power. It can explain how concepts are acquired, how we use them to categorize, and how we use the structure of a concept to determine its referent class. In fact, concept analysis, the attempt to articulate the necessary and sufficient conditions for membership in the referent class of a concept, was for many years one of the major activities in philosophy. For example, Shoemaker's classic "Time Without Change" explored whether the concept of the flow of time can include flows where no changes take place, though change is usually taken as a definition of time. Arguments against the classical theory Given that most later theories of concepts were born out of the rejection of some or all of the classical theory, it seems appropriate to give an account of what might be wrong with this theory. In the 20th century, philosophers such as Wittgenstein and Rosch argued against the classical theory. There are six primary arguments, summarized as follows: It seems that there simply are no definitions, especially ones based in sensory primitive concepts. There can be cases where our ignorance or error about a class means that we either do not know the definition of a concept or have incorrect notions about what a definition of a particular concept might entail. Quine's argument against analyticity in Two Dogmas of Empiricism also holds as an argument against definitions. Some concepts have fuzzy membership: there are items for which it is vague whether or not they fall into (or out of) a particular referent class. This is not possible in the classical theory, as everything has equal and full membership. Rosch found typicality effects which cannot be explained by the classical theory of concepts; these sparked the prototype theory (see below).
Psychological experiments show no evidence for our using concepts as strict definitions. Prototype theory Prototype theory came out of problems with the classical view of conceptual structure. Prototype theory says that concepts specify properties that members of a class tend to possess, rather than must possess. Wittgenstein, Rosch, Mervis, Berlin, Anglin, and Posner are a few of the key proponents and creators of this theory. Wittgenstein describes the relationship between members of a class as family resemblances. There need not be any necessary conditions for membership; a dog can still be a dog with only three legs. This view is particularly supported by psychological experimental evidence for prototypicality effects. Participants willingly and consistently rate objects in categories like 'vegetable' or 'furniture' as more or less typical of that class. It seems that our categories are psychologically fuzzy, and so this structure has explanatory power. We can judge an item's membership in the referent class of a concept by comparing it to the typical member, the most central member of the concept. If it is similar enough in the relevant ways, it will be cognitively admitted as a member of the relevant class of entities. Rosch suggests that every category is represented by a central exemplar which embodies all or the maximum possible number of features of a given category. Lech, Gunturkun, and Suchan explain that categorization involves many areas of the brain, among them the visual association areas, prefrontal cortex, basal ganglia, and temporal lobe. The prototype perspective is proposed as an alternative to the classical approach: while the classical theory requires all-or-nothing membership in a group, prototypes allow for fuzzier boundaries and are characterized by attributes.
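The contrast with the classical, all-or-nothing view can be sketched the same way (a toy illustration, not from the source; the bird prototype and feature sets are invented for the example):

```python
# Toy sketch of prototype-style graded membership: typicality is the
# fraction of the prototype's characteristic features an item shares,
# so membership comes in degrees rather than being all-or-nothing.
# (The feature lists below are invented for illustration.)

BIRD_PROTOTYPE = {"flies", "has_feathers", "lays_eggs", "sings", "small"}

def typicality(item_features, prototype):
    """Graded similarity to the category's central exemplar, from 0.0 to 1.0."""
    return len(item_features & prototype) / len(prototype)

robin = {"flies", "has_feathers", "lays_eggs", "sings", "small"}
penguin = {"has_feathers", "lays_eggs", "swims"}

print(typicality(robin, BIRD_PROTOTYPE))    # 1.0: a highly typical bird
print(typicality(penguin, BIRD_PROTOTYPE))  # 0.4: a bird, but an atypical one
```

The graded score mirrors the experimental finding that people rate a robin as a more typical bird than a penguin, something the subset test of the classical theory cannot express.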
Lakoff stresses that experience and cognition are critical to the function of language, and Labov's experiment found that the function an artifact served contributed to what people categorized it as. For example, a container holding mashed potatoes versus tea swayed people toward classifying it as a bowl or a cup, respectively. This experiment also illuminated the optimal dimensions of the prototype for "cup". Prototypes also deal with the essence of things and to what extent they belong to a
now published biweekly. Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2017 impact factor of 3.304.
Competition (Brussels, Belgium) Ronald Sachs International Music Competition – Strings, Woodwinds, Brass, Piano (North Carolina, US) Trinity International Music Competition - Online (Toronto, Canada) TROMP international music competition & Festival (Eindhoven, Netherlands) TONALi Music Competition (Hamburg, Germany) SVIRÉL International Music Competition and Festival for Soloists and Chamber Groups (Slovenia) Triomphe de l'Art International Music Competition (Brussels, Belgium) Furioso Violin Competition (online) Youth Zeal Music Competition (Helsinki, Finland) Zodiac International Music Competition (online) Large Ensembles Ictus International Music Competition. (online international competition for bands, orchestras and jazz ensembles) Musicology Annual Musicology Competition (Worldwide) Piano/keyboard Vienna Youth International Piano Competition (Vienna, Austria) Aarhus International Piano Competition (Aarhus, Denmark) Ambitus Orgelconcours (The Netherlands) American Paderewski Piano Competition (Los Angeles, US) American Protege International Piano and Strings Competition (US) Anton Rubinstein Competition (Dresden, Germany) Arthur Rubinstein International Piano Master Competition (Tel Aviv, Israel) Astral National Auditions (Philadelphia, US) Beethoven National Piano Competition (Chantilly, US) BNDES International Piano Competition (Rio de Janeiro, Brazil) Bradshaw and Buono International Piano Competition (New York, US) César Franck International Piano Competition (Brussels, Belgium) Ciutat de Carlet International Piano Competition (Valencia, Spain) Clara Haskil International Piano Competition (Vevey, Switzerland) Cleveland International Piano Competition (Cleveland, US) Concours Géza Anda (Zurich, Switzerland) Dallas Chamber Symphony International Piano Competition (Dallas, US) The Dranoff International Two Piano Foundation (Dallas, US) Dublin International Piano Competition (Dublin, Ireland) Epinal International Piano Competition (Epinal, France) EPTA - 
International Piano Competition Svetislav Stančić (Zagreb, Croatia) Eugen d'Albert International Music Competition (Giubiasco, Switzerland) Euro Elite International Music Competition (Toronto, Canada) Ferruccio Busoni International Piano Competition (Bolzano, Italy) Malta International Piano Competition, Valletta/Mdina, Malta Festival for Creative Pianists (US) George Enescu International Competition (Piano section) (Bucharest, Romania) Gina Bachauer International Piano Competition (Salt Lake City, US) Grand Maestro International Music Competition - Online (Canada) Grand Metropolitan International Music Competition - Online (Canada) Hamamatsu International Piano Competition (Hamamatsu, JP) Hilton Head International Piano Competition (Hilton Head Island, South Carolina, US) Holland International Piano Competition (The Hague, Netherlands) Honens International Piano Competition (Calgary, Canada) International Carl Bechstein Piano Competition (various, Europe) International Piano Competition "Johann Sebastian Bach" (Würzburg, Germany) International Beethoven Piano Competition Vienna (Internationaler Beethoven Klavierwettbewerb Wien; Vienna, Austria) International Chopin Piano Competition (Poland) International Cochran Piano Competition (Poland/Australia) International Ettore Pozzoli Piano Competition (Seregno, Italy) International Franz Liszt Piano Competition (Utrecht, Netherlands) International Fryderyk Chopin Piano Competition for Children and Youth (Szafarnia, Poland) International Geelvinck Fortepiano Concours (Amsterdam, The Netherlands) International Goedicke Organ Competition (Moscow, Russia) International Johann Sebastian Bach Competition (organ; Leipzig, Germany) International Mozart Piano Competition (Frascati (Rome), Italy) International Mozart Piano Competition (Salzburg, Austria) International Piano Competition for Outstanding Amateurs (Paris, France) International Radio Competition for Young Musicians Concertino Praga (Prague, Czech Republic) 
International Tchaikovsky Competition (Russia) International Telekom Beethoven Competition (Bonn, Germany) ISANGYUN Competition (Tongyeong, South Korea) JMIPC – James Mottram International Piano Competition at the Royal Northern College of Music (Manchester, England) José Iturbi International Piano Competition (Los Angeles, US) Karlovac International Piano Competition (Karlovac, Croatia) Kerikeri National Piano Competition (Kerikeri, New Zealand) Kissingen Piano Olympics (Kissinger Klavierolymp, Bad Kissingen, Germany) Lagny-sur-Marne International Piano Competition (Lagny, France) Leeds International Pianoforte Competition (Leeds, UK) Maria Canals International Music Competition (Barcelona, Spain) North American Virtuoso International Music Competition (Online) NTD International Piano Competition (New York, US) Olga Kern International Piano Competition (Albuquerque, New Mexico, US) ON STAGE International Classical Music Competition (Online Competition) Paloma O'Shea International Piano Competition (Santander, Spain) Panama International Piano Competition (Panama City, Panama) Peabody Mason International Piano Competition (Boston, US) Petr Eben International Organ Competition (Opava, Czech Republic) Pilar Bayona Piano Competition (Spain) Quebec Music Competition (Montreal, Quebec, Canada) Live and Online Queen Elisabeth Music Competition (Belgium) Ricard Viñes International Piano Competition (Lleida, Spain) Ronald Sachs International Music Competition The Gurwitz International Piano Competition, (San Antonio, Texas, US) San Diego International Piano Competition and Festival for Outstanding Amateurs (San Diego, California, US) Seattle Symphony Piano Competition (Seattle, Washington, US) Sendai International Music Competition (Sendai, Japan) The Shean Piano Competition (Edmonton, Alberta, Canada) St Albans International Organ Competition (St Albans, UK) Sussex International Piano Competition (Worthing, UK) Sydney International Piano Competition (Sydney, Australia) 
Thailand International Chopin Piano Competition (Bangkok, Thailand) Trinity International Music Competition - Online (Toronto, Canada) Triomphe de l'Art International Music Competition (Brussels, Belgium) US New Star Piano Competition (San Jose, US) Valencia International Piano Competition Prize Iturbi (Valencia, Spain) Van Cliburn International Piano Competition (Fort Worth, US) Vancouver International Music Competition (Vancouver, Canada) Vigo International Piano Competition (Vigo, Spain) World International Piano Competition (Santa Fe, US) Youth Zeal Music Competition (Helsinki, Finland) Young Pianist of the North International competition (Newcastle upon Tyne, UK) String instruments American Protege International Piano and Strings Competition (US) Animato International Violin Competition (Brisbane, Australia) Antonio Janigro International Cello Competition (Croatia) Appassionato International Youth Music Festival (Quebec, Canada) Carl Flesch International Violin Competition (London, UK; 1945–1992) Carl Nielsen International Music Competition (Odense, Denmark) Città di Brescia International Violin Competition (Brescia, Italy) Furioso violin Competition (online) George Enescu International Competition (Violin, Cello section) (Bucharest, Romania) Henryk Wieniawski Violin Competition (Poznań, Poland) International Arthur Grumiaux Competition for Young Violinists (Brussels, Belgium) International Brahms Competition (Pörtschach am Wörthersee, Austria) International Fritz Kreisler Competition (Vienna, Austria) International Jean Sibelius Violin Competition (Helsinki, Finland) International Violin Competition Leopold Mozart in Augsburg (Augsburg, Germany) International Radio Competition for Young Musicians Concertino Praga (Prague, Czech Republic) International Tchaikovsky Competition (Moscow, Russia) International Violin Competition Henri Marteau (Lichtenberg and Hof, Germany) International Violin Competition of Indianapolis (Indianapolis, US) ISANGYUN Competition 
(Tongyeong, South Korea) Klaipeda International Cello Competition (Klaipeda, Lithuania) Klein Competition (San Francisco, US) Lionel Tertis International Viola Competition (Isle of Man, UK) Maurice Vieux International Viola Competition (France) Michael Hill International Violin Competition (New Zealand) Moscow International David Oistrakh Violin Competition (Moscow, Russia) Paganini International Competition (Genoa, Italy) Primrose International Viola Competition (Albuquerque, New Mexico) Quebec Music Competition (Montreal, Quebec, Canada) Live and Online Queen Elisabeth Music Competition (Belgium) Ronald Sachs International Music Competition Mstislav Rostropovich International Cello Competition (Paris, France) Sendai International Music Competition (Sendai, Japan) Schoenfeld International String Competition (Harbin, China) The Shean Strings Competition (Edmonton, Alberta, Canada) Shanghai Isaac Stern International Violin Competition (Shanghai, China) Singapore International Violin Competition (Singapore) STREICHWERK International String Competition (Berlin, Germany) Stulberg International String Competition (US) Washington International Competition for Strings (Washington, D.C., US) National Chamber Ensemble Outstanding Young Artist Achievement Award Competition (Arlington, VA, US) Windsor Festival International String Competition (Windsor, UK) Triomphe de l'Art International Music Competition (Brussels, Belgium) Vancouver International Music Competition (Vancouver, Canada) Yehudi Menuhin
The 1st International IMMA Records Classical Music Competition Gaudeamus Competition (Netherlands) International Guitar Competition & Festival (Berlin, Germany) International Joseph Joachim Violin Competition Hanover International Performers Competition Brno (Brno, Czech Republic) International Radio Competition for
Young Musicians Concertino Praga (Prague, Czech Republic) Italy Percussion Competition (Montesilvano Pescara) Leos Janacek International Competition (Brno, Czech Republic) Montreal International Music Competition (Montreal, Canada) Matosinhos International Competition (Online)
the information prior to his presentation, Powell pushed for reform in the intelligence community, including the creation of a national intelligence director who would assure that "what one person knew, everyone else knew". Other foreign policy issues Additionally, Powell was critical of other aspects of U.S. foreign policy in the past, such as its support for the 1973 Chilean coup d'état. In two separate interviews in 2003, Powell commented on the 1973 coup. In one he stated, "I can't justify or explain the actions and decisions that were made at that time. It was a different time. There was a great deal of concern about communism in this part of the world. Communism was a threat to the democracies in this part of the world. It was a threat to the United States." In the other, however, he also simply stated, "With respect to your earlier comment about Chile in the 1970s and what happened with Mr. Allende, it is not a part of American history that we're proud of." In September 2004, Powell described the killings in Darfur as "genocide", thus becoming the first cabinet member to apply the term to events in an ongoing conflict. In November the president "forced Powell to resign", according to Walter LaFeber. Powell announced his resignation as Secretary of State on November 15, 2004, shortly after Bush was reelected. Bush's desire for Powell to resign was communicated to Powell in a phone call from Bush's chief of staff, Andrew Card. The following day, Bush nominated National Security Advisor Condoleezza Rice as Powell's successor. In mid-November, Powell stated that he had seen new evidence suggesting that Iran was adapting missiles for a nuclear delivery system. The accusation came at the same time as the settlement of an agreement between Iran, the IAEA, and the European Union. Although biographer Jeffrey J.
Matthews is highly critical of how Powell misled the United Nations Security Council regarding weapons of mass destruction in Iraq, he credits Powell with a series of achievements at the State Department. These include restoring morale to a corps of demoralized professional diplomats, leading the international HIV/AIDS initiative, resolving a crisis with China, and blocking efforts to tie Saddam Hussein to the 9/11 attacks on the United States. Life after diplomatic service After retiring from the role of Secretary of State, Powell returned to private life. In April 2005, he was privately telephoned by Republican senators Lincoln Chafee and Chuck Hagel, and expressed reservations about the nomination of John Bolton as ambassador to the United Nations, but refrained from advising the senators to oppose Bolton (Powell had clashed with Bolton during Bush's first term). The decision was viewed as potentially dealing significant damage to Bolton's chances of confirmation. Bolton was ultimately placed in the position via a recess appointment because of the strong opposition in the Senate. On April 28, 2005, an opinion piece in The Guardian by Sidney Blumenthal (a former top aide to President Bill Clinton) claimed that Powell was in fact "conducting a campaign" against Bolton because of the acrimonious battles they had had while working together, which among other things had resulted in Powell cutting Bolton out of talks with Iran and Libya after complaints about Bolton's involvement from the British. Blumenthal added that "The foreign relations committee has discovered that Bolton made a highly unusual request and gained access to 10 intercepts by the National Security Agency. Staff members on the committee believe that Bolton was probably spying on Powell, his senior advisors and other officials reporting to him on diplomatic initiatives that Bolton opposed."
In September 2005, Powell criticized the response to Hurricane Katrina, and said thousands of people were not properly protected because they were poor, rather than because they were black. On January 5, 2006, he participated in a meeting at the White House of former Secretaries of Defense and State to discuss United States foreign policy with Bush administration officials. In September 2006, Powell sided with more moderate Senate Republicans in supporting more rights for detainees and opposing President Bush's terrorism bill. He backed Senators John Warner, John McCain, and Lindsey Graham in their statement that U.S. military and intelligence personnel in future wars would suffer for abuses committed in 2006 by the U.S. in the name of fighting terrorism. Powell stated that "The world is beginning to doubt the moral basis of our fight against terrorism." In 2007, he joined the board of directors of Steve Case's new company Revolution Health. Powell also served on the Council on Foreign Relations board of directors. In 2008, Powell served as a spokesperson for National Mentoring Month, a campaign held each January to recruit volunteer mentors for at-risk youth. Soon after Barack Obama's 2008 election, Powell was mentioned as a possible cabinet member, but he was not nominated. In September 2009, Powell advised President Obama against surging U.S. forces in Afghanistan; the president announced the surge the following December. On March 14, 2014, Salesforce.com announced that Powell had joined its board of directors. Political positions Powell was a moderate Republican from 1995 until 2021, when he became an independent following the 2021 United States Capitol attack. He was pro-choice regarding abortion, and expressed some support for an assault weapons ban. He stated in his autobiography that he supported affirmative action that levels the playing field without giving a leg up to undeserving persons because of race.
Powell originally suggested the don't ask, don't tell policy to President Clinton, though he later supported its repeal as proposed by Robert Gates and Admiral Mike Mullen in January 2010, saying "circumstances had changed". According to Mark Perry, however, Powell actually disagreed with the policy from the beginning. Powell gained attention in 2004 when, in a conversation with British Foreign Secretary Jack Straw, he reportedly referred to neoconservatives within the Bush administration as "fucking crazies". In a September 2006 letter to John McCain, Powell expressed opposition to President Bush's push for military tribunals of those formerly and currently classified as enemy combatants. Specifically, he objected to the effort in Congress to "redefine Common Article 3 of the Geneva Convention." He also asserted: "The world is beginning to doubt the moral basis of our fight against terrorism." Justifying the Iraq War At the 2007 Aspen Ideas Festival in Colorado, Powell stated that he had spent two and a half hours explaining to President Bush "the consequences of going into an Arab country and becoming the occupiers". During this discussion, he insisted that the U.S. appeal to the United Nations first, but if diplomacy failed, he would support the invasion: "I also had to say to him that you are the President, you will have to make the ultimate judgment, and if the judgment is this isn't working and we don't think it is going to solve the problem, then if military action is undertaken I'm with you, I support you." In a 2008 interview on CNN, Powell reiterated his support for the 2003 decision to invade Iraq in the context of his endorsement of Barack Obama, stating: "My role has been very, very straightforward. I wanted to avoid a war. The president [Bush] agreed with me. We tried to do that. We couldn't get it through the U.N. and when the president made the decision, I supported that decision. And I've never blinked from that. 
I've never said I didn't support a decision to go to war." Powell's position on the Iraq War troop surge of 2007 was less consistent. In December 2006, he expressed skepticism that the strategy would work and doubted whether the U.S. military had enough troops to carry it out successfully. He stated: "I am not persuaded that another surge of troops into Baghdad for the purposes of suppressing this communitarian violence, this civil war, will work." Following his endorsement of Barack Obama in October 2008, however, Powell praised General David Petraeus and U.S. troops, as well as the Iraqi government, concluding that "it's starting to turn around". By mid-2009, he had concluded that a surge of U.S. forces in Iraq should have come sooner, perhaps in late 2003. Role in presidential election of 2008 Powell donated the maximum allowable amount to John McCain's campaign in the summer of 2007, and in early 2008 his name was listed as a possible running mate for Republican nominee McCain during the 2008 U.S. presidential election. McCain won the Republican presidential nomination, but the Democrats nominated the first black candidate, Senator Barack Obama of Illinois. On October 19, 2008, Powell announced his endorsement of Obama during a Meet the Press interview, citing "his ability to inspire, because of the inclusive nature of his campaign, because he is reaching out all across America, because of who he is and his rhetorical abilities", in addition to his "style and substance." He additionally referred to Obama as a "transformational figure". Powell further questioned McCain's judgment in appointing Sarah Palin as the vice presidential candidate, stating that despite the fact that she is admired, "now that we have had a chance to watch her for some seven weeks, I don't believe she's ready to be president of the United States, which is the job of the vice president." He said that Obama's choice for vice-president, Joe Biden, was ready to be president.
He also added that he was "troubled" by the "false intimations that Obama was Muslim." Powell stated that "[Obama] is a Christian; he's always been a Christian... But the really right answer is, what if he is? Is there something wrong with being a Muslim in this country? The answer's no, that's not America." Powell then mentioned Kareem Rashad Sultan Khan, a Muslim American soldier in the U.S. Army who served and died in the Iraq War. He later stated, "Over the last seven weeks, the approach of the Republican Party has become narrower and narrower [...] I look at these kind of approaches to the campaign, and they trouble me." Powell concluded his Sunday morning talk show comments, "It isn't easy for me to disappoint Sen. McCain in the way that I have this morning, and I regret that [...] I think we need a transformational figure. I think we need a president who is a generational change and that's why I'm supporting Barack Obama, not out of any lack of respect or admiration for Sen. John McCain." Later in a December 12, 2008, CNN interview with Fareed Zakaria, Powell reiterated his belief that during the last few months of the campaign, Palin pushed the Republican party further to the right and had a polarizing impact on it. When asked why he was still a Republican on Meet the Press he said, "I'm still a Republican. And I think the Republican Party needs me more than the Democratic Party needs me. And you can be a Republican and still feel strongly about issues such as immigration, and improving our education system, and doing something about some of the social problems that exist in our society and our country. I don't think there's anything inconsistent with this." Views on the Obama administration In a July 2009 CNN interview with John King, Powell expressed concern over President Obama increasing the size of the federal government and the size of the federal budget deficit.
In September 2010, he criticized the Obama administration for not focusing "like a razor blade" on the economy and job creation. Powell reiterated that Obama was a "transformational figure." In a video that aired on CNN.com in November 2011, Colin Powell said in reference to Barack Obama, "many of his decisions have been quite sound. The financial system was put back on a stable basis." On October 25, 2012, 12 days before the presidential election, he gave his endorsement to President Obama for re-election during a broadcast of CBS This Morning. He considered the administration to have had success and achieved progress in foreign and domestic policy arenas. As additional reasons for his endorsement, Powell cited the changing positions and perceived lack of thoughtfulness of Mitt Romney on foreign affairs, and a concern for the validity of Romney's economic plans. In an interview with ABC's Diane Sawyer and George Stephanopoulos during ABC's coverage of President Obama's second inauguration, Powell criticized members of the Republican Party who spread "things that demonize the president". He called on GOP leaders to publicly denounce such talk. 2016 presidential election Powell was very vocal on the state of the Republican Party. Speaking at a Washington Ideas forum in early October 2015, he warned the audience that the Republican Party had begun a move to the fringe right, lessening the chances of a Republican White House in the future. He also remarked on Republican presidential candidate Donald Trump's statements regarding immigrants, noting that there were many immigrants working in Trump hotels. In March 2016, Powell denounced the "nastiness" of the 2016 Republican primaries during an interview on CBS This Morning. He compared the race to reality television, and stated that the campaign had gone "into the mud." In August 2016, Powell accused the Hillary Clinton campaign of trying to pin her email controversy on him. 
Speaking to People magazine, Powell said, "The truth is, she was using [the private email server] for a year before I sent her a memo telling her what I did." On September 13, 2016, emails were obtained that revealed Powell's private communications regarding both Donald Trump and Hillary Clinton. Powell privately reiterated his comments regarding Clinton's email scandal, writing, "I have told Hillary's minions repeatedly that they are making a mistake trying to drag me in, yet they still try," and complaining that "Hillary's mafia keeps trying to suck me into it" in another email. In another email discussing Clinton's controversy, Powell said she should have told everyone what she did "two years ago", and said that she has not "been covering herself with glory." Writing on the 2012 Benghazi attack controversy surrounding Clinton, Powell said to then U.S. Ambassador Susan Rice, "Benghazi is a stupid witch hunt." Commenting on Clinton in a general sense, he mused that "Everything HRC touches she kind of screws up with hubris", and in another email stated "I would rather not have to vote for her, although she is a friend I respect." Powell called Donald Trump a "national disgrace", with "no sense of shame". He wrote of Trump's role in the birther movement, which he called "racist". He suggested the media ignore Trump: "To go on and call him an idiot just emboldens him." The emails were obtained by the media as the result of a hack. Powell endorsed Clinton on October 25, 2016, stating it was "because I think she's qualified, and the other gentleman is not qualified". Despite not running in the election, Powell received three electoral votes for president from faithless electors in Washington who had pledged to vote for Clinton, coming in third overall. After Barack Obama, he was the second black person to receive electoral votes in a presidential election. 
Views on the Trump administration In an interview in October 2019, Powell warned that the GOP needed to "get a grip" and put the country before their party, standing up to President Trump rather than worrying about political fallout. "When they see things that are not right, they need to say something about it because our foreign policy is in shambles right now, in my humble judgment, and I see things happening that are hard to understand," Powell said. On June 7, 2020, Powell announced he would be voting for former Vice President Joe Biden in the 2020 presidential election. In August, Powell delivered a speech in support of Biden's candidacy at the 2020 Democratic National Convention. In January 2021, after the Capitol building was attacked by Trump supporters, Powell told CNN "I can no longer call myself a fellow Republican." Personal life Powell married Alma Johnson on August 25, 1962. Their son, Michael Powell, was the chairman of the Federal Communications Commission (FCC) from 2001 to 2005. His daughters are Linda Powell, an actress, and Annemarie Powell. As a hobby, Powell restored old Volvo and Saab automobiles. In 2013, he faced questions about his relationship with the Romanian diplomat Corina Crețu, after a hacked AOL email account had been made public. He acknowledged a "very personal" email relationship but denied further involvement. He was an Episcopalian. Death On October 18, 2021, Powell, who was being treated for multiple myeloma, died at Walter Reed National Military Medical Center of complications from COVID-19 at the age of 84. He had been vaccinated, but his myeloma compromised his immune system. Powell and Frank Carlucci formed a close friendship, referring to each other by first name in private, as Powell refused any sort of first-name basis in an official capacity. It was on Powell's advice that Roy Benavidez received the Medal of Honor, after the presentation had been ignored by the Carter administration.
Powell also declined an offer from Secretary of the Army John O. Marsh Jr. to be his under secretary due to his reluctance to assume a political appointment; James R. Ambrose was selected instead. Intent on attaining a division command, Powell petitioned Carlucci and Army chief of staff Edward C. Meyer for reassignment away from the Pentagon, with Meyer appointing Powell as assistant division commander for operations and training of the 4th Infantry Division at Fort Carson, Colorado under Major General John W. Hudachek. After he left Fort Carson, Powell became senior military assistant to Secretary of Defense Caspar Weinberger, whom he assisted during the 1983 invasion of Grenada and the 1986 airstrike on Libya. Under Weinberger, Powell was also involved in the unlawful transfer of U.S.-made TOW anti-tank missiles and Hawk anti-aircraft missiles from Israel to Iran as part of the criminal conspiracy that would later become known as the Iran–Contra affair. In November 1985, Powell solicited and delivered to Weinberger a legal assessment that the transfer of Hawk missiles to Israel or Iran, without Congressional notification, would be "a clear violation" of the law. Despite this, thousands of TOW missiles and hundreds of Hawk missiles and spare parts were transferred from Israel to Iran until the venture was exposed in a Lebanese magazine, Ash-Shiraa, in November 1986. According to Iran-Contra Independent Counsel Lawrence E. Walsh, when questioned by Congress, Powell "had given incomplete answers" concerning notes withheld by Weinberger, and the activities of Powell and others in concealing the notes "seemed corrupt enough to meet the new, poorly defined test of obstruction." Following his resignation as Secretary of Defense, Weinberger was indicted on five felony charges, including one count of obstruction of Congress for concealing the notes. Powell was never indicted by the Independent Counsel in connection with the Iran-Contra affair.
In 1986, Powell took over the command of V Corps in Frankfurt, Germany, from Robert Lewis "Sam" Wetzel. The next year, he served as United States Deputy National Security Advisor, under Frank Carlucci. Following the Iran–Contra scandal, Powell became, at the age of 49, Ronald Reagan's National Security Advisor, serving from 1987 to 1989 while retaining his Army commission as a lieutenant general. He helped negotiate a number of arms treaties with Mikhail Gorbachev, the leader of the Soviet Union. In April 1989, after his tenure with the National Security Council, Powell was promoted to four-star general under President George H. W. Bush and briefly served as the Commander in Chief, Forces Command (FORSCOM), headquartered at Fort McPherson, Georgia, overseeing all Army, Army Reserve, and National Guard units in the Continental U.S., Hawaii, and Puerto Rico. He became the third general since World War II to reach four-star rank without ever serving as a division commander, joining Dwight D. Eisenhower and Alexander Haig. Later that year, President George H. W. Bush selected him as Chairman of the Joint Chiefs of Staff. Chairman of the Joint Chiefs of Staff Powell's last military assignment, from October 1, 1989, to September 30, 1993, was as the 12th chairman of the Joint Chiefs of Staff, the highest military position in the Department of Defense. At age 52, he became the youngest officer, and first Afro-Caribbean American, to serve in this position. Powell was also the first JCS chair who received his commission through ROTC. During this time, Powell oversaw responses to 28 crises, including the invasion of Panama in 1989 to remove General Manuel Noriega from power and Operation Desert Storm in the 1991 Persian Gulf War. During these events, Powell earned the nickname "the reluctant warrior"—although Powell himself disputed this label, and spoke in favor of the first Bush administration's Gulf War policies. 
As a military strategist, Powell advocated an approach to military conflicts that maximizes the potential for success and minimizes casualties. A component of this approach is the use of overwhelming force, which he applied to Operation Desert Storm in 1991. His approach has been dubbed the Powell Doctrine. Powell continued as chairman of the JCS into the Clinton presidency. However, as a realist, he considered himself a bad fit for an administration largely made up of liberal internationalists. He clashed with then-U.S. ambassador to the United Nations Madeleine Albright over the Bosnian crisis, as he opposed any military intervention that did not involve U.S. interests. Powell also regularly clashed with Secretary of Defense Leslie Aspin, whom he was initially hesitant to support after Aspin was nominated by President Clinton. During a lunch meeting between Powell and Aspin in preparation for Operation Gothic Serpent, Aspin was more focused on eating his salad than on listening to Powell's presentation on military operations. The incident left Powell increasingly irritated with Aspin and contributed to Powell's early resignation on September 30, 1993. Powell was succeeded temporarily by Vice Chairman of the Joint Chiefs of Staff Admiral David E. Jeremiah, who took the position as Acting Chairman of the Joint Chiefs of Staff. Soon after Powell's resignation, on October 3–4, 1993, the Battle of Mogadishu, the aim of which was to capture Somali warlord Mohamed Farrah Aidid, was initiated and ended in disaster. Powell later defended Aspin, saying in part that he could not fault Aspin for Aspin's decision to remove a Lockheed AC-130 from the list of armaments requested for the operation. During his chairmanship of the JCS, there was discussion of awarding Powell a fifth star, granting him the rank of General of the Army.
But even in the wake of public and Congressional pressure to do so, Clinton-Gore presidential transition team staffers decided against it. Dates of rank Awards and decorations Badges Medals and ribbons Foreign decorations Potential presidential candidate Powell's experience in military matters made him a very popular figure with both American political parties. Many Democrats admired his moderate stance on military matters, while many Republicans saw him as a great asset associated with the successes of past Republican administrations. Put forth as a potential Democratic vice presidential nominee in the 1992 U.S. presidential election or even potentially replacing Vice President Dan Quayle as the Republican vice presidential nominee, Powell eventually declared himself a Republican and began to campaign for Republican candidates in 1995. He was touted as a possible opponent of Bill Clinton in the 1996 U.S. presidential election, possibly capitalizing on a split conservative vote in Iowa and even leading New Hampshire polls for the GOP nomination, but Powell declined, citing a lack of passion for politics. Powell defeated Clinton 50–38 in a hypothetical match-up proposed to voters in the exit polls conducted on Election Day. Despite not standing in the race, Powell won the Republican New Hampshire Vice-Presidential primary on write-in votes. In 1997, Powell founded America's Promise with the objective of helping children from all socioeconomic sectors. That same year saw the establishment of The Colin L. Powell Center for Leadership and Service. The mission of the center is to "prepare new generations of publicly engaged leaders from populations previously underrepresented in public service and policy circles, to build a strong culture of civic engagement at City College, and to mobilize campus resources to meet pressing community needs and serve the public good." Powell was mentioned as a potential candidate in the 2000 U.S. 
presidential election, but again decided against running. Once Texas Governor George W. Bush secured the Republican nomination, Powell endorsed him for president and spoke at the 2000 Republican National Convention. Bush won the general election and appointed Powell as secretary of state in 2001. Secretary of State (2001–2005) President-elect George W. Bush named Powell as his nominee to be secretary of state in a ceremony at his ranch in Crawford, Texas on December 16, 2000. This made Powell the first person to formally accept a Cabinet post in the Bush administration, as well as the first black United States secretary of state. As secretary of state, Powell was perceived as moderate. Powell was unanimously confirmed by the United States Senate by voice vote on January 20, 2001, and ceremonially sworn in on January 26. Over the course of his tenure he traveled less than any other U.S. Secretary of State in thirty years. On September 11, 2001, Powell was in Lima, Peru, meeting with President Alejandro Toledo and attending a meeting of foreign ministers of the Organization of American States. After the September 11 attacks, Powell's job became of critical importance in managing the United States of America's relationships with foreign countries in order to secure a stable coalition in the War on Terrorism. 2003 U.S. invasion of Iraq Powell came under fire for his role in building the case for the 2003 invasion of Iraq. A 2004 report by the Iraq Survey Group concluded that the evidence that Powell offered to support the allegation that the Iraqi government possessed weapons of mass destruction (WMDs) was inaccurate. In a press statement on February 24, 2001, Powell had said that sanctions against Iraq had prevented the development of any weapons of mass destruction by Saddam Hussein.
Powell favored involving the international community in the invasion, as opposed to a unilateral approach. Powell's chief role was to garner international support for a multi-national coalition to mount the invasion. To this end, Powell addressed a plenary session of the United Nations Security Council on February 5, 2003, to argue in favor of military action. Citing numerous anonymous Iraqi defectors, Powell asserted that "there can be no doubt that Saddam Hussein has biological weapons and the capability to rapidly produce more, many more." Powell also stated that there was "no doubt in my mind" that Saddam was working to obtain key components to produce nuclear weapons. Powell stated that he gave his speech to the UN on "four days' notice". Britain's Channel 4 News reported soon afterwards that a U.K. intelligence dossier that Powell had referred to as a "fine paper" during his presentation had been based on old material and plagiarized an essay by American graduate student Ibrahim al-Marashi. A Senate report on intelligence failures would later detail the intense debate that went on behind the scenes on what to include in Powell's speech. State Department analysts had found dozens of factual problems in drafts of the speech. Some of the claims were taken out, but others were left in, such as claims based on the yellowcake forgery. The administration came under fire for having acted on faulty intelligence, particularly that which was single-sourced to the informant known as Curveball. Powell later recounted how Vice President Dick Cheney had joked with him before he gave the speech, telling him, "You've got high poll ratings; you can afford to lose a few points." Powell's longtime aide-de-camp and Chief of Staff from 1989 to 2003, Colonel Lawrence Wilkerson, later characterized Cheney's view of Powell's mission as to "go up there and sell it, and we'll have moved forward a peg or two. Fall on your damn sword and kill yourself, and I'll be happy, too." 
In September 2005, Powell was asked about the speech during an interview with Barbara Walters and responded that it was a "blot" on his record. He went on to say, "It will always be a part of my record. It was painful. It's painful now." Wilkerson later said that he inadvertently participated in a hoax on the American people in preparing Powell's erroneous testimony before the United Nations Security Council. As recounted in Soldier: The Life of Colin Powell, in 2001 before 9/11, Richard A. Clarke, a National Security Council holdover from the Clinton administration, pushed the new Bush administration for action against al-Qaeda in Afghanistan, a move opposed by Paul Wolfowitz who advocated for the creation of a "U.S.-protected, opposition-run 'liberated' enclave around the southern Iraqi city of Basra". Powell referred to Wolfowitz and other top members of Donald Rumsfeld's staff "as the 'JINSA crowd,' " in reference to the pro-Israel Jewish Institute for National Security Affairs. Again invoking "the JINSA crowd" Powell also attributed the decision to go to war in Iraq in 2003 to the neoconservative belief that regime change in Baghdad "was a first and necessary stop on the road to peace in Jerusalem." A review of Soldier by Tim Rutten criticized Powell's remarks as a "blot on his record", accusing Powell of slandering "neoconservatives in the Defense Department -- nearly all of them Jews" with "old and wholly unmeritorious allegations of dual loyalty". A 2007 article about fears that Jewish groups "will be accused of driving America into a war with the regime in Tehran" cited the DeYoung biography and quoted JINSA's then-executive director, Thomas Neumann, as "surprised" Powell "would single out a Jewish group when naming those who supported the war." Neumann said, "I am not accusing Powell of anything, but these are words that the antisemites will use in the future". 
Once Saddam Hussein had been deposed, Powell's role was once again to establish a working international coalition, this time to assist in the rebuilding of post-war Iraq. On September 13, 2004, Powell testified before the Senate Governmental Affairs Committee, acknowledging that the sources who provided much of the information in his February 2003 UN presentation were "wrong" and that it was "unlikely" that any stockpiles of WMDs would be found. Claiming that he was unaware that some intelligence officials questioned the information prior to his presentation, Powell pushed for reform in the intelligence community, including the creation of a national intelligence director who would assure that "what one person knew, everyone else knew". Other foreign policy issues Additionally, Powell was critical of other aspects of past U.S. foreign policy, such as its support for the 1973 Chilean coup d'état. In one of two separate interviews in 2003, Powell said of the 1973 coup: "I can't justify or explain the actions and decisions that were made at that time. It was a different time. There was a great deal of concern about communism in this part of the world. Communism was a threat to the democracies in this part of the world. It was a threat to the United States." In the other interview, however, he simply stated, "With respect to your earlier comment about Chile in the 1970s and what happened with Mr. Allende, it is not a part of American history that we're proud of." In September 2004, Powell described the killings in Darfur as "genocide", thus becoming the first cabinet member to apply the term to an ongoing conflict. In November the president "forced Powell to resign", according to Walter LaFeber. Powell announced his resignation as Secretary of State on November 15, 2004, shortly after Bush was reelected. Bush's desire for Powell to resign was communicated to Powell via a phone call by Bush's chief of staff, Andrew Card.
The following day, Bush nominated National Security Advisor Condoleezza Rice as Powell's successor. In mid-November, Powell stated that he had seen new evidence suggesting that Iran was adapting missiles for a nuclear delivery system. The accusation came at the same time as the settlement of an agreement between Iran, the IAEA, and the European Union.
Chlorophyll f was reported in 2010 from cyanobacteria and other oxygenic microorganisms that form stromatolites; a molecular formula of C55H70O6N4Mg and a structure of (2-formyl)-chlorophyll a were deduced based on NMR, optical and mass spectra. Photosynthesis Chlorophyll is vital for photosynthesis, which allows plants to absorb energy from light. Chlorophyll molecules are arranged in and around photosystems that are embedded in the thylakoid membranes of chloroplasts. In these complexes, chlorophyll serves three functions. The function of the vast majority of chlorophyll (up to several hundred molecules per photosystem) is to absorb light. Having done so, these same centers execute their second function: the transfer of that light energy by resonance energy transfer to a specific chlorophyll pair in the reaction center of the photosystems. This pair effects the final function of chlorophylls, charge separation, leading to biosynthesis. The two currently accepted photosystem units are photosystem II and photosystem I, which have their own distinct reaction centres, named P680 and P700, respectively. These centres are named after the wavelength (in nanometers) of their red-peak absorption maximum. The identity, function and spectral properties of the types of chlorophyll in each photosystem are distinct and determined by each other and by the protein structure surrounding them. Once extracted from the protein into a solvent (such as acetone or methanol), these chlorophyll pigments can be separated into chlorophyll a and chlorophyll b. The function of the reaction center of chlorophyll is to absorb light energy and transfer it to other parts of the photosystem. The absorbed energy of the photon is transferred to an electron in a process called charge separation. The removal of the electron from the chlorophyll is an oxidation reaction. The chlorophyll donates the high energy electron to a series of molecular intermediates called an electron transport chain.
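Since P680 and P700 are named for their red-peak absorption wavelengths, the photon energy each center absorbs can be computed with the standard relation E = hc/λ. The following sketch is illustrative only (the constants and function name are not from the source); it simply converts the two quoted wavelengths to electronvolts.

```python
# Illustrative calculation, not from the source: photon energy at the
# red-peak wavelengths that give P680 and P700 their names, via E = h*c/lambda.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon at the given wavelength, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

print(round(photon_energy_ev(680), 2))  # P680: ~1.82 eV
print(round(photon_energy_ev(700), 2))  # P700: ~1.77 eV
```

As expected, the longer-wavelength P700 center absorbs slightly lower-energy photons than P680.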
The charged reaction-center chlorophyll (P680+) is then reduced back to its ground state by accepting an electron stripped from water. The electron that reduces P680+ ultimately comes from the oxidation of water into O2 and H+ through several intermediates. This reaction is how photosynthetic organisms such as plants produce O2 gas, and it is the source of practically all the O2 in Earth's atmosphere. Photosystem I typically works in series with Photosystem II; thus the P700+ of Photosystem I is usually reduced by electrons that, via many intermediates in the thylakoid membrane, ultimately come from Photosystem II. Electron transfer reactions in the thylakoid membranes are complex, however, and the source of electrons used to reduce P700+ can vary. The electron flow produced by the reaction-center chlorophyll pigments is used to pump H+ ions across the thylakoid membrane, setting up a chemiosmotic potential used mainly in the production of ATP (stored chemical energy) or to reduce NADP+ to NADPH. NADPH is a universal agent used to reduce CO2 into sugars as well as in other biosynthetic reactions. Reaction-center chlorophyll–protein complexes are capable of directly absorbing light and performing charge separation events without the assistance of other chlorophyll pigments, but the probability of that happening under a given light intensity is small. Thus, the other chlorophylls in the photosystem and the antenna pigment proteins all cooperatively absorb and funnel light energy to the reaction center. Besides chlorophyll a, there are other pigments, called accessory pigments, which occur in these pigment–protein antenna complexes. Chemical structure Chlorophylls are numerous in types, but all are defined by the presence of a fifth ring beyond the four pyrrole-like rings. Most chlorophylls are classified as chlorins, which are reduced relatives of porphyrins (found in hemoglobin).
They share a common biosynthetic pathway with porphyrins, including the precursor uroporphyrinogen III. Unlike hemes, which feature iron at the center of a porphyrin-based tetrapyrrole ring, chlorophylls have a central magnesium atom coordinated by a chlorin, a partially reduced porphyrin. For the structures depicted in this article, some of the ligands attached to the Mg2+ center are omitted for clarity. The chlorin ring can have various side chains, usually including a long phytyl chain. The most widely distributed form in terrestrial plants is chlorophyll a. The only difference between chlorophyll a and chlorophyll b is that chlorophyll b carries a formyl (−CHO) group where chlorophyll a has a methyl (−CH3) group on the chlorin ring. Chlorophyll a fluoresces at 673 nm (maximum) and 726 nm. The peak molar absorption coefficient of chlorophyll a exceeds 10⁵ M⁻¹ cm⁻¹, which is among the highest for small-molecule organic compounds. In 90% acetone–water, the peak absorption wavelengths of chlorophyll a are 430 nm and 664 nm; peaks for chlorophyll b are 460 nm and 647 nm; peaks for chlorophyll c1 are 442 nm and 630 nm; peaks for chlorophyll c2 are 444 nm and 630 nm; peaks for chlorophyll d are 401 nm, 455 nm and 696 nm. By measuring the absorption of light in the red and far-red regions, it is possible to estimate the concentration of chlorophyll within a leaf. Ratio fluorescence emission can also be used to measure chlorophyll content. By exciting chlorophyll a fluorescence at a lower wavelength, the ratio of chlorophyll fluorescence emission at 735 nm to that at 700 nm provides a linear relationship with chlorophyll content when compared with chemical testing. The ratio F735/F700 gave a correlation of r² = 0.96 against chemical testing in the range from 41 mg m⁻² up to 675 mg m⁻². Gitelson also developed a formula for direct readout of chlorophyll content in mg m⁻². The formula provided a reliable method of measuring chlorophyll content from 41 mg m⁻² up to 675 mg m⁻² with a correlation r² value of 0.95.
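To make the absorption figures concrete, concentration can be estimated from a measured absorbance with the Beer–Lambert law, A = ε·c·l. Below is a minimal sketch, assuming a standard 1 cm cuvette and the ~10⁵ M⁻¹ cm⁻¹ peak coefficient quoted above; the absorbance value in the example is illustrative, not from the text.

```python
# Beer-Lambert sketch: c = A / (epsilon * l).
# EPSILON_664 is the ~1e5 M^-1 cm^-1 peak molar absorption coefficient of
# chlorophyll a cited in the text; the absorbance used below is a made-up
# example value.

EPSILON_664 = 1.0e5   # M^-1 cm^-1, peak coefficient of chlorophyll a at 664 nm
PATH_LENGTH_CM = 1.0  # cm, standard spectrophotometer cuvette

def chlorophyll_a_molarity(absorbance, epsilon=EPSILON_664, path_cm=PATH_LENGTH_CM):
    """Concentration in mol/L from the Beer-Lambert law: c = A / (epsilon * l)."""
    return absorbance / (epsilon * path_cm)

# An absorbance of 0.5 at 664 nm corresponds to a roughly 5 micromolar solution.
print(f"{chlorophyll_a_molarity(0.5) * 1e6:.1f} uM")
```

Because the coefficient is so large, even dilute (micromolar) chlorophyll solutions give easily measurable absorbances, which is why spectrophotometry is the routine assay for chlorophyll content.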
Biosynthesis In some plants, chlorophyll is derived from glutamate and is synthesised along a branched biosynthetic pathway that is shared with heme and siroheme. Chlorophyll synthase is the enzyme that completes the biosynthesis of chlorophyll a by catalysing the reaction chlorophyllide a + phytyl diphosphate → chlorophyll a + diphosphate. This forms an ester of the carboxylic acid group in chlorophyllide a with the 20-carbon diterpene alcohol phytol. Chlorophyll b is made by the same enzyme acting on chlorophyllide b. In angiosperms, the later steps in the biosynthetic pathway are light-dependent, and such plants are pale (etiolated) if grown in darkness. Non-vascular plants and green algae have an additional light-independent enzyme and grow green even in darkness. Chlorophyll itself is bound to proteins and can transfer the absorbed energy in the required direction. Protochlorophyllide, one of the biosynthetic intermediates, occurs mostly in the free form and, under light conditions, acts as a photosensitizer, forming highly toxic free radicals. Hence, plants need an efficient mechanism for regulating the amount of this chlorophyll precursor. In angiosperms, this is done at the step of aminolevulinic acid (ALA), one of the intermediate compounds in the biosynthesis pathway. Plants fed exogenous ALA accumulate high and toxic levels of protochlorophyllide; so do mutants with a damaged regulatory system. Senescence and the chlorophyll cycle The process of plant senescence involves the degradation of chlorophyll: for example, the enzyme chlorophyllase hydrolyses the phytyl sidechain to reverse the reaction in which chlorophylls are biosynthesised from chlorophyllide a or b. Since chlorophyllide a can be converted to chlorophyllide b and the latter can be re-esterified to chlorophyll b, these processes allow cycling between chlorophylls a and b. Moreover, chlorophyll b can be directly reduced back to chlorophyll a, completing the cycle.
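The a/b interconversions just described close into a loop. As a minimal sketch (not from the source), the cycle can be modeled as a directed graph and checked for closure; the edge set below is a simplification that keeps only the conversions named in the text.

```python
# The chlorophyll cycle, as a directed graph of pigment conversions.
# Simplification: chlorophyllase also acts on chlorophyll b directly,
# which is omitted here.

CONVERSIONS = {
    "chlorophyll a": ["chlorophyllide a"],     # chlorophyllase removes the phytyl chain
    "chlorophyllide a": ["chlorophyllide b"],  # conversion between the chlorophyllides
    "chlorophyllide b": ["chlorophyll b"],     # re-esterification with phytol
    "chlorophyll b": ["chlorophyll a"],        # direct reduction back to chlorophyll a
}

def reachable(start):
    """All pigments reachable from `start` by following the conversions."""
    seen, stack = set(), [start]
    while stack:
        for nxt in CONVERSIONS.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Every pigment, including chlorophyll a itself, is reachable from
# chlorophyll a, confirming the conversions form a closed cycle.
print(sorted(reachable("chlorophyll a")))
```

Traversing the graph from chlorophyll a visits all four pigments and returns to the start, which is exactly the "completing the cycle" claim in the text.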
In later stages of senescence, chlorophyllides are converted to a group of colourless tetrapyrroles known as nonfluorescent chlorophyll catabolites (NCCs). These compounds have also been identified in ripening fruits, and they give characteristic autumn colours to deciduous plants. Defective environments can cause chlorosis Chlorosis is a condition in which leaves produce insufficient chlorophyll, turning them yellow. Chlorosis can be caused by a nutrient deficiency of iron (called iron chlorosis) or by a shortage of magnesium or nitrogen. Soil pH sometimes plays a role in nutrient-caused chlorosis; many plants are adapted to grow in soils with specific pH levels, and their ability to absorb nutrients from the soil can depend on this. Chlorosis can also be caused by pathogens, including viruses, bacteria and fungal infections, or by sap-sucking insects. Complementary light absorbance of anthocyanins Anthocyanins are other plant pigments. The absorbance pattern responsible for the red color of anthocyanins may be complementary to that of green chlorophyll in photosynthetically active tissues such as young Quercus coccifera leaves. It may protect the leaves from attack by herbivores that may be attracted by green color. Distribution The chlorophyll maps show milligrams of chlorophyll per cubic meter of seawater.