Carotenes are responsible for the orange colour of carrots and for the colours of many other fruits, vegetables and fungi (for example, sweet potatoes, chanterelle and orange cantaloupe melon). Carotenes are also responsible for the orange (but not all of the yellow) colours in dry foliage. They also (in lower concentrations) impart the yellow coloration to milk-fat and butter. Omnivorous animal species that are relatively poor converters of coloured dietary carotenoids to colourless retinoids have yellow-coloured body fat, as a result of carotenoid retention from the vegetable portion of their diet. The typical yellow-coloured fat of humans and chickens is a result of fat storage of carotenes from their diets. Carotenes contribute to photosynthesis by transmitting the light energy they absorb to chlorophyll. They also protect plant tissues by helping to absorb the energy from singlet oxygen, an excited form of the oxygen molecule O2 which is formed during photosynthesis. β-Carotene is composed of two retinyl groups, and is broken down in the mucosa of the human small intestine by β-carotene 15,15'-monooxygenase to retinal, a form of vitamin A. β-Carotene can be stored in the liver and body fat and converted to retinal as needed, thus making it a form of vitamin A for humans and some other mammals. The carotenes α-carotene and γ-carotene, due to their single retinyl group (β-ionone ring), also have some vitamin A activity (though less than β-carotene), as does the xanthophyll carotenoid β-cryptoxanthin. All other carotenoids, including lycopene, have no beta-ring and thus no vitamin A activity (although they may have antioxidant activity and thus biological activity in other ways). Animal species differ greatly in their ability to convert retinyl (beta-ionone) containing carotenoids to retinals. Carnivores in general are poor converters of dietary ionone-containing carotenoids. Pure carnivores such as ferrets lack β-carotene 15,15'-monooxygenase and cannot convert any carotenoids to retinals at all (so carotenes are not a form of vitamin A for this species), while cats can convert a trace of β-carotene to retinol, although the amount is totally insufficient for meeting their daily retinol needs. Molecular structure Chemically, carotenes are polyunsaturated hydrocarbons containing 40 carbon atoms per molecule, variable numbers of hydrogen atoms, and no other elements. Some carotenes are terminated by hydrocarbon rings, on one or both ends of the molecule. All are coloured to the human eye, due to extensive systems of conjugated double bonds. Structurally, carotenes are tetraterpenes, meaning that they are synthesized biochemically from four 10-carbon terpene units, which in turn are formed from eight 5-carbon isoprene units. Carotenes are found in plants in two primary forms designated by letters of the Greek alphabet: alpha-carotene (α-carotene) and beta-carotene (β-carotene). Gamma-, delta-, epsilon-, and zeta-carotene (γ, δ, ε, and ζ-carotene) also exist. Since they are hydrocarbons, and therefore contain no oxygen, carotenes are fat-soluble and insoluble in water (in contrast with other carotenoids, the xanthophylls, which contain oxygen and thus are less chemically hydrophobic). History The discovery of carotene from carrot juice is credited to Heinrich Wilhelm Ferdinand Wackenroder, a finding made during a search for anthelmintics, which he published in 1831. He obtained it in small ruby-red flakes soluble in ether, which when dissolved in fats gave 'a beautiful yellow colour'.
William Christopher Zeise recognised its hydrocarbon nature in 1847, but his analyses gave him a composition of C5H8. It was Léon-Albert Arnaud in 1886 who confirmed its hydrocarbon nature and gave the formula C26H38, which is close to the theoretical composition of C40H56. In studies of the colouring matter in corpora lutea, also published in 1886, Adolf Lieben first came across carotenoids in animal tissue, but did not recognise the nature of the pigment. Johann Ludwig Wilhelm Thudichum, in 1868–1869, after stereoscopic spectral examination, applied the term 'luteine' (lutein) to this class of yellow crystallizable substances found in animals and plants. Richard Martin Willstätter, who gained the Nobel Prize in Chemistry in 1915, mainly for his work on chlorophyll, assigned the composition of C40H56, distinguishing it from the similar but oxygenated xanthophyll, C40H56O2. With Heinrich Escher, in 1910, lycopene was isolated from tomatoes and shown to be an isomer of carotene. Later work by Escher also differentiated the 'luteal' pigments in egg yolk from those of the carotenes in cow corpus luteum. Dietary sources The following foods contain carotenes in notable amounts: carrots, wolfberries (goji), cantaloupe, mangoes, red bell pepper, papaya, spinach, kale, sweet potato, tomato, dandelion greens, broccoli, collard greens, winter squash, pumpkin and cassava. Absorption from these foods is enhanced if they are eaten with fats, as carotenes are fat-soluble, and if the food is cooked for a few minutes until the plant cell wall splits and the colour is released into any liquid. 12 μg of dietary β-carotene supplies the equivalent of 1 μg of retinol, and 24 μg of α-carotene or β-cryptoxanthin provides the equivalent of 1 μg of retinol. Forms of carotene The two primary isomers of carotene, α-carotene and β-carotene, differ in the position of a double bond (and thus a hydrogen) in the cyclic group at one end of the molecule. β-Carotene is the more common form and can be found in yellow, orange, and green leafy fruits and vegetables. As a rule of thumb, the greater the intensity of the orange colour of the fruit or vegetable, the more β-carotene it contains. Carotene protects plant cells against the destructive effects of ultraviolet light. β-Carotene is an antioxidant. β-Carotene and physiology β-Carotene and cancer An article on the
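The retinol equivalences quoted above lend themselves to a short worked example. The sketch below is illustrative only: the function name and the sample intake figures are hypothetical and not from the source; it simply applies the stated 12:1 and 24:1 conversion ratios.

```python
# Illustrative only: apply the conversion ratios quoted above
# (12 ug beta-carotene, or 24 ug alpha-carotene/beta-cryptoxanthin,
#  per 1 ug of retinol equivalent).

UG_PER_UG_RETINOL = {
    "beta_carotene": 12.0,
    "alpha_carotene": 24.0,
    "beta_cryptoxanthin": 24.0,
}

def retinol_equivalents_ug(intake_ug: dict) -> float:
    """Total retinol equivalents (ug) for a dict of carotenoid intakes in ug."""
    return sum(amount / UG_PER_UG_RETINOL[name] for name, amount in intake_ug.items())

# Hypothetical meal: 6000 ug beta-carotene and 1200 ug alpha-carotene
# -> 6000/12 + 1200/24 = 550 ug retinol equivalents.
print(retinol_equivalents_ug({"beta_carotene": 6000, "alpha_carotene": 1200}))  # 550.0
```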
Epac1 and RAPGEF2. Role in eukaryotic cells cAMP and its associated kinases function in several biochemical processes, including the regulation of glycogen, sugar, and lipid metabolism. In eukaryotes, cyclic AMP works by activating protein kinase A (PKA, or cAMP-dependent protein kinase). PKA is normally inactive as a tetrameric holoenzyme, consisting of two catalytic and two regulatory units (C2R2), with the regulatory units blocking the catalytic centers of the catalytic units. Cyclic AMP binds to specific locations on the regulatory units of the protein kinase, and causes dissociation between the regulatory and catalytic subunits, thus enabling those catalytic units to phosphorylate substrate proteins. The active subunits catalyze the transfer of phosphate from ATP to specific serine or threonine residues of protein substrates. The phosphorylated proteins may act directly on the cell's ion channels, or may become activated or inhibited enzymes. Protein kinase A can also phosphorylate specific proteins that bind to promoter regions of DNA, causing increases in transcription. Not all protein kinases respond to cAMP. Several classes of protein kinases, including protein kinase C, are not cAMP-dependent. Further effects mainly depend on cAMP-dependent protein kinase, and they vary based on the type of cell. Still, there are some minor PKA-independent functions of cAMP, e.g., activation of calcium channels, providing a minor pathway by which growth hormone-releasing hormone causes a release of growth hormone. However, the view that the majority of the effects of cAMP are controlled by PKA is an outdated one. In 1998 a family of cAMP-sensitive proteins with guanine nucleotide exchange factor (GEF) activity was discovered. These are termed Exchange proteins activated by cAMP (Epac), and the family comprises Epac1 and Epac2. The mechanism of activation is similar to that of PKA: the GEF domain is usually masked by the N-terminal region containing the cAMP binding domain. When cAMP binds, the domain dissociates and exposes the now-active GEF domain, allowing Epac to activate small Ras-like GTPase proteins, such as Rap1. Additional role of secreted cAMP in social amoebae In the species Dictyostelium discoideum, cAMP acts outside the cell as a secreted signal. The chemotactic aggregation of cells is organized by periodic waves of cAMP that propagate between cells over distances as large as several centimetres. The waves are the result of a regulated production and secretion of extracellular cAMP and a spontaneous biological oscillator that initiates the waves at centers of territories. Role in bacteria In bacteria, the level of cAMP varies depending on the medium used for growth. In particular, cAMP is low when glucose is the carbon source. This occurs through inhibition of the cAMP-producing enzyme, adenylate cyclase, as a side-effect of glucose transport into the cell. The transcription factor cAMP receptor protein (CRP), also called CAP (catabolite gene activator protein), forms a complex with cAMP and is thereby activated to bind to DNA. CRP-cAMP increases expression of a large number of genes, including some encoding enzymes that can supply energy independent of glucose. cAMP, for example, is involved in the positive regulation of the lac operon. In an environment with a low glucose concentration, cAMP accumulates and binds to the allosteric site on CRP (cAMP receptor protein), a transcription activator protein.
The protein assumes its active shape and binds to a specific site upstream of the lac promoter, making it easier for RNA polymerase to bind to the adjacent promoter to start transcription of the lac operon, increasing the rate of transcription.
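The glucose, cAMP and CRP logic in the preceding two paragraphs can be summarised as a toy model. The sketch below is purely illustrative: the function, parameter names and fold-activation value are assumptions, not from the source, and it deliberately ignores the lactose/LacI repressor arm, which is not discussed here.

```python
# Toy illustration of the catabolite regulation described above:
# glucose uptake suppresses adenylate cyclase -> low cAMP -> CRP inactive,
# whereas low glucose -> cAMP accumulates -> CRP-cAMP binds upstream of the
# lac promoter -> RNA polymerase binds more easily -> more transcription.

def lac_transcription_rate(glucose_high: bool,
                           base_rate: float = 1.0,
                           crp_fold_activation: float = 20.0) -> float:
    """Relative lac operon transcription rate under this toy model."""
    camp_high = not glucose_high   # adenylate cyclase inhibited by glucose transport
    crp_active = camp_high         # CRP is activated by binding cAMP
    return base_rate * (crp_fold_activation if crp_active else 1.0)

print(lac_transcription_rate(glucose_high=True))   # 1.0  (glucose present, cAMP low)
print(lac_transcription_rate(glucose_high=False))  # 20.0 (cAMP-CRP activation)
```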
teacher attempted to sweep the fly away several times before he understood his pupil's prank. Many scholars now discount Vasari's claim that he took Giotto as his pupil, citing earlier sources that suggest otherwise. Around 1280, Cimabue painted the Maestà, originally displayed in the church of San Francesco at Pisa, but now at the Louvre. This work established a style that was followed subsequently by numerous artists, including Duccio di Buoninsegna in his Rucellai Madonna (in the past, wrongly attributed to Cimabue) as well as Giotto. Other works from the period, which were said to have heavily influenced Giotto, include a Flagellation (Frick Collection), mosaics for the Baptistery of Florence (now largely restored), the Maestà at the Santa Maria dei Servi in Bologna and the Madonna in the Pinacoteca of Castelfiorentino. A workshop painting, perhaps assignable to a slightly later period, is the Maestà with Saints Francis and Dominic now in the Uffizi. During the pontificate of Pope Nicholas IV, the first Franciscan pope, Cimabue worked in Assisi. At Assisi, in the transept of the Lower Basilica of San Francesco, he created a fresco named Madonna with Child Enthroned, Four Angels and St Francis. The left portion of this fresco is lost, but it may have shown St Anthony of Padua (the authorship of the painting has been recently disputed for technical and stylistic reasons). Cimabue was subsequently commissioned to decorate the apse and the transept of the Upper Basilica of Assisi, in the same period of time that Roman artists were decorating the nave. The cycle he created there comprises scenes from the Gospels, the lives of the Virgin Mary, St Peter and St Paul. The paintings are now in poor condition because of oxidation of the brighter colours that were used by the artist. The Maestà of Santa Trinita, dated to c. 1290–1300, which was originally painted for the church of Santa Trinita in Florence, is now in the Uffizi Gallery. The softer expression of the characters suggests that it was influenced by Giotto, who was by then already active as a painter. Cimabue spent the last period of his life, 1301 to 1302, in Pisa. There, he was commissioned to finish a mosaic of Christ Enthroned, originally begun by Maestro Francesco, in the apse of the city's cathedral. Cimabue was to create the part of the mosaic depicting St John the Evangelist, which remains the sole surviving work documented as being by the artist. Cimabue died around 1302. Character According to Vasari, quoting a contemporary of Cimabue, "Cimabue of Florence was a painter who lived during the author's own time, a nobler man than anyone knew but he was as a result so haughty and proud that if someone pointed out to him any mistake or defect in his work, or if he had noted any himself... he would immediately destroy the work, no matter how precious it might be." The nickname Cimabue translates as "bull-head" but also possibly as "one who crushes the views of others", from the Latin word cimare, meaning "top", "shear", and "blunt". The conclusion for the second meaning is drawn from similar commentaries on Dante, who was also known "for being contemptuous of criticism". Legacy History has long regarded Cimabue as the last of an era that was overshadowed by the Italian Renaissance. As early as 1543, Vasari wrote of Cimabue, "Cimabue was, in one sense, the principal cause of the renewal of painting," with the qualification that, "Giotto truly eclipsed Cimabue's fame just as a great light eclipses a much smaller one." In Dante's Divine Comedy In Canto XI of his Purgatorio, Dante laments the quick loss of public interest in Cimabue in the face of Giotto's revolution in art: In Purgatorio, although not seen, Cimabue is mentioned by Oderisi, who is also repenting for his pride. Cimabue serves to represent
similar ideas Historian Howard Zinn argues that during the Gilded Age in the United States, the U.S. government was acting exactly as Karl Marx described capitalist states: "pretending neutrality to maintain order, but serving the interests of the rich". According to economist Joseph Stiglitz, there has been a severe increase in the market power of corporations, largely due to U.S. antitrust laws being weakened by neoliberal reforms, leading to growing income inequality and a generally underperforming economy. He states that to improve the economy, it is necessary to decrease the influence of money on U.S. politics. In his 1956 book The power elite, sociologist C Wright Mills stated that together with the military and political establishment, leaders of the biggest corporations form a "power elite" that is in control of the U.S. Economist Jeffrey Sachs described the United States as a corporatocracy in The Price of Civilization (2011). He suggested that it arose from four trends: weak national parties and strong political representation of individual districts, the large U.S. military establishment after World War II, large corporations using money to finance election campaigns, and globalization tilting the balance of power away from workers. In 2013, economist Edmund Phelps criticized the economic system of the U.S. and other western countries in recent decades as being what he calls "the new corporatism", which he characterizes as a system in which the state is far too involved in the economy, tasked with "protecting everyone against everyone else", but in which at the same time big companies have a great deal of influence on the government, with lobbyists' suggestions being "welcome, especially if they come with bribes". Corporate influence on politics in the United States Corruption During the Gilded Age in the United States, corruption was rampant as business leaders spent significant amounts of money ensuring that government did not regulate their activities. Corporate influence on legislation Corporations have a significant influence on the regulations and regulators that monitor them. For example, Senator Elizabeth Warren explained in December 2014 how an omnibus spending bill required to fund the government was modified late in the process to weaken banking regulations. The modification made it easier to allow taxpayer-funded bailouts of banking "swaps entities", which the Dodd-Frank banking regulations prohibited. She singled out Citigroup, one of the largest banks, which had a role in modifying the legislation. She also explained how both Wall Street bankers and members of the government that formerly had worked on Wall Street stopped bi-partisan legislation that would have broken up the largest banks. She repeated President Theodore Roosevelt's warnings regarding powerful corporate entities that threatened the "very foundations of Democracy." In a 2015 interview, former President Jimmy Carter stated that the United States is now "an oligarchy with unlimited political bribery" due to the Citizens United v. FEC ruling, which effectively removed limits on donations to political candidates. Wall Street spent a record $2 billion trying to influence the 2016 United States elections. 
Joel Bakan, the University of British Columbia Law professor and author of the award-winning book The Corporation: The Pathological Pursuit of Profit and Power, writes: Perceived symptoms of corporatocracy in the United States Share of income With regard to income inequality, the 2014 income analysis of the University of California, Berkeley economist Emmanuel Saez confirms that relative growth of income and wealth is not occurring among small and mid-sized entrepreneurs and business owners (who generally populate the lower half of top one per-centers in income), but instead only among the top .1 percent of the income distribution, who earn $2,000,000 or more every year. Corporate power can also increase income inequality. Nobel Prize winner of economics Joseph Stiglitz wrote in May 2011: "Much of today’s inequality is due to manipulation of the financial system, enabled by changes in the rules that have been bought and paid for by the financial industry itself—one of its best investments ever. The government lent money to financial institutions at close to zero percent interest and provided generous bailouts on favorable terms when all else failed. Regulators turned a blind eye to a lack of transparency and to conflicts of interest." Stiglitz explained that the top 1% got nearly "one-quarter" of the income and own approximately 40% of the wealth. Measured relative to GDP, total compensation and its component wages and salaries have been declining since 1970. This indicates
low-tax countries since 1982, including 15 since 2012. Six more also planned to do so in 2015. Stock buybacks versus wage increases One indication of increasing corporate power was the removal of restrictions on their ability to buy back stock, contributing to increased income inequality. Writing in the Harvard Business Review in September 2014, William Lazonick blamed record corporate stock buybacks for reduced investment in the economy and a corresponding impact on prosperity and income inequality. Between 2003 and 2012, the 449 companies in the S&P 500 used 54% of their earnings ($2.4 trillion) to buy back their own stock. An additional 37% was paid to stockholders as dividends. Together, these were 91% of profits. This left little for investment in productive capabilities or higher income for employees, shifting more income to capital rather than labor. He blamed executive compensation arrangements, which are heavily based on stock options, stock awards, and bonuses, for meeting earnings per share (EPS) targets. EPS increases as the number of outstanding shares decreases. Legal restrictions on buybacks were greatly eased in the early 1980s. He advocates changing these incentives to limit buybacks. In the 12 months to March 31, 2014, S&P 500 companies increased their stock buyback payouts by 29% year on year, to $534.9 billion. U.S. companies are projected to increase buybacks to $701 billion in 2015, according to Goldman Sachs, an 18% increase over 2014. For scale, annual non-residential fixed investment (a proxy for business investment and a major GDP component) was estimated to be about $2.1 trillion for 2014. Industry concentration Brid Brennan of the Transnational Institute explained how the concentration of corporations increases their influence over government: "It's not just their size, their enormous wealth and assets that make the TNCs [transnational corporations] dangerous to democracy. It's also their concentration, their capacity to influence, and often infiltrate, governments and their ability to act as a genuine international social class in order to defend their commercial interests against the common good. It is such decision-making power as well as the power to impose deregulation over the past 30 years, resulting in changes to national constitutions, and to national and international legislation which has created the environment for corporate crime and impunity." Brennan concludes that this concentration in power leads to again more concentration of income and wealth. An example of such industry concentration is in banking. The top 5 U.S. banks had approximately 30% of the U.S. banking assets in 1998; this rose to 45% by 2008 and to 48% by 2010, before falling to 47% in 2011.
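As a rough illustration of the earnings-per-share arithmetic behind the buyback argument above, consider the following sketch; all figures in it are hypothetical and are not taken from Lazonick's study or the other sources cited.

```python
# Hypothetical numbers illustrating the EPS mechanics described above:
# a buyback reduces shares outstanding, so earnings per share rises
# even though total earnings are unchanged.

earnings = 1_000_000_000        # annual profit, in dollars (hypothetical)
shares_before = 500_000_000     # shares outstanding before the buyback
buyback_spend = 1_500_000_000   # cash spent repurchasing shares
share_price = 50.0              # assumed repurchase price per share

shares_retired = buyback_spend / share_price
shares_after = shares_before - shares_retired

eps_before = earnings / shares_before   # $2.00 per share
eps_after = earnings / shares_after     # ~$2.13 per share

print(f"EPS before: {eps_before:.2f}, after: {eps_after:.2f}, "
      f"change: {(eps_after / eps_before - 1):+.1%}")   # about +6.4%
```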
The I Am Canadian campaign by Molson beer, most notably the commercial featuring Joe Canadian, infused domestically brewed beer and nationalism. Canada's television industry is in full expansion as a site for Hollywood productions. Since the 1980s, Canada, and Vancouver in particular, has become known as Hollywood North. The American TV series Queer as Folk was filmed in Toronto. Canadian producers have been very successful in the field of science fiction since the mid-1990s, with such shows as The X-Files, Stargate SG-1, Highlander: The Series, the new Battlestar Galactica, My Babysitter's A Vampire, Smallville, and The Outer Limits all filmed in Vancouver. The CRTC's Canadian content regulations dictate that a certain percentage of a domestic broadcaster's transmission time must include content that is produced by Canadians, or covers Canadian subjects. These regulations also apply to US cable television channels such as MTV and the Discovery Channel, which have local versions of their channels available on Canadian cable networks. Similarly, BBC Canada, while showing primarily BBC shows from the United Kingdom, also carries Canadian output. Film A number of Canadian pioneers in early Hollywood significantly contributed to the creation of the motion picture industry in the early days of the 20th century. Over the years, many Canadians have made enormous contributions to the American entertainment industry, although they are frequently not recognized as Canadians. Canada has developed a vigorous film industry that has produced a variety of well-known films and actors. In fact, this eclipsing may sometimes be creditable for the bizarre and innovative directions of some works, such as auteurs Atom Egoyan (The Sweet Hereafter, 1997) and David Cronenberg (The Fly, Naked Lunch, A History of Violence) and the avant-garde work of Michael Snow and Jack Chambers. Also, the distinct French-Canadian society permits the work of directors such as Denys Arcand and Denis Villeneuve, while First Nations cinema includes the likes of Atanarjuat: The Fast Runner. At the 76th Academy Awards, Arcand's The Barbarian Invasions became Canada's first film to win the Academy Award for Best Foreign Language Film. The National Film Board of Canada is 'a public agency that produces and distributes films and other audiovisual works which reflect Canada to Canadians and the rest of the world'. Canada has produced many popular documentaries such as The Corporation, Nanook of the North, Final Offer, and Canada: A People's History. The Toronto International Film Festival (TIFF) is considered by many to be one of the most prevalent film festivals for Western cinema. It is the première film festival in North America from which the Oscars race begins. Music The music of Canada has reflected the multi-cultural influences that have shaped the country. Indigenous, the French, and the British have all made historical contributions to the musical heritage of Canada. The country has produced its own composers, musicians and ensembles since the mid-1600s.
From the 17th century onward, Canada has developed a music infrastructure that includes church halls; chamber halls; conservatories; academies; performing arts centres; record companys; radio stations, and television music-video channels. The music has subsequently been heavily influenced by American culture because of its proximity and migration between the two countries. Canadian rock has had a considerable impact on the development of modern popular music and the development of the most popular subgenres. Patriotic music in Canada dates back over 200 years as a distinct category from British patriotism, preceding the first legal steps to independence by over 50 years. The earliest known song, "The Bold Canadian", was written in 1812. The national anthem of Canada, "O Canada" adopted in 1980, was originally commissioned by the Lieutenant Governor of Quebec, the Honourable Théodore Robitaille, for the 1880 Saint-Jean-Baptiste Day ceremony. Calixa Lavallée wrote the music, which was a setting of a patriotic poem composed by the poet and judge Sir Adolphe-Basile Routhier. The text was originally only in French, before English lyrics were written in 1906. Music broadcasting in the country is regulated by the Canadian Radio-television and Telecommunications Commission (CRTC). The Canadian Academy of Recording Arts and Sciences presents Canada's music industry awards, the Juno Awards, which were first awarded in a ceremony during the summer of 1970. Media Canada has a well-developed media sector, but its cultural output—particularly in English films, television shows, and magazines—is often overshadowed by imports from the United States. Television, magazines, and newspapers are primarily for-profit corporations based on advertising, subscription, and other sales-related revenues. Nevertheless, both the television broadcasting and publications sectors require a number of government interventions to remain profitable, ranging from regulation that bars foreign companies in the broadcasting industry to tax laws that limit foreign competition in magazine advertising. The promotion of multicultural media in Canada began in the late 1980s as the multicultural policy was legislated in 1988. In the Multiculturalism Act, the federal government proclaimed the recognition of the diversity of Canadian culture. Thus, multicultural media became an integral part of Canadian media overall. Upon numerous government reports showing lack of minority representation or minority misrepresentation, the Canadian government stressed separate provision be made to allow minorities and ethnicities of Canada to have their own voice in the media. Sports Sports in Canada consists of a variety of games. Although there are many contests that Canadians value, the most common are ice hockey, box lacrosse, Canadian football, basketball, soccer, curling, baseball and ringette. All but curling and soccer are considered domestic sports as they were either invented by Canadians or trace their roots to Canada. Ice hockey, referred to as simply "hockey", is Canada's most prevalent winter sport, its most popular spectator sport, and its most successful sport in international competition. It is Canada's official national winter sport. Lacrosse, a sport with indigenous origins, is Canada's oldest and official summer sport. Canadian football is Canada's second most popular spectator sport, and the Canadian Football League's annual championship, the Grey Cup, is the country's largest annual sports event. 
While other sports have a larger spectator base, association football, known in Canada as soccer in both English and French, has the most registered players of any team sport in Canada, and is the most played sport with all demographics, including ethnic origin, ages and genders. Professional teams exist in many cities in Canada – with a trio of teams in North America's top pro league, Major League Soccer – and international soccer competitions such as the FIFA World Cup, UEFA Euro and the UEFA Champions League attract some of the biggest audiences in Canada. Other popular team sports include curling, street hockey, cricket, rugby league, rugby union, softball and Ultimate frisbee. Popular individual sports include auto racing, boxing, karate, kickboxing, hunting, sport shooting, fishing, cycling, golf, hiking, horse racing, ice skating, skiing, snowboarding, swimming, triathlon, disc golf, water sports, and several forms of wrestling. As a country with a generally cool climate, Canada has enjoyed greater success at the Winter Olympics than at the Summer Olympics, although significant regional variations in climate allow for a wide variety of both team and individual sports. Great achievements in Canadian sports are recognized by Canada's Sports Hall of Fame, while the Lou Marsh Trophy is awarded annually to Canada's top athlete by a panel of journalists. There are numerous other Sports Halls of Fame in Canada. Cuisine Canadian cuisine varies widely depending on the region. The former Canadian prime minister Joe Clark has been paraphrased to have noted: "Canada has a cuisine of cuisines. Not a stew pot, but a smorgasbord." While there are considerable overlaps between Canadian food and the rest of the cuisine in North America, many unique dishes (or versions of certain dishes) are found and available only in the country. Common contenders for the Canadian national food include the Quebec-made poutine and the French-Canadian butter tarts. Other popular Canadian-made foods include the indigenous fried bread bannock, French tourtière, Kraft Dinner, ketchup chips, date squares, Nanaimo bars, back bacon, the Caesar cocktail and many more. The Canadian province of Quebec is the birthplace and world's largest producer of maple syrup. The Montreal-style bagel and Montreal-style smoked meat are both food items originally developed by Jewish communities living in Quebec. The three earliest cuisines of Canada have First Nations, English, and French roots. The indigenous population of Canada often have their own traditional cuisine. The cuisines of English Canada are closely related to British and American cuisine. Finally, the traditional cuisines of French Canada have evolved from 16th-century French cuisine because of the tough conditions of colonial life and the winter provisions of the coureurs des bois. With subsequent waves of immigration in the 18th and 19th century from Central, Southern, and Eastern Europe, and then from Asia, Africa and the Caribbean, the regional cuisines were affected accordingly. Outside views In a 2002 interview with the Globe and Mail, the Aga Khan, the 49th Imam of the Ismaili Muslims, described Canada as "the most successful pluralist society on the face of our globe", citing it as "a model for the world". A 2007 poll ranked Canada as the country with the most positive influence in the world. 28,000 people in 27 countries were asked to rate 12 countries as either having a positive or negative worldwide influence.
Canada's overall influence rating topped the list with 54 per cent of respondents rating it mostly positive and only 14 per cent mostly negative. A global opinion poll for the BBC saw Canada ranked the second most positively viewed nation in the world (behind Germany) in 2013 and 2014. The United States is home to a number of perceptions about Canadian culture, due to the countries' partially shared heritage and the relatively large number of cultural features common to both the US and Canada. For example, the average Canadian may be perceived as more reserved than his or her American counterpart. Canada and the United States are often inevitably compared as sibling countries, and the
muster national responses to major national issues. Humour Canadian humour is an integral part of the Canadian Identity. There are several traditions in Canadian humour in both English and French. While these traditions are distinct and at times very different, there are common themes that relate to Canadians' shared history and geopolitical situation in the Western Hemisphere and the world. Various trends can be noted in Canadian comedy. One trend is the portrayal of a "typical" Canadian family in an ongoing radio or television series. Other trends include outright absurdity, and political and cultural satire. Irony, parody, satire, and self-deprecation are arguably the primary characteristics of Canadian humour. The beginnings of Canadian national radio comedy date to the late 1930s with the debut of The Happy Gang, a long-running weekly variety show that was regularly sprinkled with corny jokes in between tunes. Canadian television comedy begins with Wayne and Shuster, a sketch comedy duo who performed as a comedy team during the Second World War, and moved their act to radio in 1946 before moving on to television. Second City Television, otherwise known as SCTV, Royal Canadian Air Farce, This Hour Has 22 Minutes, The Kids in the Hall, Trailer Park Boys, Corner gas and more recently Schitt's Creek are regarded as television shows which were very influential on the development of Canadian humour. Canadian comedians have had great success in the film industry and are amongst the most recognized in the world. Humber College in Toronto and the École nationale de l'humour in Montreal offer post-secondary programmes in comedy writing and performance. Montreal is also home to the bilingual (English and French) Just for Laughs festival and to the Just for Laughs Museum, a bilingual, international museum of comedy. Canada has a national television channel, The Comedy Network, devoted to comedy. Many Canadian cities feature comedy clubs and showcases, most notable, The Second City branch in Toronto (originally housed at The Old Fire Hall) and the Yuk Yuk's national chain. The Canadian Comedy Awards were founded in 1999 by the Canadian Comedy Foundation for Excellence, a not-for-profit organization. Symbols Predominant symbols of Canada include the maple leaf, beaver, and the Canadian horse. Many official symbols of the country such as the Flag of Canada have been changed or modified over the past few decades to Canadianize them and de-emphasise or remove references to the United Kingdom. Other prominent symbols include the sports of hockey and lacrosse, the Canada Goose, the Royal Canadian Mounted Police, the Canadian Rockies, and more recently the totem pole and Inuksuk. With material items such as Canadian beer, maple syrup, tuques, canoes, nanaimo bars, butter tarts and the Quebec dish of poutine being defined as uniquely Canadian. Symbols of the Canadian monarchy continue to be featured in, for example, the Arms of Canada, the armed forces, and the prefix Her Majesty's Canadian Ship. The designation Royal remains for institutions as varied as the Royal Canadian Armed Forces, Royal Canadian Mounted Police and the Royal Winnipeg Ballet. Arts Visual arts Indigenous artists were producing art in the territory that is now called Canada for thousands of years prior to the arrival of European settler colonists and the eventual establishment of Canada as a nation state. 
Like the peoples that produced them, indigenous art traditions spanned territories that extended across the current national boundaries between Canada and the United States. The majority of indigenous artworks preserved in museum collections date from the period after European contact and show evidence of the creative adoption and adaptation of European trade goods such as metal and glass beads. Canadian sculpture has been enriched by the walrus ivory, muskox horn and caribou antler and soapstone carvings by the Inuit artists. These carvings show objects and activities from the daily life, myths and legends of the Inuit. Inuit art since the 1950s has been the traditional gift given to foreign dignitaries by the Canadian government. The works of most early Canadian painters followed European trends. During the mid-19th century, Cornelius Krieghoff, a Dutch-born artist in Quebec, painted scenes of the life of the habitants (French-Canadian farmers). At about the same time, the Canadian artist Paul Kane painted pictures of indigenous life in western Canada. A group of landscape painters called the Group of Seven developed the first distinctly Canadian style of painting. All these artists painted large, brilliantly coloured scenes of the Canadian wilderness. Since the 1930s, Canadian painters have developed a wide range of highly individual styles. Emily Carr became famous for her paintings of totem poles in British Columbia. Other noted painters have included the landscape artist David Milne, the painters Jean-Paul Riopelle, Harold Town and Charles Carson and multi-media artist Michael Snow. The abstract art group Painters Eleven, particularly the artists William Ronald and Jack Bush, also had an important impact on modern art in Canada. Government support has played a vital role in their development enabling visual exposure through publications and periodicals featuring Canadian art, as has the establishment of numerous art schools and colleges across the country. Literature Canadian literature is often divided into French- and English-language literatures, which are rooted in the literary traditions of France and Britain, respectively. Canada's early literature, whether written in English or French, often reflects the Canadian perspective on nature, frontier life, and Canada's position in the world, for example the poetry of Bliss Carman or the memoirs of Susanna Moodie and Catherine Parr Traill. These themes, and Canada's literary history, inform the writing of successive generations of Canadian authors, from Leonard Cohen to Margaret Atwood. By the mid-20th century, Canadian writers were exploring national themes for Canadian readers. Authors were trying to find a distinctly Canadian voice, rather than merely emulating British or American writers. Canadian identity is closely tied to its literature. The question of national identity recurs as a theme in much of Canada's literature, from Hugh MacLennan's Two Solitudes (1945) to Alistair MacLeod's No Great Mischief (1999). Canadian literature is often categorized by region or province; by the socio-cultural origins of the author (for example, Acadians, indigenous peoples, LGBT, and Irish Canadians); and by literary period, such as "Canadian postmoderns" or "Canadian Poets Between the Wars". Canadian authors have accumulated numerous international awards. In 1992, Michael Ondaatje became the first Canadian to win the Man Booker Prize for The English Patient. 
Margaret Atwood won the Booker in 2000 for The Blind Assassin and Yann Martel won it in 2002 for the Life of Pi. Carol Shields's The Stone Diaries won the Governor General's Awards in Canada in 1993, the 1995 Pulitzer Prize for Fiction, and the 1994 National Book Critics Circle Award. In 2013, Alice Munro was the first Canadian to be awarded the Nobel Prize in Literature for her work as "master of the modern short story". Munro is also a recipient of the Man Booker International Prize for her lifetime body of work, and three-time winner of Canada's Governor General's Award for fiction. Theatre Canada has had a thriving stage theatre scene since the late 1800s. Theatre festivals draw many tourists in the summer months, especially the Stratford Shakespeare Festival in Stratford, Ontario, and the Shaw Festival in Niagara-on-the-Lake, Ontario. The Famous People Players are only one of many touring companies that have also developed an international reputation. Canada also hosts one of the largest fringe festivals, the Edmonton International Fringe Festival. Canada's largest cities host a variety of modern and historical venues. The Toronto Theatre District is Canada's largest, as well as being the third largest English-speaking theatre district in the world. In addition to original Canadian works, shows from the West End and Broadway frequently tour in Toronto. Toronto's Theatre District includes the venerable Roy Thomson Hall; the Princess of Wales Theatre; the Tim Sims Playhouse; The Second City; the Canon Theatre; the Panasonic Theatre; the Royal Alexandra Theatre; historic Massey Hall; and the city's new opera house, the Sony Centre for the Performing Arts. Toronto's Theatre District also includes the Theatre Museum Canada. Montreal's theatre district ("Quartier des Spectacles") is the scene of performances that are mainly French-language, although the city also boasts a lively anglophone theatre scene, such as the Centaur Theatre. Large French theatres in the city include Théâtre Saint-Denis and Théâtre du Nouveau Monde. Vancouver is host to, among others, the Vancouver Fringe Festival, the Arts Club Theatre Company, Carousel Theatre, Bard on the Beach, Theatre Under the Stars and Studio 58. Calgary is home to Theatre Calgary, a mainstream regional theatre; Alberta Theatre Projects, a major centre for new play development in Canada; the Calgary Animated Objects Society; and One Yellow Rabbit, a touring company. There are three major theatre venues in Ottawa; the Ottawa Little Theatre, originally called the Ottawa Drama League at its inception in 1913, is the longest-running community theatre company in Ottawa. Since 1969, Ottawa has been the home of the National Arts Centre, a major performing-arts venue that houses four stages and is home to the National Arts Centre Orchestra, the Ottawa Symphony Orchestra and Opera Lyra Ottawa. Established in 1975, the Great Canadian Theatre Company specializes in the production of Canadian plays at a local level. Television Canadian television, especially supported by the Canadian Broadcasting Corporation, is the home of a variety of locally produced shows. French-language television, like French Canadian film, is buffered from excessive American influence by the fact of language, and likewise supports a host of home-grown productions. The success of French-language domestic television in Canada often exceeds that of its English-language counterpart. In recent years nationalism has been used to prompt products on television. 
and experiencing a relatively low level of income disparity. The country's average household disposable income per capita is over US$23,900, higher than the OECD average. Furthermore, the Toronto Stock Exchange is the seventh-largest stock exchange in the world by market capitalization, listing over 1,500 companies with a combined market capitalization of over US$2 trillion. For further information on the types of business entities in this country and their abbreviations, see "Business entities in Canada". Largest firms This list shows firms in the Fortune Global 500, which ranks firms by total revenues reported before March 31, 2017. Only the top five firms (if available) are included as
a sample. Notable firms This list includes notable companies with primary headquarters located in the country. The industry and sector follow the Industry Classification Benchmark taxonomy. Organizations which have ceased operations are included and noted as defunct. See also List of largest companies in Canada List of largest public companies in Canada by profit List of Canadian mobile phone companies List of mutual fund companies in Canada List of Canadian telephone companies List of defunct Canadian companies List
of Eponymy. Poisson noted that if the mean of observations following such a distribution were taken, the mean error did not converge to any finite number. As such, Laplace's use of the central limit theorem with such distribution was inappropriate, as it assumed a finite mean and variance. Despite this, Poisson did not regard the issue as important, in contrast to Bienaymé, who was to engage Cauchy in a long dispute over the matter. Characterisation Probability density function The Cauchy distribution has the probability density function (PDF) where is the location parameter, specifying the location of the peak of the distribution, and is the scale parameter which specifies the half-width at half-maximum (HWHM), alternatively is full width at half maximum (FWHM). is also equal to half the interquartile range and is sometimes called the probable error. Augustin-Louis Cauchy exploited such a density function in 1827 with an infinitesimal scale parameter, defining what would now be called a Dirac delta function. The maximum value or amplitude of the Cauchy PDF is , located at . It is sometimes convenient to express the PDF in terms of the complex parameter The special case when and is called the standard Cauchy distribution with the probability density function In physics, a three-parameter Lorentzian function is often used: where is the height of the peak. The three-parameter Lorentzian function indicated is not, in general, a probability density function, since it does not integrate to 1, except in the special case where Cumulative distribution function The cumulative distribution function of the Cauchy distribution is: and the quantile function (inverse cdf) of the Cauchy distribution is It follows that the first and third quartiles are , and hence the interquartile range is . For the standard distribution, the cumulative distribution function simplifies to arctangent function : Entropy The entropy of the Cauchy distribution is given by: The derivative of the quantile function, the quantile density function, for the Cauchy distribution is: The differential entropy of a distribution can be defined in terms of its quantile density, specifically: The Cauchy distribution is the maximum entropy probability distribution for a random variate for which or, alternatively, for a random variate for which In its standard form, it is the maximum entropy probability distribution for a random variate for which Kullback-Leibler divergence The Kullback-Leibler divergence between two Cauchy distributions has the following symmetric closed-form formula: Any f-divergence between two Cauchy distributions is symmetric and can be expressed as a function of the chi-squared divergence. Closed-form expression for the total variation, Jensen–Shannon divergence, Hellinger distance, etc are available. Properties The Cauchy distribution is an example of a distribution which has no mean, variance or higher moments defined. Its mode and median are well defined and are both equal to . When and are two independent normally distributed random variables with expected value 0 and variance 1, then the ratio has the standard Cauchy distribution. 
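For reference, the density, distribution, and quantile formulas referred to in this section can be stated in their standard form. The symbols x_0 for the location parameter and γ for the scale parameter (HWHM) are our notational choice, since the original symbols are not shown in the text:

```latex
% PDF of Cauchy(x_0, \gamma); the standard Cauchy is the case x_0 = 0, \gamma = 1
f(x; x_0, \gamma) = \frac{1}{\pi\gamma\left[1 + \left(\frac{x - x_0}{\gamma}\right)^{2}\right]}
                  = \frac{1}{\pi}\,\frac{\gamma}{(x - x_0)^{2} + \gamma^{2}},
\qquad
f(x; 0, 1) = \frac{1}{\pi\,(1 + x^{2})}.

% CDF and quantile (inverse CDF)
F(x; x_0, \gamma) = \frac{1}{\pi}\arctan\left(\frac{x - x_0}{\gamma}\right) + \frac{1}{2},
\qquad
Q(p; x_0, \gamma) = x_0 + \gamma\,\tan\left[\pi\left(p - \tfrac{1}{2}\right)\right].

% Peak height and quartiles (so \gamma is half the interquartile range)
f(x_0; x_0, \gamma) = \frac{1}{\pi\gamma},
\qquad
Q\!\left(\tfrac{1}{4}\right) = x_0 - \gamma, \quad Q\!\left(\tfrac{3}{4}\right) = x_0 + \gamma .
```

These expressions are consistent with the statements above that the amplitude of the PDF is 1/(πγ), that γ equals half the interquartile range, and that the standard cumulative distribution function reduces to an arctangent.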
If is a positive-semidefinite covariance matrix with strictly positive diagonal entries, then for independent and identically distributed and any random -vector independent of and such that and (defining a categorical distribution) it holds that If are independent and identically distributed random variables, each with a standard Cauchy distribution, then the sample mean has the same standard Cauchy distribution. To see that this is true, compute the characteristic function of the sample mean: where is the sample mean. This example serves to show that the condition of finite variance in the central limit theorem cannot be dropped. It is also an example of a more generalized version of the central limit theorem that is characteristic of all stable distributions, of which the Cauchy distribution is a special case. The Cauchy distribution is an infinitely divisible probability distribution. It is also a strictly stable distribution. The standard Cauchy distribution coincides with the Student's t-distribution with one degree of freedom. Like all stable distributions, the location-scale family to which the Cauchy distribution belongs is closed under linear transformations with real coefficients. In addition, the Cauchy distribution is closed under linear fractional transformations with real coefficients. In this connection, see also McCullagh's parametrization of the Cauchy distributions. Characteristic function Let denote a Cauchy distributed random variable. The characteristic function of the Cauchy distribution is given by which is just the Fourier transform of the probability density. The original probability density may be expressed in terms of the characteristic function, essentially by using the inverse Fourier transform: The nth moment of a distribution is the nth derivative of the characteristic function evaluated at . Observe that the characteristic function is not differentiable at the origin: this corresponds to the fact that the Cauchy distribution does not have well-defined moments higher than the zeroth moment. Explanation of undefined moments Mean If a probability distribution has a density function , then the mean, if it exists, is given by We may evaluate this two-sided improper integral by computing the sum of two one-sided improper integrals. That is, for an arbitrary real number . For the integral to exist (even as an infinite value), at least one of the terms in this sum should be finite, or both should be infinite and have the same sign. But in the case of the Cauchy distribution, both the terms in this sum (2) are infinite and have opposite sign. Hence (1) is undefined, and thus so is the mean. Note that the Cauchy principal value of the mean of the Cauchy distribution is which is zero. On the other hand, the related integral is not zero, as can be seen easily by computing the integral. This again shows that the mean (1) cannot exist. Various results in probability theory about expected values, such as the strong law of large numbers, fail to hold for the Cauchy distribution. Smaller moments The absolute moments for are defined. For we have Higher moments The Cauchy distribution does not have finite moments of any order. Some of the higher raw moments do exist and have a value of infinity, for example, the raw second moment: By re-arranging the formula, one can see that the second moment is essentially the infinite integral of a constant (here 1). Higher even-powered raw moments will also evaluate to infinity. 
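The "infinite integral of a constant" remark about the raw second moment can be written out explicitly for the standard Cauchy case (the general location-scale case only shifts and rescales it):

```latex
\mathbb{E}\left[X^{2}\right]
  = \int_{-\infty}^{\infty} \frac{x^{2}}{\pi\,(1 + x^{2})}\,dx
  = \int_{-\infty}^{\infty} \frac{1}{\pi}\left(1 - \frac{1}{1 + x^{2}}\right) dx
  = \frac{1}{\pi}\int_{-\infty}^{\infty} dx \;-\; 1
  = \infty .
```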
Odd-powered raw moments, however, are undefined, which is distinctly different from existing with the value of infinity. The odd-powered raw moments are undefined because their values are essentially equivalent to since the two halves of the integral both diverge and have opposite signs. The first raw moment is the mean, which, being odd, does not exist. (See also the discussion above about this.) This in turn
means that all of the central moments and standardized moments are undefined since they are all based on the mean. The variance—which is the second central moment—is likewise non-existent (despite the fact that the raw second moment exists with the value infinity). The results for higher moments follow from Hölder's inequality, which implies that higher moments (or halves of moments) diverge if lower ones do. Moments of truncated distributions Consider the truncated distribution defined by restricting the standard Cauchy distribution to the interval . Such a truncated distribution has all moments (and the central limit theorem applies for i.i.d. observations from it); yet for almost all practical purposes it behaves like a Cauchy distribution.
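Because the moments discussed above do not exist, the characteristic function mentioned earlier is what actually carries the distribution's information. A standard statement of it, in the same x_0, γ notation used in the block above (again our choice of symbols), together with the sample-mean computation alluded to in the Properties discussion, is:

```latex
% Characteristic function of Cauchy(x_0, \gamma)
\varphi_X(t) = \mathbb{E}\left[e^{itX}\right] = \exp\left(i x_0 t - \gamma\,|t|\right).

% Sample mean of n i.i.d. Cauchy(x_0, \gamma) variables
\varphi_{\bar{X}}(t) = \prod_{k=1}^{n} \varphi_{X_k}\!\left(\frac{t}{n}\right)
                     = \left[\exp\left(i x_0 \frac{t}{n} - \gamma\,\frac{|t|}{n}\right)\right]^{n}
                     = \exp\left(i x_0 t - \gamma\,|t|\right)
                     = \varphi_X(t).
```

So the sample mean has exactly the same Cauchy law as a single observation, and the kink of |t| at t = 0 is the non-differentiability at the origin that the text links to the absence of a first moment.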
Estimation of parameters Because the parameters of the Cauchy distribution do not correspond to a mean and variance, attempting to estimate the parameters of the Cauchy distribution by using a sample mean and a sample variance will not succeed. For example, if an i.i.d. sample of size n is taken from a Cauchy distribution, one may calculate the sample mean as: Although the sample values will be concentrated about the central value , the sample mean will become increasingly variable as more observations are taken, because of the increased probability of encountering sample points with a large absolute value. In fact, the distribution of the sample mean will be equal to the distribution of the observations themselves; i.e., the sample mean of a large sample is no better (or worse) an estimator of than any single observation from the sample. Similarly, calculating the sample variance will result in values that grow larger as more observations are taken. Therefore, more robust means of estimating the central value and the scaling parameter are needed. One simple method is to take the median value of the sample as an estimator of and half the sample interquartile range as an estimator of . Other, more precise and robust methods have been developed. For example, the truncated mean of the middle 24% of the sample order statistics produces an estimate for that is more efficient than using either the sample median or the full sample mean. However, because of the fat tails of the Cauchy distribution, the efficiency of the estimator decreases if more than 24% of the sample is used. Maximum likelihood can also be used to estimate the parameters and . However, this tends to be complicated by the fact that it requires finding the roots of a high-degree polynomial, and there can be multiple roots that represent local maxima. Also, while the maximum likelihood estimator is asymptotically efficient, it is relatively inefficient for small samples. The log-likelihood function for the Cauchy distribution for sample size is: Maximizing the log-likelihood function with respect to and by taking the first derivative produces the following system of equations: Note that is a monotone function in and that the solution must satisfy Solving just for requires solving a polynomial of degree , and solving just for requires solving a polynomial of degree . Therefore, whether solving for one parameter or for both parameters simultaneously, a numerical solution on a computer is typically required. The benefit of maximum likelihood estimation is asymptotic efficiency; estimating using the sample median is only about 81% as asymptotically efficient as estimating by maximum likelihood. The truncated sample mean using the middle 24% order statistics is about 88% as asymptotically efficient an estimator of as the maximum likelihood estimate. When Newton's method is used to find the solution for the maximum likelihood estimate, the middle 24% order statistics can be used as an initial solution for . The shape can be estimated using the median of absolute values, since for location-0 Cauchy variables the median of the absolute value equals the shape parameter. A short numerical illustration of these estimators is given below. Multivariate Cauchy distribution A random vector is said to have the multivariate Cauchy distribution if every linear combination of its components has a Cauchy distribution. That is, for any constant vector , the random variable should have a univariate Cauchy distribution.
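The following sketch illustrates the estimation remarks above: the sample mean does not settle down, while the sample median, half the sample interquartile range, and the truncated mean of the middle 24% of the order statistics behave as described. The sample size, random seed, and parameter values (x0 = 2.0, gamma = 0.5) are illustrative choices of ours, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the text): location x0, scale gamma.
x0, gamma = 2.0, 0.5
n = 100_000
samples = x0 + gamma * rng.standard_cauchy(n)

# The sample mean is itself Cauchy(x0, gamma), so it is no more
# informative than a single observation and varies wildly between runs.
print("sample mean:", samples.mean())

# Robust estimators mentioned above:
#   location ~ sample median
#   scale    ~ half the interquartile range (quartiles sit at x0 +/- gamma)
loc_hat = np.median(samples)
q75, q25 = np.percentile(samples, [75, 25])
scale_hat = 0.5 * (q75 - q25)
print("median (location estimate):", loc_hat)
print("half-IQR (scale estimate):", scale_hat)

# Truncated mean of the middle 24% of the order statistics,
# cited above as a more efficient location estimator than the median.
order = np.sort(samples)
lo, hi = int(0.38 * n), int(0.62 * n)
print("truncated mean (middle 24%):", order[lo:hi].mean())
```

Re-running with different seeds leaves the median, half-IQR, and truncated-mean estimates close to 2.0 and 0.5, while the sample mean keeps jumping by arbitrarily large amounts.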
The characteristic function of a multivariate Cauchy distribution is given by: where and are real functions with a homogeneous function of degree one and a positive homogeneous function of degree one. More formally: for all . An example of a bivariate Cauchy distribution can be given by: Note that in this example, even though the covariance between and is 0, and are not statistically independent. We also can write this formula for complex variable. Then the probability density function of complex cauchy is : Analogous to the univariate density, the multidimensional Cauchy density also relates to the multivariate Student distribution. They are equivalent when the degrees of freedom parameter is equal to one. The density of a dimension Student distribution with one degree of freedom becomes: Properties and details for this density can be obtained by taking it as a particular case of the multivariate Student density. Transformation properties If then If and are independent, then and If then McCullagh's parametrization of the Cauchy distributions: Expressing a Cauchy distribution in terms of one complex parameter , define to mean . If then: where , , and are real numbers. Using the same convention as above, if then: where is the circular Cauchy distribution. Lévy measure The Cauchy distribution is the stable distribution of index 1. The Lévy–Khintchine representation of such a stable distribution of parameter is given, for by: where and can be expressed explicitly. In the case
of inputs, outputs and various components with different behaviors; to use control system design tools to develop controllers for those systems; and to implement controllers in physical systems employing available technology. A system can be mechanical, electrical, fluid, chemical, financial or biological, and its mathematical modelling, analysis and controller design uses control theory in one or many of the time, frequency and complex-s domains, depending on the nature of the design problem. History Automatic control systems were first developed over two thousand years ago. The first feedback control device on record is thought to be the ancient Ktesibios's water clock in Alexandria, Egypt around the third century B.C.E. It kept time by regulating the water level in a vessel and, therefore, the water flow from that vessel. This certainly was a successful device as water clocks of similar design were still being made in Baghdad when the Mongols captured the city in 1258 A.D. A variety of automatic devices have been used over the centuries to accomplish useful tasks or simply just to entertain. The latter includes the automata, popular in Europe in the 17th and 18th centuries, featuring dancing figures that would repeat the same task over and over again; these automata are examples of open-loop control. Milestones among feedback, or "closed-loop" automatic control devices, include the temperature regulator of a furnace attributed to Drebbel, circa 1620, and the centrifugal flyball governor used for regulating the speed of steam engines by James Watt in 1788. In his 1868 paper "On Governors", James Clerk Maxwell was able to explain instabilities exhibited by the flyball governor using differential equations to describe the control system. This demonstrated the importance and usefulness of mathematical models and methods in understanding complex phenomena, and it signaled the beginning of mathematical control and systems theory. Elements of control theory had appeared earlier but not as dramatically and convincingly as in Maxwell's analysis. Control theory made significant strides over the next century. New mathematical techniques, as well as advancements in electronic and computer technologies, made it possible to control significantly more complex dynamical systems than the original flyball governor could stabilize. New mathematical techniques included developments in optimal control in the 1950s and 1960s followed by progress in stochastic, robust, adaptive, nonlinear control methods in the 1970s and 1980s. Applications of control methodology have helped to make possible space travel and communication satellites, safer and more efficient aircraft, cleaner automobile engines, and cleaner and more efficient chemical processes. Before it emerged as a unique discipline, control engineering was practiced as a part of mechanical engineering and control theory was studied as a part of electrical engineering since electrical circuits can often be easily described using control theory techniques. In the very first control relationships, a current output was represented by a voltage control input. However, not having adequate technology to implement electrical control systems, designers were left with the option of less efficient and slow responding mechanical systems. A very effective mechanical controller that is still widely used in some hydro plants is the governor. 
Later on, previous to modern power electronics, process control systems for industrial applications were devised by mechanical engineers using pneumatic and hydraulic control devices, many of which are still in use today. Control theory There are two major divisions in control theory, namely, classical and modern, which have direct implications for the control engineering applications. Classical SISO System Design The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second order and single variable system response in the time domain. A controller designed using classical theory often requires on-site tuning due to incorrect design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers. A less common implementation may include either or both a Lead or Lag filter. The ultimate end goal is to meet requirements typically provided in the time-domain called the step response, or at times in the frequency domain called the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically Gain and Phase margin and bandwidth. These characteristics may be evaluated through simulation including a dynamic model of the system under control coupled with the compensation model. Modern MIMO System Design Modern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Matrix methods are significantly limited for MIMO systems where linear independence cannot be assured in the relationship between inputs and outputs . Being fairly new, modern control theory has many areas yet to be explored. Scholars like Rudolf E. Kálmán and Aleksandr Lyapunov are well known among the people who have shaped modern control theory. Control systems Control engineering is the engineering discipline that focuses on the modeling of a diverse
range of dynamic systems (e.g. mechanical systems) and the design of controllers that will cause these systems to behave in the desired manner. Although such controllers need not be electrical, many are, and hence control engineering is often viewed as a subfield of electrical engineering. Electrical circuits, digital signal processors and microcontrollers can all be used to implement control systems. Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles. In most cases, control engineers utilize feedback when designing control systems. This is often accomplished using a PID controller system. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's torque accordingly (a minimal simulation of such a loop is sketched below). Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. In practically all such systems stability is important and control theory can help ensure stability is achieved. Although feedback is an important aspect of control engineering, control engineers may also work on the control of systems without feedback. This is known as open loop control. A classic example of open loop control is a washing machine that runs through a pre-determined cycle without the use of sensors. Control engineering education At many universities around the world, control engineering courses are taught primarily in electrical engineering and mechanical engineering, but some courses are also taught in mechatronics engineering and aerospace engineering. In others, control engineering is connected to computer science, as most control techniques today are implemented through computers, often as embedded systems (as in the automotive field). The field of control within chemical engineering is often known as process control. It deals primarily with the control of variables in a chemical process in a plant.
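To make the closed-loop idea described earlier in this section concrete (cruise control reading the measured speed back and adjusting engine force), here is a minimal discrete-time PID sketch driving a crude first-order vehicle model. The plant, gains, limits, and set point are invented for illustration and are not taken from the text.

```python
# Minimal closed-loop cruise-control sketch: a PID controller driving a
# first-order vehicle model (mass with linear drag). Illustrative only.

def simulate_cruise_control(setpoint_mps=27.0, dt=0.1, t_end=60.0,
                            kp=400.0, ki=20.0, kd=100.0,
                            mass=1200.0, drag=50.0):
    v = 0.0                       # measured vehicle speed, m/s
    integral = 0.0
    prev_error = setpoint_mps - v
    speeds = []
    for _ in range(int(t_end / dt)):
        error = setpoint_mps - v                      # feedback comparison
        integral = max(min(integral + error * dt, 200.0), -200.0)  # crude anti-windup
        derivative = (error - prev_error) / dt
        force = kp * error + ki * integral + kd * derivative        # PID law
        force = max(min(force, 4000.0), -4000.0)                    # actuator limits
        v += (force - drag * v) / mass * dt           # plant: m dv/dt = F - b v
        prev_error = error
        speeds.append(v)
    return speeds

if __name__ == "__main__":
    speeds = simulate_cruise_control()
    print(f"speed after 60 s: {speeds[-1]:.2f} m/s (target 27.00 m/s)")
```

An open-loop controller, by contrast, would apply a pre-computed force profile without ever reading the speed back, which is the washing-machine situation described above.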
Process control is taught as part of the undergraduate curriculum of any chemical engineering program and employs many of the same principles as control engineering. Other engineering disciplines also overlap with control engineering as it can be
disease, the concentration of parasites in the blood is too low to be reliably detected by microscopy or PCR, so the diagnosis is usually made using serological tests, which detect immunoglobulin G antibodies against in the blood. Two positive serology results, using different test methods, are required to confirm the diagnosis. If the test results are inconclusive, additional testing methods such as Western blot can be used. Various rapid diagnostic tests for Chagas disease are available. These tests are easily transported and can be performed by people without special training. They are useful for screening large numbers of people and testing people who cannot access healthcare facilities, but their sensitivity is relatively low, and it is recommended that a second method is used to confirm a positive result. T. cruzi parasites can be grown from blood samples by blood culture, xenodiagnosis, or by inoculating animals with the person's blood. In the blood culture method, the person's red blood cells are separated from the plasma and added to a specialized growth medium to encourage multiplication of the parasite. It can take up to six months to obtain the result. Xenodiagnosis involves feeding the blood to triatomine insects, then examining their feces for the parasite 30 to 60 days later. These methods are not routinely used, as they are slow and have low sensitivity. Prevention Efforts to prevent Chagas disease have largely focused on vector control to limit exposure to triatomine bugs. Insecticide-spraying programs have been the mainstay of vector control, consisting of spraying homes and the surrounding areas with residual insecticides. This was originally done with organochlorine, organophosphate, and carbamate insecticides, which were supplanted in the 1980s with pyrethroids. These programs have drastically reduced transmission in Brazil and Chile, and eliminated major vectors from certain regions: Triatoma infestans from Brazil, Chile, Uruguay, and parts of Peru and Paraguay, as well as Rhodnius prolixus from Central America. Vector control in some regions has been hindered by the development of insecticide resistance among triatomine bugs. In response, vector control programs have implemented alternative insecticides (e.g. fenitrothion and bendiocarb in Argentina and Bolivia), treatment of domesticated animals (which are also fed on by triatomine bugs) with pesticides, pesticide-impregnated paints, and other experimental approaches. In areas with triatomine bugs, transmission of can be prevented by sleeping under bed nets and by housing improvements that prevent triatomine bugs from colonizing houses. Blood transfusion was formerly the second-most common mode of transmission for Chagas disease. can survive in refrigerated stored blood, and can survive freezing and thawing, allowing it to persist in whole blood, packed red blood cells, granulocytes, cryoprecipitate, and platelets. The development and implementation of blood bank screening tests has dramatically reduced the risk of infection during blood transfusion. Nearly all blood donations in Latin American countries undergo Chagas screening. Widespread screening is also common in non-endemic nations with significant populations of immigrants from endemic areas including the United Kingdom (implemented in 1999), Spain (2005), the United States (2007), France and Sweden (2009), Switzerland (2012), and Belgium (2013). Blood is tested using serological tests, typically ELISAs, to detect antibodies against proteins. 
Other modes of transmission have been targeted by Chagas disease prevention programs. Treating -infected mothers during pregnancy reduces the risk of congenital transmission of the infection. To this end, many countries in Latin America have implemented routine screening of pregnant women and infants for infection, and the World Health Organization recommends screening all children born to infected mothers to prevent congenital infection from developing into chronic disease. Similarly to blood transfusions, many countries with endemic Chagas disease screen organs for transplantation with serological tests. There is no vaccine against Chagas disease. Several experimental vaccines have been tested in animals infected with and were able to reduce parasite numbers in the blood and heart, but no vaccine candidates had undergone clinical trials in humans as of 2016. Management Chagas disease is managed using antiparasitic drugs to eliminate T. cruzi from the body and symptomatic treatment to address the effects of the infection. As of 2018, benznidazole and nifurtimox were the antiparasitic drugs of choice for treating Chagas disease, though benznidazole is the only drug available in most of Latin America. For either drug, treatment typically consists of two to three oral doses per day for 60 to 90 days. Antiparasitic treatment is most effective early in the course of infection: it eliminates from 50 to 80% of people in the acute phase, but only 20–60% of those in the chronic phase. Treatment of chronic disease is more effective in children than in adults, and the cure rate for congenital disease approaches 100% if treated in the first year of life. Antiparasitic treatment can also slow the progression of the disease and reduce the possibility of congenital transmission. Elimination of does not cure the cardiac and gastrointestinal damage caused by chronic Chagas disease, so these conditions must be treated separately. Antiparasitic treatment is not recommended for people who have already developed dilated cardiomyopathy. Benznidazole is usually considered the first-line treatment because it has milder adverse effects than nifurtimox and its efficacy is better understood. Both benznidazole and nifurtimox have common side effects that can result in treatment being discontinued. The most common side effects of benznidazole are skin rash, digestive problems, decreased appetite, weakness, headache, and sleeping problems. These side effects can sometimes be treated with antihistamines or corticosteroids, and are generally reversed when treatment is stopped. However, benzidazole is discontinued in up to 29% of cases. Nifurtimox has more frequent side effects, affecting up to 97.5% of individuals taking the drug. The most common side effects are loss of appetite, weight loss, nausea and vomiting, and various neurological disorders including mood changes, insomnia, paresthesia and peripheral neuropathy. Treatment is discontinued in up to 75% of cases. Both drugs are contraindicated for use in pregnant women and people with liver or kidney failure. As of 2019, resistance to these drugs has been reported. Complications In the chronic stage, treatment involves managing the clinical manifestations of the disease. The treatment of Chagas cardiomyopathy is similar to that of other forms of heart disease. Beta blockers and ACE inhibitors may be prescribed, but some people with Chagas disease may not be able to take the standard dose of these drugs because they have low blood pressure or a low heart rate. 
To manage irregular heartbeats, people may be prescribed anti-arrhythmic drugs such as amiodarone, or have a pacemaker implanted. Blood thinners may be used to prevent thromboembolism and stroke. Chronic heart disease caused by Chagas is a common reason for heart transplantation surgery. Because transplant recipients take immunosuppressive drugs to prevent organ rejection, they are monitored using PCR to detect reactivation of the disease. People with Chagas disease who undergo heart transplantation have higher survival rates than the average heart transplant recipient. Mild gastrointestinal disease can be treated symptomatically, such as by using laxatives for constipation, or taking a prokinetic drug like metoclopramide before meals to relieve esophageal symptoms. Surgery to sever the muscles of the lower esophageal sphincter (cardiomyotomy) is indicated in more severe cases of esophageal disease, and surgical removal of the affected part of the organ may be required for advanced megacolon and megaesophagus. Epidemiology In 2017, an estimated 6.2 million people worldwide had Chagas disease, with approximately 162,000 new infections and 7,900 deaths each year. The disease resulted in a global annual economic burden estimated at US$7.2 billion in 2013, 86% of which is borne by endemic countries. Chagas disease results in the loss of over 800,000 disability-adjusted life years each year. Chagas is endemic to 21 countries in continental Latin America: Argentina, Belize, Bolivia, Brazil, Chile, Colombia, Costa Rica, Ecuador, El Salvador, French Guiana, Guatemala, Guyana, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Suriname, Uruguay, and Venezuela. The endemic area ranges from the southern United States to northern Chile and Argentina, with Bolivia (6.1%), Argentina (3.6%), and Paraguay (2.1%) exhibiting the highest prevalence of the disease. In endemic areas, due largely to vector control efforts and screening of blood donations, annual infections and deaths have fallen by 67% and more than 73% respectively from their peaks in the 1980s to 2010. Transmission by insect vector and blood transfusion has been completely interrupted in Uruguay (1997), Chile (1999), and Brazil (2006), and in Argentina, vectorial transmission has been interrupted in 13 of the 19 endemic provinces. During Venezuela's humanitarian crisis, vectorial transmission has begun occurring in areas where it had previously been interrupted and Chagas disease seroprevalence rates have increased. Transmission rates have also risen in the Gran Chaco region due to insecticide resistance and in the Amazon basin due to oral transmission. While the rate of vector-transmitted Chagas disease has declined throughout most of Latin America, the rate of orally transmitted disease has risen, possibly due to increasing urbanization and deforestation bringing people into closer contact with triatomines and altering the distribution of triatomine species. Orally transmitted Chagas disease is of particular concern in Venezuela, where 16 outbreaks have been recorded between 2007 and 2018. Chagas exists in two different ecological zones: In the Southern Cone region, the main vector lives in and around human homes. In Central America and Mexico, the main vector species lives both inside dwellings and in uninhabited areas. In both zones, Chagas occurs almost exclusively in rural areas, where also circulates in wild and domestic animals. 
commonly infects more than 100 species of mammals across Latin America including opossums, armadillos, marmosets, bats, and various rodents, all of which can be infected by the vectors or orally by eating triatomine bugs and other infected animals. Non-endemic countries Though Chagas is traditionally
Carlos Chagas, after whom it is named. Chagas disease is classified as a neglected tropical disease. Signs and symptoms Chagas disease occurs in two stages: an acute stage, which develops one to two weeks after the insect bite, and a chronic stage, which develops over many years. The acute stage is often symptom-free. When present, the symptoms are typically minor and not specific to any particular disease. Signs and symptoms include fever, malaise, headache, and enlargement of the liver, spleen, and lymph nodes. Rarely, people develop a swollen nodule at the site of infection, which is called "Romaña's sign" if it is on the eyelid, or a "chagoma" if it is elsewhere on the skin. In rare cases (less than 1–5%), infected individuals develop severe acute disease, which can involve inflammation of the heart muscle, fluid accumulation around the heart, and inflammation of the brain and surrounding tissues, and may be life-threatening. The acute phase typically lasts four to eight weeks and resolves without treatment. Unless treated with antiparasitic drugs, individuals remain infected with after recovering from the acute phase. Most chronic infections are asymptomatic, which is referred to as indeterminate chronic Chagas disease. However, over decades with the disease, approximately 30–40% of people develop organ dysfunction (determinate chronic Chagas disease), which most often affects the heart or digestive system. The most common manifestation is heart disease, which occurs in 14–45% of people with chronic Chagas disease. People with Chagas heart disease often experience heart palpitations, and sometimes fainting, due to irregular heart function. By electrocardiogram, people with Chagas heart disease most frequently have arrhythmias. As the disease progresses, the heart's ventricles become enlarged (dilated cardiomyopathy), which reduces its ability to pump blood. In many cases the first sign of Chagas heart disease is heart failure, thromboembolism, or chest pain associated with abnormalities in the microvasculature. Also common in chronic Chagas disease is damage to the digestive system, which affects 10–21% of people. Enlargement of the esophagus or colon are the most common digestive issues. Those with enlarged esophagus often experience pain (odynophagia) or trouble swallowing (dysphagia), acid reflux, cough, and weight loss. Individuals with enlarged colon often experience constipation, and may develop severe blockage of the intestine or its blood supply. Up to 10% of chronically infected individuals develop nerve damage that can result in numbness and altered reflexes or movement. While chronic disease typically develops over decades, some individuals with Chagas disease (less than 10%) progress to heart damage directly after acute disease. Signs and symptoms differ for people infected with through less common routes. People infected through ingestion of parasites tend to develop severe disease within three weeks of consumption, with symptoms including fever, vomiting, shortness of breath, cough, and pain in the chest, abdomen, and muscles. Those infected congenitally typically have few to no symptoms, but can have mild non-specific symptoms, or severe symptoms such as jaundice, respiratory distress, and heart problems. People infected through organ transplant or blood transfusion tend to have symptoms similar to those of vector-borne disease, but the symptoms may not manifest for anywhere from a week to five months. 
Chronically infected individuals who become immunosuppressed due to HIV infection can suffer particularly severe and distinct disease, most commonly characterized by inflammation in the brain and surrounding tissue or brain abscesses. Symptoms vary widely based on the size and location of brain abscesses, but typically include fever, headaches, seizures, loss of sensation, or other neurological issues that indicate particular sites of nervous system damage. Occasionally, these individuals also experience acute heart inflammation, skin lesions, and disease of the stomach, intestine, or peritoneum. Cause Chagas disease is caused by infection with the protozoan parasite , which is typically introduced into humans through the bite of triatomine bugs, also called "kissing bugs". When the insect defecates at the bite site, motile forms called trypomastigotes enter the bloodstream and invade various host cells. Inside a host cell, the parasite transforms into a replicative form called an amastigote, which undergoes several rounds of replication. The replicated amastigotes transform back into trypomastigotes, which burst the host cell and are released into the bloodstream. Trypomastigotes then disseminate throughout the body to various tissues, where they invade cells and replicate. Over many years, cycles of parasite replication and immune response can severely damage these tissues, particularly the heart and digestive tract. Transmission T. cruzi can be transmitted by various triatomine bugs in the genera Triatoma, Panstrongylus, and Rhodnius. The primary vectors for human infection are the species of triatomine bugs that inhabit human dwellings, namely Triatoma infestans, Rhodnius prolixus, Triatoma dimidiata and Panstrongylus megistus. These insects are known by a number of local names, including vinchuca in Argentina, Bolivia, Chile and Paraguay, barbeiro (the barber) in Brazil, pito in Colombia, chinche in Central America, and chipo in Venezuela. The bugs tend to feed at night, preferring moist surfaces near the eyes or mouth. A triatomine bug can become infected with when it feeds on an infected host. replicates in the insect's intestinal tract and is shed in the bug's feces. When an infected triatomine feeds, it pierces the skin and takes in a blood meal, defecating at the same time to make room for the new meal. The bite is typically painless, but causes itching. Scratching at the bite introduces the -laden feces into the bite wound, initiating infection. In addition to classical vector spread, Chagas disease can be transmitted through consumption of food or drink contaminated with triatomine insects or their feces. Since heating or drying kills the parasites, drinks and especially fruit juices are the most frequent source of infection. This oral route of transmission has been implicated in several outbreaks, where it led to unusually severe symptoms, likely due to infection with a higher parasite load than from the bite of a triatomine bug. T. cruzi can be transmitted independent of the triatomine bug during blood transfusion, following organ transplantation, or across the placenta during pregnancy. Transfusion with the blood of an infected donor infects the recipient 10–25% of the time. To prevent this, blood donations are screened for in many countries with endemic Chagas disease, as well as the United States. Similarly, transplantation of solid organs from an infected donor can transmit to the recipient. This is especially true for heart transplant, which transmits T. 
cruzi 75–100% of the time, and less so for transplantation of the liver (0–29%) or a kidney (0–19%). An infected mother can pass to her child through the placenta; this occurs in up to 15% of births by infected mothers. As of 2019, 22.5% of new infections occurred through congenital transmission. Pathophysiology In the acute phase of the disease, signs and symptoms are caused directly by the replication of and the immune system's response to it. During this phase, can be found in various tissues throughout the body and circulating in the blood. During the initial weeks of infection, parasite replication is brought under control by production of antibodies and activation of the host's inflammatory response, particularly cells that target intracellular pathogens such as NK cells and macrophages, driven by inflammation-signaling molecules like TNF-α and IFN-γ. During chronic Chagas disease, long-term organ damage develops over years due to continued replication of the parasite and damage from the immune system. Early in the course of the disease, is found frequently in the striated muscle fibers of the heart. As disease progresses, the heart becomes generally enlarged, with substantial regions of cardiac muscle fiber replaced by scar tissue and fat. Areas of active inflammation are scattered throughout the heart, with each housing inflammatory immune cells, typically macrophages and T cells. Late in the disease, parasites are rarely detected in the heart, and may be present at only very low levels. In the heart, colon, and esophagus, chronic disease also leads to a massive loss of nerve endings. In the heart, this may contribute to arrythmias and other cardiac dysfunction. In the colon and esophagus, loss of nervous system control is the major driver of organ dysfunction. Loss of nerves impairs the movement of food through the digestive tract, which can lead to blockage of the esophagus or colon and restriction of their blood supply. Diagnosis The presence of T. cruzi in the blood is diagnostic of Chagas disease. During the acute phase of infection, it can be detected by microscopic examination of fresh anticoagulated blood, or its buffy coat, for motile parasites; or by preparation of thin and thick blood smears stained with Giemsa, for direct visualization of parasites. Blood smear examination detects parasites in 34–85% of cases. The sensitivity increases if techniques such as microhematocrit centrifugation are used to concentrate the blood. On microscopic examination of stained blood smears, trypomastigotes have a slender body, often in the shape of an S or U, with a flagellum connected to the body by an undulating membrane. Alternatively, T. cruzi DNA can be detected by polymerase chain reaction (PCR). In acute and congenital Chagas disease, PCR is more sensitive than microscopy, and it is more reliable than antibody-based tests for the diagnosis of congenital disease because it is not affected by transfer of antibodies against from a mother to her baby (passive immunity). PCR is also used to monitor levels in organ transplant recipients and immunosuppressed people, which allows infection or reactivation to be detected at an early stage. In chronic Chagas disease, the concentration of parasites in the blood is too low to be reliably detected by microscopy or PCR, so the diagnosis is usually made using serological tests, which detect immunoglobulin G antibodies against in the blood. Two positive serology results, using different test methods, are required to confirm the diagnosis. 
wander across the hall and talk with Vince Gott who ran the lab for open-heart surgery pioneer Walt Lillehei. Gott had begun to develop a technique of running blood backwards through the veins of the heart so Lillehei could more easily operate on the aortic valve (McRae writes, "It was the type of inspired thinking that entranced Barnard"). In March 1956, Gott asked Barnard to help him run the heart-lung machine for an operation. Shortly thereafter, Wangensteen agreed to let Barnard switch to Lillehei's service. It was during this time that Barnard first became acquainted with fellow future heart transplantation surgeon Norman Shumway. Barnard also became friendly with Gil Campbell who had demonstrated that a dog's lung could be used to oxygenate blood during open-heart surgery. (The year before Barnard arrived, Lillehei and Campbell had used this procedure for twenty minutes during surgery on a 13-year-old boy with ventricular septal defect, and the boy had made a full recovery.) Barnard and Campbell met regularly for early breakfast. In 1958, Barnard received a Master of Science in Surgery for a thesis titled "The aortic valve – problems in the fabrication and testing of a prosthetic valve". The same year he was awarded a Ph.D. for his dissertation titled "The aetiology of congenital intestinal atresia". Barnard described the two years he spent in the United States as "the most fascinating time in my life." Upon returning to South Africa in 1958, Barnard was appointed head of the Department of Experimental Surgery at Groote Schuur hospital, as well as holding a joint post at the University of Cape Town. He was promoted to full-time lecturer and Director of Surgical Research at the University of Cape Town. In 1960, he flew to Moscow in order to meet Vladimir Demikhov, a top expert on organ transplants (later he credited Demikhov's accomplishment saying that "if there is a father of heart and lung transplantation then Demikhov certainly deserves this title.") In 1961 he was appointed Head of the Division of Cardiothoracic Surgery at the teaching hospitals of the University of Cape Town. He rose to the position of Associate Professor in the Department of Surgery at the University of Cape Town in 1962. Barnard's younger brother Marius, who also studied medicine, eventually became Barnard's right-hand man at the department of Cardiac Surgery. Over time, Barnard became known as a brilliant surgeon with many contributions to the treatment of cardiac diseases, such as the Tetralogy of Fallot and Ebstein's anomaly. He was promoted to Professor of Surgical Science in the Department of Surgery at the University of Cape Town in 1972. In 1981, Barnard became a founding member of the World Cultural Council. Among the many awards he received over the years, he was named Professor Emeritus in 1984. Historical context Following the first successful kidney transplant in 1953, in the United States, Barnard performed the second kidney transplant in South Africa in October 1967, the first having been done in Johannesburg the previous year. On 23 January 1964, James Hardy at the University of Mississippi Medical Center in Jackson, Mississippi, performed the world's first heart transplant and world's first cardiac xenotransplant by transplanting the heart of a chimpanzee into a desperately ill and dying man. This heart did beat in the patient's chest for approximately 60 to 90 minutes. The patient, Boyd Rush, died without ever regaining consciousness. 
Barnard had experimentally transplanted forty-eight hearts into dogs, which was about a fifth the number that Adrian Kantrowitz had performed at Maimonides Medical Center in New York and about a sixth the number Norman Shumway had performed at Stanford University in California. Barnard had no dogs which had survived longer than ten days, unlike Kantrowitz and Shumway who had had dogs survive for more than a year. With the availability of new breakthroughs introduced by several pioneers, also including Richard Lower at the Medical College of Virginia, several surgical teams were in a position to prepare for a human heart transplant. Barnard had a patient willing to undergo the procedure, but as with other surgeons, he needed a suitable donor. During the Apartheid era in South Africa, non-white persons and citizens were not given equal opportunities in the medical professions. At Groote Schuur Hospital, Hamilton Naki was an informally taught surgeon. He started out as a gardener and cleaner. One day he was asked to help out with an experiment on a giraffe. From this modest beginning, Naki became principal lab technician and taught hundreds of surgeons, and assisted with Barnard's organ transplant program. Barnard said, "Hamilton Naki had better technical skills than I did. He was a better craftsman than me, especially when it came to stitching, and had very good hands in the theatre". A popular myth, propagated principally by a widely discredited documentary film called Hidden Heart and an erroneous newspaper article, maintains incorrectly that Naki was present during the Washkansky transplant. First human-to-human heart transplant Barnard performed the world's first human-to-human heart transplant operation in the early morning hours of Sunday 3 December 1967. Louis Washkansky, a 54-year-old grocer who was suffering from diabetes and incurable heart disease, was the patient. Barnard was assisted by his brother Marius Barnard, as well as a team of thirty staff members. The operation lasted approximately five hours. Barnard stated to Washkansky and his wife Ann Washkansky that the transplant had an 80% chance of success. This has been criticised by the ethicists Peter Singer and Helga Kuhse as making claims for chances of success to the patient and family which were "unfounded" and "misleading". Barnard later wrote, "For a dying man it is not a difficult decision because he knows he is at the end. If a lion chases you to the bank of a river filled with crocodiles, you will leap into the water, convinced you have a chance to swim to the other side." The donor heart came from a young woman, Denise Darvall, who had been rendered brain dead in an accident on 2 December 1967, while crossing a street in Cape Town. On examination at Groote Schuur hospital, Darvall had two serious fractures in her skull, with no electrical activity in her brain detected, and no sign of pain when ice water was poured into her ear. Coert Venter and Bertie Bosman requested permission from Darvall's father for Denise's heart to be used in the transplant attempt. The afternoon before his first transplant, Barnard dozed at his home while listening to music. When he awoke, he decided to modify Shumway and Lower's technique. Instead of cutting straight across the back of the atrial chambers of the donor heart, he would avoid damage to the septum and instead cut two small holes for the venae cavae and pulmonary veins. 
Prior to the transplant, rather than wait for Darvall's heart to stop beating, at his brother Marius Barnard's urging, Christiaan had injected potassium into her heart to paralyse it and render her technically dead by the whole-body standard. Twenty years later, Marius Barnard recounted, "Chris stood there for a few moments, watching, then stood back and said, 'It works.'" Washkansky survived the operation and lived for 18 days before succumbing to pneumonia, his resistance to infection weakened by the immunosuppressive drugs he was taking. Additional heart transplants Barnard and his patient received worldwide publicity. As a 2017 BBC retrospective article describes, "Journalists and film crews flooded into Cape Town's Groote Schuur Hospital, soon making Barnard and Washkansky household names." Barnard himself was described as "charismatic" and "photogenic." The operation was initially reported as "successful", even though Washkansky lived only a further 18 days. Worldwide, approximately 100 transplants were performed by various doctors during 1968. However, only a third of these patients lived longer than three months. Many medical centers stopped performing transplants. In fact, a U.S. National Institutes of Health publication states, "Within several years, only Shumway's team at Stanford was attempting transplants." Barnard's second transplant operation was conducted on 2 January 1968, and the patient, Philip Blaiberg, survived for 19 months. Blaiberg's heart was donated by Clive Haupt, a 24-year-old black man who had suffered a stroke; the donation incited controversy (especially in the African-American press) during the time of South African apartheid. Dirk van Zyl, who received a new heart in 1971, was the longest-lived recipient, surviving over 23 years. Between December 1967 and November 1974 at Groote Schuur Hospital in Cape Town, South Africa, ten heart transplants were performed, as well as a heart and lung transplant in 1971. Of these ten patients, four lived longer than 18 months, with two of these four becoming long-term survivors. One patient, Dorothy Fischer, lived for over thirteen years and another for over twenty-four years. Full recovery of donor heart function often takes place over hours or days, during which time considerable damage can occur. Other patient deaths can result from preexisting conditions. For example, in pulmonary hypertension the patient's right ventricle has often adapted to the higher pressure over time and, although diseased and hypertrophied, is often capable of maintaining circulation to the lungs. Barnard devised the heterotopic (or "piggyback") transplant, in which the patient's diseased heart is left in place while the donor heart is added, essentially forming a "double heart". Barnard performed the first such heterotopic heart transplant in 1974. From November 1974 through December 1983, 49 consecutive heterotopic heart transplants on 43 patients were performed at Groote Schuur. The survival rate for patients at one year was over 60%, as compared to less than 40% with standard transplants, and the survival rate at five years was over 36% as compared to less than 20% with standard transplants. Many surgeons gave up cardiac transplantation because of poor results, often caused by rejection of the transplanted heart by the patient's immune system. Barnard persisted until the advent of cyclosporine, an effective immunosuppressive drug, which helped revive the operation throughout the world.
He also attempted xenotransplantation in two human patients, utilizing a baboon heart and chimpanzee heart, respectively. Public life Barnard was an outspoken opponent of South Africa's laws of apartheid, and was not afraid to criticise his nation's government, although he had to temper his remarks to some extent to travel abroad. Rather than leaving his homeland, he used his fame to campaign for a change in the law. Christiaan's brother, Marius Barnard, went into politics, and was elected to the legislature from the Progressive Federal Party. Barnard later stated that the reason he never won the
One of his four brothers, Abraham, was a "blue baby" who died of a heart problem at the age of three (Barnard would later guess that it was tetralogy of Fallot). The family also experienced the loss of a daughter who was stillborn and who had been the fraternal twin of Barnard's older brother Johannes, who was twelve years older than Chris. Barnard matriculated from the Beaufort West High School in 1940, and went to study medicine at the University of Cape Town Medical School, where he obtained his MB ChB in 1945. His father served as a missionary to mixed-race people. His mother, the former Maria Elisabeth de Swart, instilled in the surviving brothers the belief that they could do anything they set their minds to. Career Barnard did his internship and residency at the Groote Schuur Hospital in Cape Town, after which he worked as a general practitioner in Ceres, a rural town in the Cape Province. In 1951, he returned to Cape Town where he worked at the City Hospital as a Senior Resident Medical Officer, and in the Department of Medicine at Groote Schuur as a registrar. He completed his master's degree, receiving a Master of Medicine in 1953 from the University of Cape Town. In the same year he obtained a doctorate in medicine (MD) from the same university for a dissertation titled "The treatment of tuberculous meningitis". Soon after qualifying as a doctor, Barnard performed experiments on dogs while investigating intestinal atresia, a birth defect which allows life-threatening gaps to develop in the intestines. He followed a medical hunch that this was caused by inadequate blood flow to the fetus. After nine months and forty-three attempts, Barnard was able to reproduce this condition in a puppy fetus by tying off some of the blood supply to a puppy's intestines and then placing the animal back in the womb, after which it was born some two weeks later with the condition of intestinal atresia. He was also able to cure the condition by removing the piece of intestine with inadequate blood supply. The mistake of previous surgeons had been attempting to reconnect ends of intestine which themselves still had inadequate blood supply. To be successful, it was typically necessary to remove between 15 and 20 centimeters of intestine (6 to 8 inches). Jannie Louw used this innovation in a clinical setting, and Barnard's method saved the lives of ten babies in Cape Town. This technique was also adapted by surgeons in Britain and the US. In addition, Barnard analyzed 259 cases of tubercular meningitis. Owen Wangensteen at the University of Minnesota in the United States had been impressed by the work of Alan Thal, a young South African doctor working in Minnesota. Wangensteen asked Groote Schuur Head of Medicine John Brock if he might recommend any similarly talented South Africans, and Brock recommended Barnard. In December 1955, Barnard travelled to Minneapolis, Minnesota to begin a two-year scholarship under Chief of Surgery Wangensteen, who assigned Barnard more work on the intestines, which Barnard accepted even though he wanted to move on to something new. Simply by luck, whenever Barnard needed a break from this work, he could wander across the hall and talk with Vince Gott, who ran the lab for open-heart surgery pioneer Walt Lillehei. Gott had begun to develop a technique of running blood backwards through the veins of the heart so Lillehei could more easily operate on the aortic valve (McRae writes, "It was the type of inspired thinking that entranced Barnard").
couple to marry could include differences in social rank, an existing marriage and laws against bigamy, religious or professional prohibitions, or a lack of recognition by the appropriate authorities. The concubine in a concubinage tended to have a lower social status than the married party or home owner, and this was often the reason why concubinage was preferred to marriage. A concubine could be an "alien" in a society that did not recognize marriages between foreigners and citizens. Alternatively, they might be a slave, or a person from a poor family interested in a union with a man from the nobility. In other cases, some social groups were forbidden to marry, such as Roman soldiers, and concubinage served as a viable alternative to marriage. In polygynous situations, the number of concubines permitted within an individual concubinage arrangement has varied greatly. In Roman law, where monogamy was expected, the relationship was identical (and an alternative) to marriage except for the lack of marital affection from one or both of the parties, which conferred rights related to property, inheritance and social rank. By contrast, in parts of Asia and the Middle East, powerful men kept as many concubines as they could financially support. Some royal households had thousands of concubines. In such cases concubinage served as a status symbol and for the production of sons. In societies that accepted polygyny, there were advantages to having a concubine over a mistress, as children from a concubine were legitimate, while children from a mistress would be considered "bastards". Categorization Scholars have made attempts to categorize the various patterns of concubinage practiced in the world. The International Encyclopedia of Anthropology gives four distinct forms of concubinage: Royal concubinage, where politics was connected to reproduction. Concubines became consorts to the ruler, fostered diplomatic relations, and perpetuated the royal bloodline. Imperial concubines could be selected from the general population or from prisoners of war. Examples of this included imperial China, the Ottoman Empire and the Sultanate of Kano. Elite concubinage, which offered men the chance to increase their social status and satisfy their desires. Most such men already had a wife. In East Asia this practice was justified by Confucianism. In the Muslim world, this concubinage resembled slavery. Concubinage could be a form of common-law relationship that allowed a couple, who could not or did not wish to marry, to live together. This was prevalent in medieval Europe and colonial Asia. In Europe, some families discouraged younger sons from marriage to prevent division of family wealth among many heirs. Concubinage could also function as a form of sexual enslavement of women in a patriarchal system. In such cases the children of the concubine could become permanently inferior to the children of the wife. Examples include Mughal India and Choson Korea. Junius P. Rodriguez gives three cultural patterns of concubinage: Asian, Islamic and European. Antiquity Mesopotamia In Mesopotamia, it was customary for a sterile wife to give her husband a slave as a concubine to bear children. The status of such concubines was ambiguous; they normally could not be sold but they remained the slave of the wife. However, in the late Babylonian period, there are reports that concubines could be sold. Assyria Old Assyrian Period (20th–18th centuries BC) In general, marriage was monogamous.
"If after two or three years of marriage the wife had not given birth to any children, the husband was allowed to buy a slave (who could also be chosen by the wife) in order to produce heirs. This woman, however, remained a slave and never gained the status of a second wife." Middle Assyrian Period (14th–11th centuries BC) In the Middle Assyrian Period, the main wife (assatu) wore a veil in the street, as could a concubine (esirtu) if she were accompanying the main wife, or if she were married. "If a man veils his concubine in public, by declaring 'she is my wife,' this woman shall be his wife." It was illegal for unmarried women, prostitutes and slave women to wear a veil in the street. "The children of a concubine were lower in rank than the descendants of a wife, but they could inherit if the marriage of the latter remained childless." Ancient Egypt While most Ancient Egyptians were monogamous, a male pharaoh would have had other, lesser wives and concubines in addition to the Great Royal Wife. This arrangement would allow the pharaoh to enter into diplomatic marriages with the daughters of allies, as was the custom of ancient kings. Concubinage was a common occupation for women in ancient Egypt, especially for talented women. A request for forty concubines by Amenhotep III (c. 1386–1353 BC) to a man named Milkilu, Prince of Gezer states:"Behold, I have sent you Hanya, the commissioner of the archers, with merchandise in order to have beautiful concubines, i.e. weavers. Silver, gold, garments, all sort of precious stones, chairs of ebony, as well as all good things, worth 160 deben. In total: forty concubines—the price of every concubine is forty of silver. Therefore, send very beautiful concubines without blemish." — (Lewis, 146)Concubines would be kept in the pharaoh's harem. Amenhotep III kept his concubines in his palace at Malkata, which was one of the most opulent in the history of Egypt. The king was considered to be deserving of many women as long as he cared for his Great Royal Wife as well. Ancient Greece In Ancient Greece the practice of keeping a concubine ( pallakís) was common among the upper classes, and they were for the most part women who were slaves or foreigners, but occasional free born based on family arrangements (typically from poor families). Children produced by slaves remained slaves and those by non-slave concubines varied over time; sometimes they had the possibility of citizenship. The law prescribed that a man could kill another man caught attempting a relationship with his concubine. By the mid 4th century concubines could inherit property, but, like wives, they were treated as sexual property. While references to the sexual exploitation of maidservants appear in literature, it was considered disgraceful for a man to keep such women under the same roof as his wife. Apollodorus of Acharnae said that hetaera were concubines when they had a permanent relationship with a single man, but nonetheless used the two terms interchangeably. Ancient Rome A concubinatus (Latin for "concubinage" – see also concubina, "concubine", considered milder than paelex, and concubinus, "bridegroom") was an institution of quasi-marriage between Roman citizens who for various reasons did not want to enter into a full marriage. The institution was often found in unbalanced couples, where one of the members belonged to a higher social class or where one of the two was freed and the other one was freeborn. 
However it differed from a contubernium, where at least one of the partners was a slave. The relationship between a free citizen and a slave or between slaves was known as contubernium. The term describes a wide range of situations, from simple sexual slavery to quasi-marriage. For instance, according to Suetonius, Caenis, a slave and secretary of Antonia Minor, was Vespasian's wife "in all but name", until her death in AD 74. It was also not uncommon for slaves to create family-like unions, allowed but not protected by the law. The law allowed a slave-owner to free the slave and enter into a concubinatus or a regular marriage. Asia Concubinage was highly popular before the early 20th century all over East Asia. The main functions of concubinage for men was for pleasure and producing additional heirs, whereas for women the relationship could provide financial security. Children of concubines had lower rights in account to inheritance, which was regulated by the Dishu system. In places like China and the Muslim world, the concubine of a king could achieve power, especially if her son also became a monarch. China In China, successful men often had concubines until the practice was outlawed when the Chinese Communist Party came to power in 1949. The standard Chinese term translated as "concubine" was qiè , a term that has been used since ancient times. Concubinage resembled marriage in that concubines were recognized sexual partners of a man and were expected to bear children for him. Unofficial concubines () were of lower status, and their children were considered illegitimate. The English term concubine is also used for what the Chinese refer to as pínfēi (), or "consorts of emperors", an official position often carrying a very high rank. In premodern China it was illegal and socially disreputable for a man to have more than one wife at a time, but it was acceptable to have concubines. From the earliest times wealthy men purchased concubines and added them to their household in addition to their wife. The purchase of concubine was similar to the purchase of a slave, yet concubines had a higher social status. In the earliest records a man could have as many concubines as he could afford to purchase. From the Eastern Han period (AD 25–220) onward, the number of concubines a man could have was limited by law. The higher rank and the more noble identity a man possessed, the more concubines he was permitted to have. A concubine's treatment and situation was variable and was influenced by the social status of the male to whom she was attached, as well as the attitude of his wife. In the Book of Rites chapter on "The Pattern of the Family" () it says, "If there were betrothal rites, she became a wife; and if she went without these, a concubine." Wives brought a dowry to a relationship, but concubines did not. A concubinage relationship could be entered into without the ceremonies used in marriages, and neither remarriage nor a return to her natal home in widowhood were allowed to a concubine. The position of the concubine was generally inferior to that of the wife. Although a concubine could produce heirs, her children would be inferior in social status to a wife's children, although they were of higher status than illegitimate children. The child of a concubine had to show filial duty to two women, their biological mother and their legal mother—the wife of their father. 
After the death of a concubine, her sons would make an offering to her, but these offerings were not continued by the concubine's grandsons, who only made offerings to their grandfather's wife. There are early records of concubines allegedly being buried alive with their masters to "keep them company in the afterlife". Until the Song dynasty (960–1276), it was considered a serious breach of social ethics to promote a concubine to a wife. During the Qing dynasty (1644–1911), the status of concubines improved. It became permissible to promote a concubine to wife, if the original wife had died and the concubine was the mother of the only surviving sons. Moreover, the prohibition against forcing a widow to remarry was extended to widowed concubines. During this period tablets for concubine-mothers seem to have been more commonly placed in family ancestral altars, and genealogies of some lineages listed concubine-mothers. Many of the concubines of the emperor of the Qing dynasty were freeborn women from prominent families. Concubines of men of lower social status could be either freeborn or slave. Imperial concubines, kept by emperors in the Forbidden City, had different ranks and were traditionally guarded by eunuchs to ensure that they could not be impregnated by anyone but the emperor. In Ming China (1368–1644) there was an official system to select concubines for the emperor. The age of the candidates ranged mainly from 14 to 16. Virtues, behavior, character, appearance and body condition were the selection criteria. Despite the limitations imposed on Chinese concubines, there are several examples in history and literature of concubines who achieved great power and influence. Lady Yehenara, otherwise known as Empress Dowager Cixi, was arguably one of the most successful concubines in Chinese history. Cixi first entered the court as a concubine to Xianfeng Emperor and gave birth to his only surviving son, who later became Tongzhi Emperor. She eventually became the de facto ruler of Qing China for 47 years after her husband's death. An examination of concubinage features in one of the Four Great Classical Novels, Dream of the Red Chamber (believed to be a semi-autobiographical account of author Cao Xueqin's family life). Three generations of the Jia family are supported by one notable concubine of the emperor, Jia Yuanchun, the full elder sister of the male protagonist Jia Baoyu. In contrast, their younger half-siblings by concubine Zhao, Jia Tanchun and Jia Huan, develop distorted personalities because they are the children of a concubine. Emperors' concubines and harems are emphasized in 21st-century romantic novels written for female readers and set in ancient times. As a plot element, the children of concubines are depicted with a status much inferior to that in actual history. The zhai dou (,residential intrigue) and gong dou (,harem intrigue) genres show concubines and wives, as well as their children, scheming secretly to gain power. Empresses in the Palace, a gong dou type novel and TV drama, has had great success in 21st-century China. Hong Kong officially abolished the Great Qing Legal Code in 1971, thereby making concubinage illegal. Casino magnate Stanley Ho of Macau took his "second wife" as his official concubine in 1957, while his "third and fourth wives" retain no official status. Mongols Polygyny and concubinage were very common in Mongol society, especially for powerful Mongol men. 
Genghis Khan, Ögedei Khan, Jochi, Tolui, and Kublai Khan (among others) all had many wives and concubines. Genghis Khan frequently acquired wives and concubines from empires and societies that he had conquered, these women were often princesses or queens that were taken captive or gifted to him. Genghis Khan's most famous concubine was Möge Khatun, who, according to the Persian historian Ata-Malik Juvayni, was "given to Chinggis Khan by a chief of the Bakrin tribe, and he loved her very much." After Genghis Khan died, Möge Khatun became a wife of Ögedei Khan. Ögedei also favored her as a wife, and she frequently accompanied him on his hunting expeditions. Japan Before monogamy was legally imposed in the Meiji period, concubinage was common among the nobility. Its purpose was to ensure male heirs. For example, the son of an Imperial concubine often had a chance of becoming emperor. Yanagihara Naruko, a high-ranking concubine of Emperor Meiji, gave birth to Emperor Taishō, who was later legally adopted by Empress Haruko, Emperor Meiji's formal wife. Even among merchant families, concubinage was occasionally used to ensure heirs. Asako Hirooka, an entrepreneur who was the daughter of a concubine, worked hard to help her husband's family survive after the Meiji Restoration. She lost her fertility giving birth to her only daughter, Kameko; so her husband—with whom she got along well—took Asako's maid-servant as a concubine and fathered three daughters and a son with her. Kameko, as the child of the formal wife, married a noble man and matrilineally carried on the family name. A samurai could take concubines but their backgrounds were checked by higher-ranked samurai. In many cases, taking a concubine was akin to a marriage. Kidnapping a concubine, although common in fiction, would have been shameful, if not criminal. If the concubine was a commoner, a messenger was sent with betrothal money or a note for exemption of tax to ask for her parents' acceptance. Even though the woman would not be a legal wife, a situation normally considered a demotion, many wealthy merchants believed that being the concubine of a samurai was superior to being the legal wife of a commoner. When a merchant's daughter married a samurai, her family's money erased the samurai's debts, and the samurai's social status improved the standing of the merchant family. If a samurai's
commoner concubine gave birth to a son, the son could inherit his father's social status. Concubines sometimes wielded significant influence. Nene, wife of Toyotomi Hideyoshi, was known to overrule her husband's decisions at times, and Yodo-dono, his concubine, became the de facto master of Osaka Castle and the Toyotomi clan after Hideyoshi's death. Korea Joseon monarchs had a harem which contained concubines of different ranks. Empress Myeongseong managed to have sons, preventing sons of concubines from gaining power. Children of concubines often had lower standing when it came to marriage. A daughter of a concubine could not marry a wife-born son of the same class. For example, Jang Nok-su was a concubine-born daughter of a mayor, who was initially married to a slave-servant, and later became a high-ranking concubine of Yeonsangun. The Joseon dynasty, established in 1392, debated whether the children of a free parent and a slave parent should be considered free or slave. The child of a scholar-official father and a slave-concubine mother was always free, although the child could not occupy government positions. India In Hindu India, concubinage could be practiced with women with whom marriage was undesirable, such as a woman from a lower caste or a Muslim woman. Children born of concubinage followed the caste categorization of the mother. In medieval Rajasthan, the ruling Rajput family often had certain women called paswan, khawaas or pardayat. These women were kept by the ruler if their beauty had impressed him, but without formal marriage. Sometimes they were given rights to the income collected from a particular village, as queens were. Their children were socially accepted but did not receive a share in the ruling family's property and married others of the same status as them. Concubinage was practiced in elite Rajput households between the 16th and 20th centuries. Female slave-servants or slave-performers could be elevated to the rank of concubine (called khavas or pavas) if a ruler found them attractive. The entry into concubinage was marked by a ritual, which, however, differed from the rituals marking marriage. Rajputs did not take concubines from the lower castes, and also refrained from taking Brahmin and Rajput women. There are instances of wives eloping with their Rajput lovers and becoming their concubines. One such event is the elopement of Anara and Maharaja Gaj Singh. Anara was a wife of a Nawab, while her lover was the Maharaja of Marwar. The Nawab accepted the fate of his wife and did not try to get her back. Europe Vikings Polygyny was common among Vikings, and rich and powerful Viking men tended to have many wives and concubines. Viking men would often buy or capture women and make them into their wives or concubines. Concubinage for Vikings was connected to slavery; the Vikings took both free women and slaves as concubines. Researchers have suggested that Vikings may have originally started sailing and raiding due to a need to seek out women from foreign lands.
Polygynous relationships in Viking society may have led to a shortage of eligible women for the average male; polygyny increases male-male competition in society because it creates a pool of unmarried men willing to engage in risky status-elevating and sex-seeking behaviors. Thus, the average Viking man could have been forced to perform riskier actions to gain wealth and power to be able to find suitable women. The concept was expressed in the 11th century by historian Dudo of Saint-Quentin in his semi imaginary History of The Normans. The Annals of Ulster depicts raptio and states that in 821 the Vikings plundered an Irish village and "carried off a great number of women into captivity". Early Christianity and Feudalism The Christian morals developed by Patristic writers largely promoted marriage as the only form of union between men and women. Both Saint Augustine and Saint Jerome strongly condemned the institution of concubinage. In parallel though, the late imperial Roman law improved the rights of the classical Roman concubinatus, reaching the point, with the Corpus Iuris Civilis by Justinian, of extending inheritance laws to these unions. The two views, Christian condemnation and secular continuity with the Roman legal system, continued to be in conflict throughout the entire Middle Age, until in the 14th and 15th centuries the Church outlawed concubinage in the territories under its control. Middle East In the Medieval Muslim Arab world, "concubine" (surriyya) referred to the female slave (jāriya), whether Muslim or non-Muslim, with whom her master engages in sexual intercourse in addition to household or other services. Such relationships were common in pre-Islamic Arabia and other pre-existing cultures of the wider region. Islam introduced legal restrictions and discipline to the concubinage and encouraged manumission. Islam furthermore endorsed educating, freeing or marrying female slaves if they embrace Islam abandoning polytheism or infidelity. In verse 23:6 in the Quran it is allowed to have sexual intercourse with concubines only after harmonizing rapport and relation with them. Children of concubines are generally declared as legitimate with or without wedlock, and the mother of a free child was considered free upon the death of the male partner. There is evidence that concubines had a higher rank than female slaves. Abu Hanifa and others argued for modesty-like practices for the concubine, recommending that the concubine be established in the home and their chastity be protected and not to misuse them for sale or sharing with friends or kins. While scholars exhorted masters to treat their slaves equally, a master was allowed to show favoritism towards a concubine. Some scholars recommended holding a wedding banquet (walima) to celebrate the concubinage relationship; however, this is not required in teachings of Islam and is rather the self-preferred opinions of certain non-liberal Islamic scholars. Even the Arabic term for concubine surriyya may have been derived from sarat meaning "eminence", indicating the concubine's higher status over other female slaves. The Qur'an does not use word "surriyya", but instead uses the expression "Ma malakat aymanukum" (that which your right hands own), which occurs 15 times in the book. Sayyid Abul Ala Maududi explains that "two categories of women have been excluded from the general command of guarding the private parts: (a) wives, (b) women who are legally in one's possession". 
Some contend that concubinage was a pre-Islamic custom that was allowed to be practiced under Islam, with Jews and non-Muslim people to marry a concubine after teaching her, instructing her well and then giving her freedom. Others contend that concubines in Islam remained in use until the 19th century. In the traditions of the Abrahamic religions, Abraham had a concubine named Hagar, who was originally a slave of his wife Sarah. The story of Hagar would affect how concubinage was perceived in early Islamic history. Sikainiga writes that one rationale for concubinage in Islam was that "it satisfied the sexual desire of the female slaves and thereby prevented the spread of immorality in the Muslim community." Most Islamic schools of thought restricted concubinage to a relationship where the female slave was required to be monogamous to her master, (though the master's monogamy to her is not required), but according to Sikainga, in reality this was not always practiced and female slaves were targeted by other men of the master's household. These opinions of Sikaingia are controversial and contested. In ancient times, two sources for concubines were permitted under an Islamic regime. Primarily, non-Muslim women taken as prisoners of war were made concubines as happened after the Battle of the Trench, or in numerous later Caliphates. It was encouraged to manumit slave women who rejected their initial faith and embraced Islam, or to bring them into formal marriage. The expansion of various Muslim dynasties resulted in acquisitions of concubines, through purchase, gifts from other rulers, and captives of war. To have a large number of concubines became a symbol of status. Almost all Abbasid caliphs were born to concubines. Several Twelver Shia imams were also born to concubines. Similarly, the sultans of the Ottoman empire were often the son of a concubine. As a result concubines came to exercise a degree of influence over Ottoman politics. Many concubines developed social networks, and accumulated personal wealth, both of which allowed them to rise on social status. The practice declined with the abolition of slavery, starting in the 19th century. Ottoman sultans appeared to have preferred concubinage to marriage, and for a time all royal children were born of concubines. The consorts of Ottoman sultans were often neither Turkish, nor Muslim by birth. Leslie Peirce argues that this was because a concubine would not have the political leverage that would be possessed by a princess or a daughter of the local elite. Ottoman sultans also appeared to have only one son with each concubine; that is once a concubine gave birth to a son, the sultan would no longer have intercourse with her. This limited the power of each son. New World When slavery became institutionalized in Colonial America, white men, whether or not they were married, sometimes took enslaved women as concubines; children of such unions remained slaves. In the various European colonies in the Caribbean, white planters took black and mulatto concubines, owing to the shortage of white women. The children of such unions were sometimes freed from slavery and even inherited from their father, though this was not the case for the majority of children born of such unions. These relationships appeared to have been socially accepted in the colony of Jamaica and even attracted European emigrants to the island. Brazil In colonial Brazil, men were expected to marry women who were equal to them in status and wealth. 
Alternatively, some men practiced concubinage, an extra-marital sexual relationship. This sort of relationship was condemned by the Catholic Church, and the Council of Trent threatened those who engaged in it with excommunication. Concubines included both female slaves and former slaves. One reason for taking non-white women as concubines was that free white men outnumbered free white women, although marriage between races was not illegal. United States Relationships with slaves in the United States and the Confederacy were sometimes euphemistically referred to as concubinary. Whether lifelong or limited to single or serial sexual visitations, these relationships with un-freed slaves illustrate a radical power imbalance between a human being owned as chattel and the legal owner of that person; they are now defined, without regard for claims of sexual attraction or affection by either party, as rape. This is because, when personal ownership of slaves was enshrined in law, an enslaved person had no legal power over their own personhood, control of which was held by another party; therefore, a slave could never give real and legal consent in any aspect of their life. The inability to give any kind of consent while enslaved stemmed in part from the slave master's ability to legally coerce acts and declarations, including those of affection, attraction, and consent, through rewards and punishments. More fundamentally, chattel slavery as defined and enforced in the law of the United States and the Confederate States placed ownership of a slave's legal personhood in the master, meaning that the proxy for legal consent lay with the slave's master, who was the sole source of legal consent to the bodily integrity and all efforts of that slave, except as regulated or limited by law. With slavery now recognized as a crime against humanity in United States law, as well as in customary international law, the legal basis of slavery is repudiated for all time, and with it any right that owner-rapists had had to exercise proxy sexual or other consent for their slaves. Free men in the United States sometimes took female slaves in relationships which they referred to as concubinage, although marriage between the races was prohibited by law in the colonies and the later United States. Many colonies and states also had laws against miscegenation or any interracial relations. From 1662 the Colony of Virginia, followed by others, incorporated into law the principle that children took their mother's status, i.e., the principle of partus sequitur ventrem. This led to generations of multiracial slaves, some of whom were otherwise considered legally white (one-eighth or less African, equivalent to a great-grandparent) before the American Civil War. In some cases, men had long-term relationships with enslaved women, giving them and their mixed-race children freedom and providing their children with apprenticeships, education and transfers of capital. The relationship between Thomas Jefferson and Sally Hemings is an example of this. Such arrangements were more prevalent in the American South during the antebellum period. Plaçage In Louisiana and former French territories, a formalized system of concubinage called plaçage developed. European men took enslaved or free women of color as mistresses after making arrangements to give them a dowry, house or other transfer of property, and sometimes, if they were enslaved, offering freedom and education for their children.
A third class of free people of color developed, especially in New Orleans. Many became educated, artisans and property owners. French-speaking and practicing Catholicism, these women combined French and African-American culture and created an elite between those of European descent and the slaves. Today, descendants of the free people of color are generally called Louisiana Creole people. In Judaism In Judaism, a concubine is a marital companion of inferior status to a wife. Among the Israelites, men commonly acknowledged their concubines, and such women enjoyed the same rights in the house as legitimate wives. Ancient Judaism The term concubine did not necessarily refer to women after the first wife. A man could have many wives and concubines. Legally, any children born to a concubine were considered to be the children of the wife she was under. Sarah had to get Ishmael (son of Hagar) out of her house because, legally, Ishmael would always be the first-born son even though Isaac was her natural child. The concubine may not have commanded the exact amount of respect as the wife. In the Levitical rules on sexual relations, the Hebrew word that is commonly translated as "wife" is distinct from the Hebrew word that means "concubine". However, on at least one other occasion the term is used to refer to a woman who is not a wife specifically, the handmaiden of Jacob's wife. In the Levitical code, sexual intercourse between a man and a wife of a different man was forbidden and punishable by death for both persons involved. Since it was regarded as the highest blessing to have many children, wives often gave their maids to their husbands if they were barren, as in the cases of Sarah and Hagar, and Rachel and Bilhah. The children of the concubine often had equal rights with those of the wife; for example, King Abimelech was the son of Gideon and his concubine. Later biblical figures, such as Gideon and Solomon, had concubines in addition to many childbearing wives. For example, the Books of Kings say that Solomon had 700 wives and 300 concubines. The account of the unnamed Levite in Judges 19–20 shows that the taking of concubines was not the exclusive preserve of kings or patriarchs in Israel during the time of the Judges, and that the rape of a concubine was completely unacceptable to the Israelite nation and led to a civil war. In the story, the Levite appears to be an ordinary member of the tribe, whose concubine was a woman from Bethlehem in Judah. This woman was unfaithful, and eventually abandoned him to return to her paternal household. However, after four months, the Levite, referred to as her husband, decided to travel to her father's house to persuade his concubine to return. She is amenable to
Ford, was held on 9 April 1992. Design Central Plaza is made up of two principal components: a free-standing office tower and a podium block attached to it. The tower is made up of three sections: a tower base forming the main entrance and public circulation spaces; a tower body containing 57 office floors, a sky lobby and five mechanical plant floors; and a tower top consisting of six mechanical plant floors and a tower mast. The ground-level public area, along with the public sitting-out area, forms a landscaped garden with a fountain, trees and artificial stone paving. No commercial element is included in the podium. The first level is a public thoroughfare for three pedestrian bridges linking the Mass Transit Railway, the Convention and Exhibition Centre and the China Resource Building. By turning these spaces over to public use, the building gained a 20% bonus in plot ratio. The tower plan is not truly triangular: its three corners are cut off to provide better internal office space. Central Plaza was designed by the Hong Kong architectural firm Ng Chun Man and Associates and engineered by Arup. The main contractor was a joint venture, comprising the contracting firms Sanfield (a subsidiary of Sun Hung Kai) and Tat Lee, called Manloze Ltd. Design constraints Triangular-shaped floor plan The building was designed to be triangular in plan because this would allow 20% more of the office area to enjoy the harbour view as compared with a square or rectangular building. From an architectural point of view, this arrangement provides better floor area utilisation, offering an internal column-free office area with a clear depth of and an overall usable floor area efficiency of 81%. Nonetheless, the triangular building plan causes the air handling unit (AHU) room in the internal core to also assume a triangular configuration. With only limited space, the adoption of a standard AHU is not feasible. Furthermore, all the air-conditioning ducting, electrical trunking and piping gathered inside the core area have to be squeezed into a very narrow and congested corridor ceiling void. Super high-rise building As the building is situated opposite the Hong Kong Convention and Exhibition Centre, the only way to gain more sea views without being obstructed by the neighbouring high-rise buildings is to build it tall enough. However, a tall building brings many difficulties to structural and building services design, for example excessive static pressure in water systems, high line voltage drop and long vertical transportation distances. All these problems can increase the capital cost of the building systems and impair the safe operation of the building. Maximum clear ceiling height
As a general practice, for achieving a clear height of , a floor-to-floor height of would be required. However, because of the high wind load in Hong Kong on such a super high-rise building, every additional metre of building height would increase the structural cost by more than HK$1 million (HK$304,800 per ft). Therefore, a comprehensive study was conducted and finally a floor height of was adopted. From this measure alone, the estimated construction cost saving over the 58 office floors was around HK$30 million (a rough consistency check of this figure is sketched below). Yet at the same time, a maximum ceiling height of in the office areas could still be achieved with careful coordination and dedicated integration. Structural constraints The site is on newly reclaimed land, with the water table rising to a maximum of about below ground level. The original brief required a 6-storey basement, so a diaphragm wall design was adopted. The keyword for this project was time: with only a limited brief, the structural engineer needed to start work. The diaphragm wall design allowed the basement to be constructed by the top-down method, which lets the superstructure be constructed at the same time as the basement, thereby removing the time-consuming basement construction from the critical path. Wind loading is another major design criterion in Hong Kong, which lies in an area affected by typhoons. Not only must the structure resist the overall loads, and the cladding system and its fixings resist higher local loads, but the building must also perform acceptably in dynamic terms, such that predicted movements lie within accepted standards of occupant comfort. To ensure that all aspects of the building's performance in strong winds would be acceptable, a detailed wind tunnel study was carried out by Professor Alan Davenport at the Boundary Layer Wind Tunnel Laboratory at the University of Western Ontario. Steel structure vs reinforced concrete Steel structures are more commonly adopted in high-rise buildings. In the original scheme, an externally cross-braced framed tube was proposed, with primary and secondary beams carrying metal decking and a reinforced concrete slab. The core was also of steelwork, designed to carry vertical load only. Later, after a financial review, the developer decided to reduce the height of the superstructure by increasing the size of the floor plate, so as to reduce the complex architectural requirements of the tower base, which meant that a high-strength concrete solution became possible. In the final scheme, columns at centres and floor edge beams were used to replace the large steel corner columns. Because climbing-form and table-form construction methods and efficient construction management were used on this project, the reinforced concrete structure took no longer to construct than the steel structure would have. Most attractively, the reinforced concrete scheme could save HK$230 million compared with the steel structure. Hence the reinforced concrete structure was adopted, and Central Plaza is now one of the tallest reinforced concrete buildings in the world. In the reinforced concrete scheme, the core has a similar arrangement to the steel scheme, and the wind shear is taken out of the core at the lowest basement level and transferred to the perimeter diaphragm walls.
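The quoted cost figures lend themselves to a quick back-of-the-envelope check. Because the exact floor-to-floor heights are omitted in the text above, the per-floor reduction below is merely inferred from the quoted totals (more than HK$1 million per metre of height, 58 office floors, roughly HK$30 million saved); treat it as an illustrative sketch rather than published design data.

```python
# Back-of-the-envelope check of the floor-height saving quoted above.
# All inputs are the figures quoted in the paragraph; the derived
# per-floor reduction is an inference, not a published design value.

M_PER_FT = 0.3048

cost_per_metre_hkd = 1_000_000   # structural cost per extra metre of height (lower bound quoted)
office_floors = 58               # number of office floors quoted
total_saving_hkd = 30_000_000    # estimated overall saving quoted

# The per-foot figure quoted in the text follows directly from the per-metre rate.
cost_per_foot_hkd = cost_per_metre_hkd * M_PER_FT            # ~HK$304,800 per ft

# Implied overall height reduction, and the reduction per office floor.
implied_height_reduction_m = total_saving_hkd / cost_per_metre_hkd          # ~30 m
implied_reduction_per_floor_m = implied_height_reduction_m / office_floors  # ~0.52 m

print(f"Cost per foot: HK${cost_per_foot_hkd:,.0f}")
print(f"Implied total height reduction: {implied_height_reduction_m:.0f} m")
print(f"Implied floor-to-floor reduction: {implied_reduction_per_floor_m:.2f} m per floor")
```

In other words, a reduction of roughly half a metre per office floor is consistent with the quoted HK$30 million saving.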
To reduce large shear reversals in the core walls in the basement and at the top of the tower base level, the floor slabs and beams at the ground floor, basement levels 1 and 2, and the 5th and 6th floors are separated horizontally from the core walls. Another advantage of the reinforced concrete structure is that, with the table form system, it can more flexibly accommodate changes in structural layout, member sizes and height to suit site conditions. Trivia This skyscraper was visited in the seventh leg of the reality TV show The Amazing Race 2, which described Central Plaza as "the tallest building in Hong Kong". Although contestants were told to reach the top floor, the actual task was performed on the 46th floor. See also List of tallest buildings in Hong Kong List of buildings and structures in Hong Kong List of tallest freestanding structures References External links Architectural study of the building Hong Kong's skyscrapers in comparison Central Plaza Elevator Layout Office buildings completed in 1992 Skyscraper office buildings in Hong Kong Sun Hung Kai Properties Wan Chai North
His influence can be seen directly or indirectly in the work of Peter Paul Rubens, Jusepe de Ribera, Gian Lorenzo Bernini, and Rembrandt. Artists heavily under his influence were called the "Caravaggisti" (or "Caravagesques"), as well as tenebrists or tenebrosi ("shadowists"). Caravaggio trained as a painter in Milan before moving to Rome when he was in his twenties. He developed a considerable name as an artist, and as a violent, touchy and provocative man. A brawl led to a death sentence for murder and forced him to flee to Naples. There he again established himself as one of the most prominent Italian painters of his generation. He traveled in 1607 to Malta and on to Sicily, and pursued a papal pardon for his sentence. In 1609 he returned to Naples, where he was involved in a violent clash; his face was disfigured and rumours of his death circulated. Questions about his mental state arose from his erratic and bizarre behavior. He died in 1610 under uncertain circumstances while on his way from Naples to Rome. Reports stated that he died of a fever, but suggestions have been made that he was murdered or that he died of lead poisoning. Caravaggio's innovations inspired Baroque painting, but the latter incorporated the drama of his chiaroscuro without the psychological realism. The style evolved and fashions changed, and Caravaggio fell out of favour. In the 20th century interest in his work revived, and his importance to the development of Western art was reevaluated. The 20th-century art historian stated: "What begins in the work of Caravaggio is, quite simply, modern painting." Biography Early life (1571–1592) Caravaggio (Michelangelo Merisi or Amerighi) was born in Milan, where his father, Fermo (Fermo Merixio), was a household administrator and architect-decorator to the Marchese of Caravaggio, a town 35 km to the east of Milan and south of Bergamo. In 1576 the family moved to Caravaggio (Caravaggius) to escape a plague that ravaged Milan, and Caravaggio's father and grandfather both died there on the same day in 1577. It is assumed that the artist grew up in Caravaggio, but his family kept up connections with the Sforzas and the powerful Colonna family, who were allied by marriage with the Sforzas and destined to play a major role later in Caravaggio's life. Caravaggio's mother died in 1584, the same year he began his four-year apprenticeship to the Milanese painter Simone Peterzano, described in the contract of apprenticeship as a pupil of Titian. Caravaggio appears to have stayed in the Milan-Caravaggio area after his apprenticeship ended, but it is possible that he visited Venice and saw the works of Giorgione, whom Federico Zuccari later accused him of imitating, and Titian. He would also have become familiar with the art treasures of Milan, including Leonardo da Vinci's Last Supper, and with the regional Lombard art, a style that valued simplicity and attention to naturalistic detail and was closer to the naturalism of Germany than to the stylised formality and grandeur of Roman Mannerism. Beginnings in Rome (1592/95–1600) Following his initial training under Simone Peterzano, in 1592 Caravaggio left Milan for Rome, in flight after "certain quarrels" and the wounding of a police officer. The young artist arrived in Rome "naked and extremely needy... without fixed address and without provision... short of money." During this period he stayed with the miserly Pandolfo Pucci, known as "monsignor Insalata". 
A few months later he was performing hack-work for the highly successful Giuseppe Cesari, Pope Clement VIII's favourite artist, "painting flowers and fruit" in his factory-like workshop. In Rome there was demand for paintings to fill the many huge new churches and palazzi being built at the time. It was also a period when the Church was searching for a stylistic alternative to Mannerism in religious art that was tasked to counter the threat of Protestantism. Caravaggio's innovation was a radical naturalism that combined close physical observation with a dramatic, even theatrical, use of chiaroscuro that came to be known as tenebrism (the shift from light to dark with little intermediate value). Known works from this period include a small Boy Peeling a Fruit (his earliest known painting), a Boy with a Basket of Fruit, and the Young Sick Bacchus, supposedly a self-portrait done during convalescence from a serious illness that ended his employment with Cesari. All three demonstrate the physical particularity for which Caravaggio was to become renowned: the fruit-basket-boy's produce has been analysed by a professor of horticulture, who was able to identify individual cultivars right down to "...a large fig leaf with a prominent fungal scorch lesion resembling anthracnose (Glomerella cingulata)." Caravaggio left Cesari, determined to make his own way after a heated argument. At this point he forged some extremely important friendships, with the painter Prospero Orsi, the architect Onorio Longhi, and the sixteen-year-old Sicilian artist Mario Minniti. Orsi, established in the profession, introduced him to influential collectors; Longhi, more balefully, introduced him to the world of Roman street-brawls. Minniti served Caravaggio as a model and, years later, would be instrumental in helping him to obtain important commissions in Sicily. Ostensibly, the first archival reference to Caravaggio in a contemporary document from Rome is the listing of his name, with that of Prospero Orsi as his partner, as an 'assistante' in a procession in October 1594 in honour of St. Luke. The earliest informative account of his life in the city is a court transcript dated 11 July 1597, when Caravaggio and Prospero Orsi were witnesses to a crime near San Luigi de' Francesi. An early published notice on Caravaggio, dating from 1604 and describing his lifestyle three years previously, recounts that "after a fortnight's work he will swagger about for a month or two with a sword at his side and a servant following him, from one ball-court to the next, ever ready to engage in a fight or an argument, so that it is most awkward to get along with him." In 1606 he killed a young man in a brawl, possibly unintentionally, and fled from Rome with a death sentence hanging over him. The Fortune Teller, his first composition with more than one figure, shows a boy, likely Minniti, having his palm read by a gypsy girl, who is stealthily removing his ring as she strokes his hand. The theme was quite new for Rome, and proved immensely influential over the next century and beyond. However, at the time, Caravaggio sold it for practically nothing. The Cardsharps—showing another naïve youth of privilege falling the victim of card cheats—is even more psychologically complex, and perhaps Caravaggio's first true masterpiece. Like The Fortune Teller, it was immensely popular, and over 50 copies survive. More importantly, it attracted the patronage of Cardinal Francesco Maria del Monte, one of the leading connoisseurs in Rome. 
For Del Monte and his wealthy art-loving circle, Caravaggio executed a number of intimate chamber-pieces—The Musicians, The Lute Player, a tipsy Bacchus, an allegorical but realistic Boy Bitten by a Lizard—featuring Minniti and other adolescent models. Caravaggio's first paintings on religious themes returned to realism, and the emergence of remarkable spirituality. The first of these was the Penitent Magdalene, showing Mary Magdalene at the moment when she has turned from her life as a courtesan and sits weeping on the floor, her jewels scattered around her. "It seemed not a religious painting at all ... a girl sitting on a low wooden stool drying her hair ... Where was the repentance ... suffering ... promise of salvation?" It was understated, in the Lombard manner, not histrionic in the Roman manner of the time. It was followed by others in the same style: Saint Catherine; Martha and Mary Magdalene; Judith Beheading Holofernes; a Sacrifice of Isaac; a Saint Francis of Assisi in Ecstasy; and a Rest on the Flight into Egypt. These works, while viewed by a comparatively limited circle, increased Caravaggio's fame with both connoisseurs and his fellow artists. But a true reputation would depend on public commissions, and for these it was necessary to look to the Church. Already evident was the intense realism or naturalism for which Caravaggio is now famous. He preferred to paint his subjects as the eye sees them, with all their natural flaws and defects instead of as idealised creations. This allowed a full display of his virtuosic talents. This shift from accepted standard practice and the classical idealism of Michelangelo was very controversial at the time. Caravaggio also dispensed with the lengthy preparations traditional in central Italy at the time. Instead, he preferred the Venetian practice of working in oils directly from the subject—half-length figures and still life. Supper at Emmaus, from c. 1600–1601, is a characteristic work of this period demonstrating his virtuoso talent. "Most famous painter in Rome" (1600–1606) In 1599, presumably through the influence of Del Monte, Caravaggio was contracted to decorate the Contarelli Chapel in the church of San Luigi dei Francesi. The two works making up the commission, The Martyrdom of Saint Matthew and The Calling of Saint Matthew, delivered in 1600, were an immediate sensation. Thereafter he never lacked commissions or patrons. Caravaggio's tenebrism (a heightened chiaroscuro) brought high drama to his subjects, while his acutely observed realism brought a new level of emotional intensity. Opinion among his artist peers was polarised. Some denounced him for various perceived failings, notably his insistence on painting from life, without drawings, but for the most part he was hailed as a great artistic visionary: "The painters then in Rome were greatly taken by this novelty, and the young ones particularly gathered around him, praised him as the unique imitator of nature, and looked on his work as miracles." Caravaggio went on to secure a string of prestigious commissions for religious works featuring violent struggles, grotesque decapitations, torture and death. Most notable and technically masterful among them was The Taking of Christ (circa 1602) for the Mattei family, only rediscovered in the early 1990s, in Ireland, after two centuries unrecognised. 
For the most part each new painting increased his fame, but a few were rejected by the various bodies for whom they were intended, at least in their original forms, and had to be re-painted or found new buyers. The essence of the problem was that while Caravaggio's dramatic intensity was appreciated, his realism was seen by some as unacceptably vulgar. His first version of Saint Matthew and the Angel, featuring the saint as a bald peasant with dirty legs attended by a lightly clad over-familiar boy-angel, was rejected and a second version had to be painted as The Inspiration of Saint Matthew. Similarly, The Conversion of Saint Paul was rejected, and while another version of the same subject, the Conversion on the Way to Damascus, was accepted, it featured the saint's horse's haunches far more prominently than the saint himself, prompting this exchange between the artist and an exasperated official of Santa Maria del Popolo: "Why have you put a horse in the middle, and Saint Paul on the ground?" "Because!" "Is the horse God?" "No, but he stands in God's light!" Other works included Entombment, the Madonna di Loreto (Madonna of the Pilgrims), the Grooms' Madonna, and the Death of the Virgin. The history of these last two paintings illustrates the reception given to some of Caravaggio's art, and the times in which he lived. The Grooms' Madonna, also known as Madonna dei palafrenieri, painted for a small altar in Saint Peter's Basilica in Rome, remained there for just two days, and was then taken off. A cardinal's secretary wrote: "In this painting there are but vulgarity, sacrilege, impiousness and disgust...One would say it is a work made by a painter that can paint well, but of a dark spirit, and who has been for a lot of time far from God, from His adoration, and from any good thought..." The Death of the Virgin, commissioned in 1601 by a wealthy jurist for his private chapel in the new Carmelite church of Santa Maria della Scala, was rejected by the Carmelites in 1606. Caravaggio's contemporary Giulio Mancini records that it was rejected because Caravaggio had used a well-known prostitute as his model for the Virgin. Giovanni Baglione, another contemporary, tells that it was due to Mary's bare legs—a matter of decorum in either case. Caravaggio scholar John Gash suggests that the problem for the Carmelites may have been theological rather than aesthetic, in that Caravaggio's version fails to assert the doctrine of the Assumption of Mary, the idea that the Mother of God did not die in any ordinary sense but was assumed into Heaven. The replacement altarpiece commissioned (from one of Caravaggio's most able followers, Carlo Saraceni), showed the Virgin not dead, as Caravaggio had painted her, but seated and dying; and even this was rejected, and replaced with a work showing the Virgin not dying, but ascending into Heaven with choirs of angels. In any case, the rejection did not mean that Caravaggio or his paintings were out of favour. The Death of the Virgin was no sooner taken out of the church than it was purchased by the Duke of Mantua, on the advice of Rubens, and later acquired by Charles I of England before entering the French royal collection in 1671. One secular piece from these years is Amor Vincit Omnia, in English also called Amor Victorious, painted in 1602 for Vincenzo Giustiniani, a member of Del Monte's circle. The model was named in a memoir of the early 17th century as "Cecco", the diminutive for Francesco. 
He is possibly Francesco Boneri, identified with an artist active in the period 1610–1625 and known as Cecco del Caravaggio ('Caravaggio's Cecco'), carrying a bow and arrows and trampling symbols of the warlike and peaceful arts and sciences underfoot. He is unclothed, and it is difficult to accept this grinning urchin as the Roman god Cupid—as difficult as it was to accept Caravaggio's other semi-clad adolescents as the various angels he painted in his canvases, wearing much the same stage-prop wings. The point, however, is the intense yet ambiguous reality of the work: it is simultaneously Cupid and Cecco, as Caravaggio's Virgins were simultaneously the Mother of Christ and the Roman courtesans who modeled for them. Legal Problems and Flight from Rome (1606) Caravaggio led a tumultuous life. He was notorious for brawling, even in a time and place when such behavior was commonplace, and the transcripts of his police records and trial proceedings fill many pages. Bellori claims that around 1590–1592, Caravaggio, already well known for brawling with gangs of young men, committed a murder which forced him to flee from Milan, first to Venice and then to Rome. On 28 November 1600, while living at the Palazzo Madama with his patron Cardinal Del Monte, Caravaggio beat nobleman Girolamo Stampa da Montepulciano, a guest of the cardinal, with a club, resulting in an official complaint to the police. Episodes of brawling, violence, and tumult grew more and more frequent. Caravaggio was often arrested and jailed at Tor di Nona. After his release from jail in 1601, Caravaggio returned to paint first The Taking of Christ and then Amor Vincit Omnia. In 1603, he was arrested again, this time for the defamation of another painter, Giovanni Baglione, who sued Caravaggio and his followers Orazio Gentileschi and Onorio Longhi for writing offensive poems about him. The French ambassador intervened, and Caravaggio was transferred to house arrest after a month in jail in Tor di Nona. Between May and October 1604, Caravaggio was arrested several times for possession of illegal weapons and for insulting the city guards. He was also sued by a tavern waiter for having thrown a plate of artichokes in his face. In 1605, Caravaggio was forced to flee to Genoa for three weeks after seriously injuring Mariano Pasqualone di Accumoli, a notary, in a dispute over Lena, Caravaggio's model and lover. The notary reported having been attacked on 29 July with a sword, causing a severe head injury. Caravaggio's patrons intervened and managed to cover up the incident. Upon his return to Rome, Caravaggio was sued by his landlady Prudenzia Bruni for not having paid his rent. Out of spite, Caravaggio threw rocks through her window at night and was sued again. In November, Caravaggio was hospitalized for an injury which he claimed he had caused himself by falling on his own sword. Caravaggio's gravest problem began on 29 May 1606, when he killed Ranuccio Tommasoni, a gangster from a wealthy family, in a duel with swords at Campo Marzio. The two had argued many times, often ending in blows. The circumstances are unclear and the killing may have been unintentional. Many rumors circulated at the time as to the cause of the duel. Several contemporary avvisi referred to a quarrel over a gambling debt and a pallacorda game, a sort of tennis; and this explanation has become established in the popular imagination. 
Other rumors, however, claimed that the duel stemmed from jealousy over Fillide Melandroni, a well known Roman prostitute who had modeled for him in several important paintings; Tommasoni was her pimp. According to such rumors, Caravaggio castrated Tommasoni with his sword before deliberately killing him, with other versions claiming that Tommasoni's death was caused accidentally during the castration. The duel may have had a political dimension, as Tommasoni's family was notoriously pro-Spanish, while Caravaggio was a client of the French ambassador. Caravaggio's patrons had hitherto been able to shield him from any serious consequences of his frequent duels and brawling, but Tommasoni's wealthy family was outraged by his death and demanded justice. Caravaggio's patrons were unable to protect him. Caravaggio was sentenced to beheading for murder, and an open bounty was decreed enabling anyone who recognized him to legally carry the sentence out. Caravaggio's paintings began to obsessively depict severed heads, often his own, at this time. Caravaggio was forced to flee Rome. He moved just south of the city, then to Naples, Malta, and Sicily. Good modern accounts are to be found in Peter Robb's M and Helen Langdon's Caravaggio: A Life. A theory relating the death to Renaissance notions of honour and symbolic wounding has been advanced by art historian Andrew Graham-Dixon. Whatever the details, it was a serious matter. Previously, his high-placed patrons had protected him from the consequences of his escapades, but this time they could do nothing. Caravaggio, outlawed, fled to Naples. Exile and death (1606–1610) Naples Following the death of Tomassoni, Caravaggio fled first to the estates of the Colonna family south of Rome, then on to Naples, where Costanza Colonna Sforza, widow of Francesco Sforza, in whose husband's household Caravaggio's father had held a position, maintained a palace. In Naples, outside the jurisdiction of the Roman authorities and protected by the Colonna family, the most famous painter in Rome became the most famous in Naples. His connections with the Colonnas led to a stream of important church commissions, including the Madonna of the Rosary, and The Seven Works of Mercy. The Seven Works of Mercy depicts the seven corporal works of mercy as a set of compassionate acts concerning the material needs of others. The painting was made for, and is still housed in, the church of Pio Monte della Misericordia in Naples. Caravaggio combined all seven works of mercy in one composition, which became the church's altarpiece. Alessandro Giardino has also established the connection between the iconography of "The Seven Works of Mercy" and the cultural, scientific and philosophical circles of the painting's commissioners. Malta Despite his success in Naples, after only a few months in the city Caravaggio left for Malta, the headquarters of the Knights of Malta. Fabrizio Sforza Colonna, Costanza's son, was a Knight of Malta and general of the Order's galleys. He appears to have facilitated Caravaggio's arrival in the island in 1607 (and his escape the next year). Caravaggio presumably hoped that the patronage of Alof de Wignacourt, Grand Master of the Knights of Saint John, could help him secure a pardon for Tomassoni's death. De Wignacourt was so impressed at having the famous artist as official painter to the Order that he inducted him as a Knight, and the early biographer Bellori records that the artist was well pleased with his success. 
Major works from his Malta period include the Beheading of Saint John the Baptist (his largest ever work, and the only painting to which he put his signature) and Saint Jerome Writing, both housed in Saint John's Co-Cathedral, Valletta, Malta, as well as a Portrait of Alof de Wignacourt and his Page and portraits of other leading Knights. According to Andrea Pomella, The Beheading of Saint John the Baptist is widely considered "one of the most important works in Western painting." Completed in 1608, the painting had been commissioned by the Knights of Malta as an altarpiece and, at 150 by 200 inches, was the largest altarpiece Caravaggio painted. It still hangs in St. John's Co-Cathedral, for which it was commissioned and where Caravaggio himself was inducted and briefly served as a knight. Yet, by late August 1608, he was arrested and imprisoned, likely the result of yet another brawl, this time with an aristocratic knight, during which the door of a house was battered down and the knight seriously wounded. Caravaggio was imprisoned by the Knights at Valletta, but he managed to escape. By December, he had been expelled from the Order "as a foul and rotten member", a formal phrase used in all such cases. Sicily Caravaggio made his way to Sicily where he met his old friend Mario Minniti, who was now married and living in Syracuse. Together they set off on what amounted to a triumphal tour from Syracuse to Messina and perhaps on to the island capital, Palermo. In Syracuse and Messina Caravaggio continued to win prestigious and well-paid commissions. Among other works from this period are Burial of St. Lucy, The Raising of Lazarus, and Adoration of the Shepherds. His style continued to evolve, now showing friezes of figures isolated against vast empty backgrounds. "His great Sicilian altarpieces isolate their shadowy, pitifully poor figures in vast areas of darkness; they suggest the desperate fears and frailty of man, and at the same time convey, with a new yet desolate tenderness, the beauty of humility and of the meek, who shall inherit the earth." Contemporary reports depict a man whose behaviour was becoming increasingly bizarre: he slept fully armed and in his clothes, ripped up a painting at a slight word of criticism, and mocked local painters. Caravaggio displayed bizarre behaviour from very early in his career. Mancini describes him as "extremely crazy", a letter of Del Monte notes his strangeness, and Minniti's 1724 biographer says that Mario left Caravaggio because of his behaviour. The strangeness seems to have increased after Malta. Susinno's early-18th-century Le vite de' pittori Messinesi ("Lives of the Painters of Messina") provides several colourful anecdotes of Caravaggio's erratic behaviour in Sicily, and these are reproduced in modern full-length biographies such as Langdon and Robb. Bellori writes of Caravaggio's "fear" driving him from city to city across the island and finally, "feeling that it was no longer safe to remain", back to Naples. Baglione says Caravaggio was being "chased by his enemy", but like Bellori does not say who this enemy was. Return to Naples After only nine months in Sicily, Caravaggio returned to Naples in the late summer of 1609. According to his earliest biographer, he was being pursued by enemies while in Sicily and felt it safest to place himself under the protection of the Colonnas until he could secure his pardon from the pope (now Paul V) and return to Rome.
The power to grant or withhold such a pardon lay with Cardinal Scipione Borghese, nephew of the pope. Caravaggio hoped Borghese could mediate a pardon in exchange for works by the artist. News from Rome encouraged Caravaggio, and in the summer of 1610 he took a boat northwards to receive the pardon, which seemed imminent thanks to his powerful Roman friends. With him were three last paintings, the gifts for Cardinal Scipione. What happened next is the subject of much confusion and conjecture, and remains shrouded in mystery. The bare facts seem to be that on 28 July an anonymous avviso (private newsletter) from Rome to the ducal court of Urbino reported that Caravaggio was dead. Three days later another avviso said that he had died of fever on his way from Naples to Rome. A poet friend of the artist later gave 18 July as the date of death, and a recent researcher claims to have discovered a death notice showing that the artist died on that day of a fever in Porto Ercole, near Grosseto in Tuscany. Death Caravaggio had a fever at the time of his death, and what killed him was a matter of controversy and rumour at the time and has remained a subject of historical debate and study since. Contemporary rumors held that either the Tommasoni family or the Knights had him killed in revenge. Historians have traditionally thought that he died of syphilis. Some have said he had malaria, or possibly brucellosis from unpasteurised dairy. Some scholars have argued that Caravaggio was actually attacked and killed by the same "enemies" that had been pursuing him since he fled Malta, possibly Wignacourt and/or factions of the Knights. Caravaggio's remains were buried in Porto Ercole's San Sebastiano cemetery, which closed in 1956, and then moved to St. Erasmus cemetery where, in 2010, archaeologists conducted a year-long investigation of remains found in three crypts and, after using DNA, carbon dating and other methods, concluded with a high degree of confidence that they had identified those of Caravaggio. Initial tests suggested Caravaggio might have died of lead poisoning: paints used at the time contained high amounts of lead salts, and Caravaggio is known to have indulged in the kind of violent behaviour that lead poisoning can cause. Later research concluded he died as the result of a wound sustained in a brawl in Naples, specifically from sepsis caused by Staphylococcus aureus. Vatican documents released in 2002 support the theory that the wealthy Tommasoni family had him hunted down and killed as a vendetta for Caravaggio's murder of gangster Ranuccio Tommasoni, in a botched attempt at castration after a duel over the affections of model Fillide Melandroni. Sexuality Since the 1970s, art scholars and historians have debated the inferences of homoeroticism in Caravaggio's works as a way to better understand the man. Caravaggio never married and had no known children, and Howard Hibbard observed the absence of erotic female figures in the artist's oeuvre: "In his entire career he did not paint a single female nude", and the cabinet-pieces from the Del Monte period are replete with "full-lipped, languorous boys ... who seem to solicit the onlooker with their offers of fruit, wine, flowers—and themselves", suggesting an erotic interest in the male form. The model of Amor vincit omnia, Cecco di Caravaggio, lived with the artist in Rome and stayed with him even after he was obliged to leave the city in 1606, and the two may have been lovers.
A connection with a certain Lena is mentioned in a 1605 court deposition by Pasqualone, where she is described as "Michelangelo's girl". According to G.B. Passeri, this 'Lena' was Caravaggio's model for the Madonna di Loreto; and according to Catherine Puglisi, 'Lena' may have been the same person as the courtesan Maddalena di Paolo Antognetti, who, by her own testimony in 1604, named Caravaggio as an "intimate friend". Caravaggio was also rumored to be madly in love with Fillide Melandroni, a well known Roman prostitute who modeled for him in several important paintings. Caravaggio's sexuality was also the subject of early speculation, due to claims about the artist by Honoré Gabriel Riqueti, comte de Mirabeau. Writing in 1783, Mirabeau contrasted the personal life of Caravaggio directly with the writings of St Paul in the Book of Romans, arguing that "Romans" excessively practised sodomy or homosexuality. The Holy Mother Catholic Church teachings on morality (the short form of a much longer title) contains the Latin phrase "Et fœminæ eorum immutaverunt naturalem usum in eum usum qui est contra naturam." The phrase, according to Mirabeau, entered Caravaggio's thoughts, and he claimed that such an "abomination" could be witnessed through a particular painting housed at the Museum of the Grand Duke of Tuscany, featuring a rosary of a blasphemous nature in which a circle of thirty men (turpiter ligati) are intertwined in embrace and presented in unbridled composition. Mirabeau noted that the affectionate nature of Caravaggio's depiction reflects the voluptuous glow of the artist's sexuality. By the late nineteenth century, Sir Richard Francis Burton had identified the painting as Caravaggio's painting of St. Rosario. Burton also identifies both St. Rosario and this painting with the practices of Tiberius mentioned by Seneca the Younger. The survival status and location of Caravaggio's painting are unknown. No such painting appears in his or his school's catalogues. Aside from the paintings, evidence also comes from the libel trial brought against Caravaggio by Giovanni Baglione in 1603. Baglione accused Caravaggio and his friends of writing and distributing scurrilous doggerel attacking him; the pamphlets, according to Baglione's friend and witness Mao Salini, had been distributed by a certain Giovanni Battista, a bardassa, or boy prostitute, shared by Caravaggio and his friend Onorio Longhi. Caravaggio denied knowing any young boy of that name, and the allegation was not followed up. Baglione's painting of "Divine Love" has also been seen as a visual accusation of sodomy against Caravaggio. Such accusations were damaging and dangerous, as sodomy was a capital crime at the time. Even though the authorities were unlikely to investigate such a well-connected person as Caravaggio, "Once an artist had been smeared as a pederast, his work was smeared too." Francesco Susinno, in his later biography, additionally relates the story of how the artist was chased by a schoolmaster in Sicily for spending too long gazing at the boys in his care. Susinno presents it as a misunderstanding, but some authors have speculated that Caravaggio may indeed have been seeking sex with the boys, and have used the incident to explain some of his paintings which they believe to be homoerotic.
The art historian Andrew Graham-Dixon has summarised the debate: "A lot has been made of Caravaggio's presumed homosexuality, which has in more than one previous account of his life been presented as the single key that explains everything, both the power of his art and the misfortunes of his life. There is no absolute proof of it, only strong circumstantial evidence and much rumour. The balance of probability suggests that Caravaggio did indeed have sexual relations with men. But he certainly had female lovers. Throughout the years that he spent in Rome he kept close company with a number of prostitutes. The truth is that Caravaggio was as uneasy in his relationships as he was in most other aspects of life. He likely slept with men. He did sleep with women. He settled with no one... [but] the idea that he was an early martyr to the drives of an unconventional sexuality is an anachronistic fiction." Washington Post art critic Philip Kennicott has taken issue with what he regarded as Graham-Dixon's minimizing of Caravaggio's homosexuality: "There was a fussiness to the tone whenever a scholar or curator was forced to grapple with transgressive sexuality, and you can still find it even in relatively recent histories, including Andrew Graham-Dixon's 2010 biography of Caravaggio, which acknowledges only that 'he likely slept with men.' The author notes the artist's fluid sexual desires but gives some of Caravaggio's most explicitly homoerotic paintings tortured readings to keep them safely in the category of mere 'ambiguity.'" As an artist The birth of Baroque Caravaggio "put the oscuro (shadows) into chiaroscuro." Chiaroscuro was practiced long before he came on the scene, but it was Caravaggio who made the technique a dominant stylistic element, darkening the shadows and transfixing the subject in a blinding shaft of light. With this came the acute observation of physical and psychological reality that formed the ground both for his immense popularity and for his frequent problems with his religious commissions. He worked at great speed, from live models, scoring basic guides directly onto the canvas with the end of the brush handle; very few of Caravaggio's drawings appear to have survived, and it is likely that he preferred to work directly on the canvas. The approach was anathema to the skilled artists of his day, who decried his refusal to work from drawings and to idealise his figures. Yet the models were basic to his realism. Some have been identified, including Mario Minniti and Francesco Boneri, both fellow artists, Minniti appearing as various figures in the early secular works, the young Boneri as a succession of angels, Baptists and Davids in the later canvasses. His female models include Fillide Melandroni, Anna Bianchini, and Maddalena Antognetti (the "Lena" mentioned in court documents of the "artichoke" case as Caravaggio's concubine), all well-known prostitutes, who appear as female religious figures including the Virgin and various saints. Caravaggio himself appears in several paintings, his final self-portrait being as the witness on the far right to the Martyrdom of Saint Ursula. Caravaggio had a noteworthy ability to express in one scene of unsurpassed vividness the passing of a crucial moment. The Supper at Emmaus depicts the recognition of Christ by his disciples: a moment before he is a fellow traveler, mourning the passing of the Messiah, as he never ceases to be to the inn-keeper's eyes; the second after, he is the Saviour.
In The Calling of St Matthew, the hand of the Saint points to himself as if he were saying "who, me?", while his eyes, fixed upon the figure of Christ, have already said, "Yes, I will follow you". With The Resurrection of Lazarus, he goes a step further, giving a glimpse of the actual physical process of resurrection. The body of Lazarus is still in the throes of rigor mortis, but his hand, facing and recognising that of Christ, is alive. Other major Baroque artists would travel the same path, for example Bernini, fascinated with themes from Ovid's Metamorphoses. The Caravaggisti The installation of the St. Matthew paintings in the Contarelli Chapel had an immediate impact among the younger artists in Rome, and Caravaggism became the cutting edge for every ambitious young painter. The first Caravaggisti included Orazio Gentileschi and Giovanni Baglione. Baglione's Caravaggio phase was short-lived; Caravaggio later accused him of plagiarism and the two were involved in a long feud. Baglione went on to write the first biography of Caravaggio. In the next generation of Caravaggisti there were Carlo Saraceni, Bartolomeo Manfredi and Orazio Borgianni. Gentileschi, despite being considerably older, was the only one of these artists to live much beyond 1620, and ended up as court painter to Charles I of England. His daughter Artemisia Gentileschi was also stylistically close to Caravaggio, and one of the most gifted of the movement. Yet in Rome and in Italy it was not Caravaggio, but the influence of his rival Annibale Carracci, blending elements from the High Renaissance and Lombard realism, which ultimately triumphed. Caravaggio's brief stay in Naples produced a notable school of Neapolitan Caravaggisti, including Battistello Caracciolo and Carlo Sellitto. The Caravaggisti movement there ended with a terrible outbreak of plague in 1656, but the Spanish connection—Naples was a possession of Spain—was instrumental in forming the important Spanish branch of his influence. A group of Catholic artists from Utrecht, the "Utrecht Caravaggisti", travelled to Rome as students in the first years of the 17th century and were profoundly influenced by the work of Caravaggio, as Bellori describes. On their return to the north this trend had a short-lived but influential flowering in the 1620s among painters like Hendrick ter Brugghen, Gerrit van Honthorst, Andries Both and Dirck van Baburen. In the following generation the effects of Caravaggio, although attenuated, are to be seen in the work of Rubens (who purchased one of his paintings for the Gonzaga of Mantua and painted a copy of the Entombment of Christ), Vermeer, Rembrandt and Velázquez, the last of whom presumably saw his work during his various sojourns in Italy. Death and rebirth of a reputation Caravaggio's innovations inspired the Baroque, but the Baroque took the drama of his chiaroscuro without the psychological realism. While he directly influenced the style of the artists mentioned above, and, at a distance, the Frenchmen Georges de La Tour and Simon Vouet, and the Spaniard Giuseppe Ribera, within a few decades his works were being ascribed to less scandalous artists, or simply overlooked. The Baroque, to which he contributed so much, had evolved, and fashions had changed, but perhaps more pertinently Caravaggio never established a workshop as the Carracci did, and thus had no school to spread his techniques. 
Nor did he ever set out his underlying philosophical approach to art, the psychological realism that may only be deduced from his surviving work. Thus his reputation was doubly vulnerable to the critical demolition-jobs done by two of his earliest biographers, Giovanni Baglione, a rival painter with a vendetta, and the influential 17th-century critic Gian Pietro Bellori, who had not known him but was under the influence of the earlier Giovanni Battista Agucchi and Bellori's friend Poussin, in preferring the "classical-idealistic" tradition of the Bolognese school led by the Carracci. Baglione, his first biographer, played a considerable part in creating the legend of Caravaggio's unstable and violent character, as well as his inability to draw. In the 1920s, art critic Roberto Longhi brought Caravaggio's name once more to the foreground, and placed him in the European tradition: "Ribera, Vermeer, La Tour and Rembrandt could never have existed without him. And the art of Delacroix, Courbet and Manet would have been utterly different". The influential Bernard Berenson agreed: "With the exception of Michelangelo, no other Italian painter exercised so great an influence." Epitaph Caravaggio's epitaph was composed by his friend Marzio Milesi. It reads: He was commemorated on the front of the Banca d'Italia 100,000-lire banknote in the 1980s and '90s (before Italy switched to the euro) with the back showing his Basket of Fruit. Oeuvre There is disagreement as to the size of Caravaggio's oeuvre, with counts as low as 40 and as high as 80. In his biography, Caravaggio scholar Alfred Moir writes "The forty-eight colorplates in this book include almost all of the surviving works accepted by every Caravaggio expert as autograph, and even the least demanding would add fewer than a dozen more". One, The Calling of Saints Peter and Andrew, was in 2006 authenticated and restored; it had been in storage in Hampton Court, mislabeled as a copy. Richard Francis Burton writes of a "picture of St. Rosario (in the museum of the Grand Duke of Tuscany), showing a circle of thirty men turpiter ligati" ("lewdly banded"), which is not known to have survived. The rejected version of Saint Matthew and the Angel, intended for the Contarelli Chapel in San Luigi dei Francesi in Rome, was destroyed during the bombing of Dresden, though black and white photographs of the work exist. In June 2011 it was announced that a previously unknown Caravaggio painting of Saint Augustine dating to about 1600 had been discovered in a private collection in Britain. Called a "significant discovery", the painting had never been published and is thought to have been commissioned by Vincenzo Giustiniani, a patron of the painter in Rome. A painting believed by some experts to be Caravaggio's second version of Judith Beheading Holofernes, tentatively dated between 1600 and 1610, was discovered in an attic in Toulouse in 2014. An export ban was placed on the painting by the French government while tests were carried out to establish its provenance (Angelique Chrisafis, "'Lost Caravaggio,' found in a French attic, causes rift in the art world", The Guardian, 12 April 2016; retrieved 13 April 2016). In February 2019 it was announced that the painting would be sold at auction after the Louvre had turned down the opportunity to purchase it for €100 million. After an auction was considered, the painting was finally sold by mutual agreement to a private individual. The buyer is said to be J. Tomilson Hill for $110 million.
After restoration the painting could be exhibited in a museum, possibly the Met. In April 2021 a minor work believed to be from the circle of a Spanish follower of Caravaggio, Jusepe de Ribera, was withdrawn from sale at the Madrid auction house Ansorena when the Museo del Prado alerted the Ministry of Culture, which placed a preemptive export ban on the painting. The by painting has been in the Pérez de Castro family since 1823, when it was exchanged for another work from the Real Academia of San Fernando. It had been listed as "Ecce-Hommo con dos saiones de Carabaggio" before the attribution was later lost or changed to the circle of Ribera. Stylistic evidence, as well as the similarity of the models to those in other Caravaggio works, has convinced some experts that the painting is the original Caravaggio 'Ecce Homo' for the 1605 Massimo Massimi commission. The attribution to Caravaggio is disputed by other experts. The painting is now undergoing restoration by Colnaghi, which will also handle the future sale of the work. Art theft In October 1969, two thieves entered the Oratory of Saint Lawrence in Palermo, Sicily, and stole Caravaggio's Nativity with St. Francis and St. Lawrence from its frame. Experts estimated its value at $20 million. Following the theft, Italian police set up an art theft task force with the specific aim of re-acquiring lost and stolen art works. Since the creation of this task force, many leads have been followed regarding the Nativity. Former Italian mafia members have stated that Nativity with St. Francis and St. Lawrence was stolen by the Sicilian Mafia and displayed at important mafia gatherings. Former mafia members have said that the Nativity was damaged and has since been destroyed. The whereabouts of the artwork are still unknown. A reproduction currently hangs in its place in the Oratory of San Lorenzo. Legacy Caravaggio's work has been widely influential in late-20th-century American gay culture, with frequent references to male sexual imagery in paintings such as The Musicians and Amor Victorious. British filmmaker Derek Jarman made a critically applauded biopic entitled Caravaggio in 1986. Several poems written by Thom Gunn were responses to specific Caravaggio paintings. In 2013, a touring Caravaggio exhibition called "Burst of Light: Caravaggio and His Legacy" opened in the Wadsworth Atheneum Museum of Art in Hartford, Connecticut. The show included five paintings by the master, among them Saint John the Baptist in the Wilderness (1604–1605) and Martha and Mary Magdalene (1589). The exhibition also travelled to France and to Los Angeles, California. Other Baroque artists such as Georges de La Tour, Orazio Gentileschi, Diego Velázquez, Francisco de Zurbarán and Carlo Saraceni were also included in the exhibitions. See also Caravaggisti List of paintings by Caravaggio References Citations Primary sources The main primary sources for Caravaggio's life are: Giulio Mancini's comments on Caravaggio in Considerazioni sulla pittura, c. 1617–1621 Giovanni Baglione's Le vite de' pittori, 1642 Giovanni Pietro Bellori's Le Vite de' pittori, scultori et architetti moderni, 1672 All have been reprinted in Howard Hibbard's Caravaggio and in the appendices to Catherine Puglisi's Caravaggio. Secondary sources Erin Benay (2017) Exporting Caravaggio: the Crucifixion of St. Andrew Giles Press Ltd. Ralf van Bühren, Caravaggio's 'Seven Works of Mercy' in Naples.
The relevance of art history to cultural journalism, in Church, Communication and Culture 2 (2017), pp. 63–87 Claudio Strinati, Caravaggio Vero, Scripta Maneant, 2014, . Maurizio Calvesi, Caravaggio, Art Dossier 1986, Giunti Editori (1986) (ISBN not available) John Denison Champlin and Charles Callahan Perkins, Ed., Cyclopedia of Painters and Paintings, Charles Scribner's Sons, New York (1885), p. 241 (available at the Harvard's Fogg Museum Library and scanned on Google Books) Andrea Dusio, Caravaggio White Album, Cooper Arte, Roma 2009, Michael Fried, The Moment of Caravaggio, Yale University Press, 2010, ISB: 9780691147017, Review Walter Friedlaender, Caravaggio Studies, Princeton: Princeton University Press 1955 John Gash, Caravaggio, Chaucer Press, (2004) ) Rosa Giorgi, Caravaggio: Master of light and dark – his life in paintings, Dorling Kindersley (1999) Andrew Graham-Dixon, Caravaggio: A Life Sacred and Profane, London, Allen Lane, 2009. Jonathan Harr (2005). The Lost Painting: The Quest for a Caravaggio Masterpiece. New York: Random House. ["The Taking of Christ"] Howard Hibbard, Caravaggio (1983) Harris, Ann Sutherland. Seventeenth-century Art & Architecture, Laurence King Publishing (2004), . Michael Kitson, The Complete Paintings of Caravaggio London, Abrams, 1967. New edition: Weidenfeld & Nicolson, 1969 and 1986, Pietro Koch, Caravaggio – The Painter of Blood and Darkness, Gunther Edition, (Rome – 2004) Gilles Lambert, Caravaggio, Taschen, (2000) Helen Langdon, Caravaggio: A Life, Farrar, Straus and Giroux, 1999 (original UK edition 1998) Denis Mahon (1947). Studies in Seicento Art. London: Warburg Institute. Alfred Moir, The Italian Followers of Caravaggio, Harvard University Press (1967) Ostrow, Steven F., review of Giovanni Baglione: Artistic Reputation in Baroque Rome by Maryvelma Smith O'Neil, The Art Bulletin, Vol. 85, No. 3 (Sep. 2003), pp. 608–611, online text Catherine Puglisi, Caravaggio, Phaidon (1998) Peter Robb, M, Duffy & Snellgrove, 2003 amended edition (original edition 1998) John Spike, with assistance from Michèle Kahn Spike, Caravaggio with Catalogue of Paintings on CD-ROM, Abbeville Press, New York (2001) John L. Varriano, Caravaggio: The Art of Realism, Pennsylvania State University Press (University Park, PA – 2006) Rudolf Wittkower, Art and Architecture in Italy, 1600–1750, Penguin/Yale History of Art, 3rd edition, 1973, Alberto Macchi, "L'uomo Caravaggio" – Atto unico (pref. Stefania Macioce), AETAS, Roma 1995, External links Biography Caravaggio, The Prince of the Night Articles and essays Christiansen, Keith. "Caravaggio (Michelangelo Merisi) (1571–1610) and his Followers." In Heilbrunn Timeline of Art History. New York: The Metropolitan Museum of Art, 2000–. (October 2003) FBI Art Theft Notice for Caravaggio's Nativity The Passion of Caravaggio Deconstructing Caravaggio and Velázquez Interview with Peter Robb, author of M Compare Rembrandt with Caravaggio. 
Caravaggio and the Camera Obscura Caravaggio's incisions by Ramon van de Werken Caravaggio's use of the Camera Obscura: Lapucci Some notes on Caravaggio – Patrick Swift Roberta Lapucci's website and most of her publications on Caravaggio as freely downloadable PDF Art works caravaggio-foundation.org 175 works by Caravaggio caravaggio.org Analysis of 100 important Caravaggio works Caravaggio, Michelangelo Merisi da Caravaggio WebMuseum, Paris webpage Caravaggio's EyeGate Gallery Music Lachrimae Caravaggio, by Jordi Savall, performed by Le Concert des Nations & Hesperion XXI (Article at Answers.com) Video Caravaggio's Calling of Saint Matthew at Smarthistory, accessed 13 February 2013 Caravaggio's Crucifixion of Saint Peter, accessed 13 February 2013 Caravaggio's Death of the Virgin, accessed 13 February 2013 Caravaggio's Narcissus at the Source, accessed 13 February 2013 Caravaggio's paintings in the Contarelli Chapel, San Luigi dei Francesi, accessed 13
Chardin's son, also a painter, drowned in Venice, a probable suicide. The artist's last known oil painting was dated 1776; his final Salon participation was in 1779, and featured several pastel studies. Gravely ill by November of that year, he died in Paris on December 6, at the age of 80. Work Chardin worked very slowly and painted only slightly more than 200 pictures (about four a year) in total. Chardin's work had little in common with the Rococo painting that dominated French art in the 18th century. At a time when history painting was considered the supreme classification for public art, Chardin's subjects of choice were viewed as minor categories. He favored simple yet beautifully textured still lifes, and sensitively handled domestic interiors and genre paintings. Simple, even stark, paintings of common household items (Still Life with a Smoker's Box) and an uncanny ability to portray children's innocence in an unsentimental manner (Boy with a Top) nevertheless found an appreciative audience in his time, and account for his timeless appeal. Largely self-taught, Chardin was greatly influenced by the realism and subject matter of the 17th-century Low Country masters. Despite his unconventional portrayal of the ascendant bourgeoisie, early support came from patrons in the French aristocracy, including Louis XV. Though his popularity rested initially on paintings of animals and fruit, by the 1730s he introduced kitchen utensils into his work (The Copper Cistern, ca. 1735, Louvre). Soon figures populated his scenes as well, supposedly in response to a portrait painter who challenged him to take up the genre. Woman Sealing a Letter (ca. 1733), which may have been his first attempt, was followed by half-length compositions of children saying grace, as in Le Bénédicité, and kitchen maids in moments of reflection. These humble scenes deal with simple, everyday activities, yet they also have functioned as a source of documentary information about a level of French society not hitherto considered a worthy subject for painting. The pictures are noteworthy for their formal structure and pictorial harmony. Chardin said about painting, "Who said one paints with colors? One employs colors, but one paints with feeling." A child playing was a favourite subject of Chardin. He depicted an adolescent building a house of cards on at least four occasions. The version at Waddesdon Manor is the most elaborate. Scenes such as these derived from 17th-century Netherlandish vanitas works, which bore messages about the transitory nature of human life and the worthlessness
of material ambitions, but Chardin's also display a delight in the ephemeral phases of childhood for their own sake. Chardin frequently painted replicas of his compositions—especially his genre paintings, nearly all of which exist in multiple versions which in many cases are virtually indistinguishable. Beginning with The Governess (1739, in the National Gallery of Canada, Ottawa), Chardin shifted his attention from working-class subjects to slightly more spacious scenes of bourgeois life. Chardin's extant paintings, which number about 200, are in many major museums, including the Louvre. Influence Chardin's influence on the art of the modern era was wide-ranging and has been well-documented. Édouard Manet's half-length Boy Blowing Bubbles and the still lifes of Paul Cézanne are equally indebted to their predecessor. He was one of Henri Matisse's most admired painters; as an art student Matisse made copies of four Chardin paintings in the Louvre. Chaïm Soutine's still lifes looked to Chardin for inspiration, as did the paintings of Georges Braque, and later, Giorgio Morandi. In 1999 Lucian Freud painted and etched several copies after The Young Schoolmistress (National Gallery, London). Marcel Proust, in the chapter "How to open your eyes?" from In Search of Lost Time (À la recherche du temps perdu), describes a melancholic young man sitting at his simple breakfast table. The only comfort he finds is in the imaginary ideas of beauty depicted in the great masterpieces of the Louvre, materializing fancy palaces, rich princes, and the like. The author tells the young man to follow him to another section of the Louvre where the pictures of Jean-Baptiste Chardin are. There he would see the beauty in still life at home and in everyday activities like peeling turnips. Gallery See also The Attributes of Civilian and Military Music Soap Bubbles (painting) Notes References ArtCyclopedia: Jean-Baptiste Siméon Chardin. Rosenberg, Pierre (2000), Chardin. Munich: Prestel. . Rosenberg, Pierre, and Florence Bruyant (2000), Chardin. London: Royal Academy of Arts. . External links Chardin exhibition at the Metropolitan Museum of Art Getty Museum: Chardin. WebMuseum: Jean-Baptiste-Siméon Chardin. Jean-Baptiste-Simeon-Chardin.org 124 works by Jean-Baptiste-Siméon Chardin. Artcylopedia: Jean-Baptiste Siméon Chardin - identifies where Chardin's work is in galleries and museums around the world. Web Gallery of Art: Chardin. Neil Jeffares, Dictionary of pastellists before 1800, online edition Chardin, Boy Building a House of Cards at Waddesdon Manor 1699 births 1779 deaths 18th-century French painters French male painters Rococo painters French still life
In sunlight, or when warmed, the radiometer turns in the forward direction (i.e. black sides trailing). If a person's hands are placed around the glass without touching it, the vanes will turn slowly or not at all, but if the glass is touched to warm it quickly, they will turn more noticeably. Directly heated glass gives off enough infrared radiation to turn the vanes, but glass blocks much of the far-infrared radiation from a source of warmth not in contact with it. However, near-infrared and visible light more easily penetrate the glass. If the glass is cooled quickly in the absence of a strong light source, by putting ice on the glass or placing it in a freezer with the door almost closed, it turns backwards (i.e. the silver sides trail). This demonstrates black-body radiation from the black sides of the vanes rather than black-body absorption. The wheel turns backwards because the net exchange of heat between the black sides and the environment initially cools the black sides faster than the white sides. Upon reaching equilibrium, typically after a minute or two, reverse rotation ceases. This contrasts with sunlight, with which forward rotation can be maintained all day. Explanations for the force on the vanes Over the years, there have been many attempts to explain how a Crookes radiometer works: Incorrect theories Crookes incorrectly suggested that the force was due to the pressure of light. This theory was originally supported by James Clerk Maxwell, who had predicted this force. This explanation is still often seen in leaflets packaged with the device. The first experiment to test this theory was done by Arthur Schuster in 1876, who observed that there was a force on the glass bulb of the Crookes radiometer that was in the opposite direction to the rotation of the vanes. This showed that the force turning the vanes was generated inside the radiometer. If light pressure were the cause of the rotation, then the better the vacuum in the bulb, the less air resistance to movement, and the faster the vanes should spin. In 1901, with a better vacuum pump, Pyotr Lebedev showed that, in fact, the radiometer only works when there is low-pressure gas in the bulb, and the vanes stay motionless in a hard vacuum. Finally, if light pressure were the motive force, the radiometer would spin in the opposite direction, as the photons on the shiny side being reflected would deposit more momentum than on the black side, where the photons are absorbed. This results from conservation of momentum – the momentum of the reflected photon exiting on the light side must be matched by a reaction on the vane that reflected it. The actual pressure exerted by light is far too small to move these vanes, but can be measured with devices such as the Nichols radiometer. It is in fact possible to make the radiometer spin in the opposite direction by either heating it or putting it in a cold environment (like a freezer) in the absence of light, when the black sides become cooler than the white ones due to black-body radiation. Another incorrect theory was that the heat on the dark side was causing the material to outgas, which pushed the radiometer around. This was later effectively disproved by both Schuster's experiments (1876) and Lebedev's (1901). Partially correct theory A partial explanation is that gas molecules hitting the warmer side of the vane will pick up some of the heat, bouncing off the vane with increased speed. Giving the molecule this extra boost effectively means that a minute pressure is exerted on the vane. The imbalance of this effect between the warmer black side and the cooler silver side means the net pressure on the vane is equivalent to a push on the black side, and as a result the vanes spin round with the black side trailing. The problem with this idea is that while the faster moving molecules produce more force, they also do a better job of stopping other molecules from reaching the vane, so the net force on the vane should be the same. The greater temperature causes a decrease in local density which results in the same force on both sides. Years after this explanation was dismissed, Albert Einstein showed that the two pressures do not cancel out exactly at the edges of the vanes because of the temperature difference there. The force predicted by Einstein would be enough to move the vanes, but not fast enough.
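Before turning to the accepted explanation, the rejected light-pressure theory above can be checked with a back-of-the-envelope calculation. The sketch below is not from the source; it uses the standard results that light exerts a pressure of I/c on a perfectly absorbing surface and 2I/c on a perfectly reflecting one, with assumed values for the irradiance and vane size.

```python
# Order-of-magnitude check of the claim that light pressure alone is far too weak.
# The formulas (p = I/c for an absorbing face, 2I/c for a reflecting one) are standard;
# the irradiance and vane area are assumed values, not measurements from the text.
C = 299_792_458.0          # speed of light, m/s
IRRADIANCE = 1_000.0       # W/m^2, roughly bright sunlight at the surface (assumed)
VANE_AREA = 1e-4           # m^2, about a 1 cm x 1 cm vane (assumed)

p_black = IRRADIANCE / C        # radiation pressure on a perfectly absorbing (black) face
p_shiny = 2 * IRRADIANCE / C    # twice as much on a perfectly reflecting (silver) face

net_force = (p_shiny - p_black) * VANE_AREA
print(f"Pressure on black face : {p_black:.2e} Pa")
print(f"Pressure on silver face: {p_shiny:.2e} Pa")
print(f"Net force per vane     : {net_force:.2e} N")
# ~3e-10 N, and it would favour rotation with the silver side trailing, i.e. the
# opposite sense to the rotation actually observed.
```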
Currently accepted theory The currently accepted theory was formulated by Osborne Reynolds, who theorized that thermal transpiration was the cause of the motion. Reynolds found that if a porous plate is kept hotter on one side than the other, the interactions between gas molecules and the plates are such that gas will flow through from the hotter to the cooler side. The vanes of a typical Crookes radiometer are not porous, but the space past their edges behaves like the pores in Reynolds's plate. On average, the gas molecules move from the hot side toward the
turns in the forward direction (i.e. black sides trailing). If a person's hands are placed around the glass without touching it, the vanes will turn slowly or not at all, but if the glass is touched to warm it quickly, they will turn more noticeably. Directly heated glass gives off enough infrared radiation to turn the vanes, but glass blocks much of the far-infrared radiation from a source of warmth not in contact with it. However, near-infrared and visible light more easily penetrate the glass. If the glass is cooled quickly in the absence of a strong light source by putting ice on the glass or placing it in the freezer with the door almost closed, it turns backwards (i.e. the silversides trail). This demonstrates black-body radiation from the black sides of the vanes rather than black-body absorption. The wheel turns backwards because the net exchange of heat between the black sides and the environment initially cools the black sides faster than the white sides. Upon reaching equilibrium, typically after a minute or two, reverse rotation ceases. This contrasts with sunlight, with which forward rotation can be maintained all day. Explanations for the force on the vanes Over the years, there have been many attempts to explain how a Crookes radiometer works: Incorrect theories Crookes incorrectly suggested that the force was due to the pressure of light. This theory was originally supported by James Clerk Maxwell, who had predicted this force. This explanation is still often seen in leaflets packaged with the device. The first experiment to test this theory was done by Arthur Schuster in 1876, who observed that there was a force on the glass bulb of the Crookes radiometer that was in the opposite direction to the rotation of the vanes. This showed that the force turning the vanes was generated inside the radiometer. If light pressure were the cause of the rotation, then the better the vacuum in the bulb, the less air resistance to movement, and the faster the vanes should spin. In 1901, with a better vacuum pump, Pyotr Lebedev showed that in fact, the radiometer only works when there is low-pressure gas in the bulb, and the vanes stay motionless in a hard vacuum. Finally, if light pressure were the motive force, the radiometer would spin in the opposite direction, as the photons on the shiny side being reflected would deposit more momentum than on the black side, where the photons are absorbed. This results from conservation of momentum – the momentum of the reflected photon exiting on the light side must be matched by a reaction on the vane that reflected it. The actual pressure exerted by light is far too small to move these vanes, but can be measured with devices such as the Nichols radiometer. It is in fact possible to make the radiometer spin in the opposite direction by either heating it or putting it in a cold environment (like a freezer) in absence of light, when black sides become cooler than the white ones due to the black-body radiation. Another incorrect theory was that the heat on the dark side was causing the material to outgas, which pushed the radiometer around. This was later effectively disproved by both Schuster's experiments (1876) and Lebedev's (1901) Partially correct theory A partial explanation is that gas molecules hitting the warmer side of the vane will pick up some of the heat, bouncing off the vane with increased speed. Giving the molecule this extra boost effectively means that a minute pressure is exerted on the vane. 
The imbalance of this effect between the warmer black side and the cooler silver side means the net pressure on the vane is equivalent to a push on the black side and as a result the vanes spin round with the black side trailing. The problem with this idea is that while the faster moving molecules produce more force, they also do a better job of stopping other molecules from reaching the vane, so the net force on the vane should be the same. The greater temperature causes a decrease in local density which results in the same force on both sides. Years after this explanation was dismissed, Albert Einstein showed that the two pressures do not cancel out exactly at the edges of the vanes because of the temperature difference there. The force predicted by Einstein would be enough to move the vanes, but not fast enough. Currently accepted theory The currently accepted theory was
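The claim that light pressure is far too weak to turn the vanes can be checked with a rough back-of-the-envelope estimate. The short Python sketch below is illustrative only and is not part of the original article; the irradiance and vane area are assumed values (full sunlight of roughly 1000 W/m2 falling on a 1 cm2 vane), chosen simply to show the order of magnitude involved.

# Order-of-magnitude estimate of the radiation-pressure force on one radiometer vane.
# Assumed values (not from the article): full sunlight ~1000 W/m^2 on a 1 cm^2 vane.
c = 3.0e8            # speed of light, m/s
irradiance = 1000.0  # assumed irradiance at the vane, W/m^2
area = 1.0e-4        # assumed vane area, m^2 (1 cm^2)

force_absorbing = irradiance * area / c       # absorbed light transfers momentum E/c
force_reflecting = 2 * irradiance * area / c  # reflected light transfers twice that

print(f"black (absorbing) side: {force_absorbing:.1e} N")   # ~3.3e-10 N
print(f"shiny (reflecting) side: {force_reflecting:.1e} N") # ~6.7e-10 N
# The larger push on the reflecting side would drive the vanes black-side-first,
# opposite to the observed rotation, and the net difference (~3e-10 N) is minuscule
# compared with the forces exerted by the residual gas.

Forces of this size are what a purpose-built instrument such as the Nichols radiometer, mentioned above, is designed to detect.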
stage, crouched, sweating, as he roared his vocals into the microphone at the top of his lungs." The EP peaked at No. 35 on the Kent Music Report Singles Chart. "Merry Go Round" was re-recorded for their second studio album, Breakfast at Sweethearts (February 1979). This was recorded between July 1978 and January 1979 with producer, Richard Batchens, who had previously worked with Richard Clapton, Sherbet and Blackfeather. Batchens smoothed out the band's rough edges and attempted to give their songs a sophisticated sound. With regards to this approach, the band were unsatisfied with the finished product. It peaked at No. 4 and was the top selling album in Australia by a locally based artist for that year; it was certified platinum. The majority of its tracks were written by Walker, with Barnes and Walker on the lead single, "Goodbye (Astrid, Goodbye)" (September 1978), and Moss contributed to "Dresden". "Goodbye (Astrid, Goodbye)" became a live favourite, and was covered by U2 during Australian tours in the 1980s. 1979-1980: East Cold Chisel had gained national chart success and increased popularity of their fans without significant commercial radio airplay. The members developed reputations for wild behaviour, particularly Barnes who claimed to have had sex with over 1000 women and who consumed more than a bottle of vodka each night while performing. In late 1979, severing their relationship with Batchens, Cold Chisel chose Mark Opitz to produce the next single, "Choirgirl" (November). It is a Walker composition dealing with a young woman's experience with abortion. Despite the subject matter it reached No. 14. "Choirgirl" paved the way for the group's third studio album, East (June 1980), with Opitz producing. Recorded over two months in early 1980, East, reached No. 2 and is the second highest selling album by an Australian artist for that year. The Australian Women's Weeklys Gregg Flynn noticed, "[they are] one of the few Australian bands in which each member is capable of writing hit songs." Despite the continued dominance of Walker, the other members contributed more tracks to their play list, and this was their first album to have songs written by each one. McFarlane described it as, "a confident, fully realised work of tremendous scope." Nimmervoll explained how, "This time everything fell into place, the sound, the songs, the playing... East was a triumph. [The group] were now the undisputed No. 1 rock band in Australia." The album varied from straight ahead rock tracks, "Standing on the Outside" and "My Turn to Cry", to rockabilly-flavoured work-outs ("Rising Sun", written about Barnes' relationship with his then-girlfriend Jane Mahoney) and pop-laced love songs ("My Baby", featuring Joe Camilleri on saxophone) to a poignant piano ballad about prison life, "Four Walls". The cover art showed Barnes reclined in a bathtub wearing a kamikaze bandanna in a room littered with junk and was inspired by Jacques-Louis David's 1793 painting, The Death of Marat. The Ian Moss-penned "Never Before" was chosen as the first song to air on the ABC's youth radio station, Triple J, when it switched to the FM band that year. Supporting the release of East, Cold Chisel embarked on the Youth in Asia Tour from May 1980, which took its name from a lyric in "Star Hotel". In late 1980, the Aboriginal rock reggae band No Fixed Address supported the band on its "Summer Offensive" tour to the east coast, with the final concert on 20 December at the University of Adelaide. 
1981-1982: Swingshift to Circus Animals The Youth in Asia Tour performances were used for Cold Chisel's double live album, Swingshift (March 1981). Nimmervoll declared, "[the group] rammed what they were all about with [this album]." In March 1981 the band won seven categories: Best Australian Album, Most Outstanding Achievement, Best Recorded Song Writer, Best Australian Producer, Best Australian Record Cover Design, Most Popular Group and Most Popular Record, at the Countdown/TV Week pop music awards for 1980. They attended the ceremony at the Sydney Entertainment Centre and were due to perform: however, as a protest against a TV magazine's involvement, they refused to accept any trophy and finished the night with, "My Turn to Cry". After one verse and chorus, they smashed up the set and left the stage. Swingshift debuted at No 1, which demonstrated their status as the highest selling local act. With a slightly different track-listing, East, was issued in the United States and they undertook their first US tour in mid-1981. Ahead of the tour they had issued, "My Baby", for the North America market and it reached the top 40 on Billboards chart, Mainstream Rock. They were generally popular as a live act there, but the US branch of their label did little to promote the album. According to Barnes' biographer, Toby Creswell, at one point they were ushered into an office to listen to the US master tape to find it had substantial hiss and other ambient noise, which made it almost unable to be released. Notwithstanding, the album reached the lower region of the Billboard 200 in July. The group were booed off stage after a lacklustre performance in Dayton, Ohio in May 1981 opening for Ted Nugent. Other support slots they took were for Cheap Trick, Joe Walsh, Heart and the Marshall Tucker Band. European audiences were more accepting of the Australian band and they developed a fan base in Germany. In August 1981 Cold Chisel began work on a fourth studio album, Circus Animals (March 1982), again with Opitz producing. To launch the album, the band performed under a circus tent at Wentworth Park in Sydney and toured heavily once more, including a show in Darwin that attracted more than 10 percent of the city's population. It peaked at No. 1 in both Australia and on the Official New Zealand Music Chart. In October 2010 it was listed at No. 4 in the book, 100 Best Australian Albums, by music journalists, Creswell, Craig Mathieson and John O'Donnell. Its lead single, "You Got Nothing I Want" (November 1981), is an aggressive Barnes-penned hard rock track, which attacked the US industry for its handling of the band on their recent tour. The song caused problems for Barnes when he later attempted to break into the US market as a solo performer; senior music executives there continued to hold it against him. Like its predecessor, Circus Animals, contained songs of contrasting styles, with harder-edged tracks like "Bow River" and "Hound Dog" in place beside more expansive ballads such as the next two singles, "Forever Now" (March 1982) and "When the War Is Over" (August), both are written by Prestwich. "Forever Now" is their highest charting single in two Australasian markets: No. 4 on the Kent Music Report Singles Chart and No. 2 on the Official New Zealand Music Chart. 
"When the War Is Over" is the most covered Cold Chisel track – Uriah Heep included a version on their 1989 album, Raging Silence; John Farnham recorded it while he and Prestwich were members of Little River Band in the mid-1980s and again for his 1990 solo album, Age of Reason. The song was also a No. 1 hit for former Australian Idol contestant, Cosima De Vito, in 2004 and was performed by Bobby Flynn during that show's 2006 season. "Forever Now" was covered, as a country waltz, by Australian band, the Reels. 1983: Break-up Success outside Australasia continued to elude Cold Chisel and friction occurred between the members. According to McFarlane, "[the] failed attempts to break into the American market represented a major blow... [their] earthy, high-energy rock was overlooked." In early 1983 they toured Germany but the shows went so badly that in the middle of the tour Walker up-ended his keyboard and stormed off stage during one show. After returning to Australia, Prestwich was fired and replaced by Ray Arnott, formerly of the 1970s progressive rockers, Spectrum, and country rockers, the Dingoes. After this, Barnes requested a large advance from management. Now married with a young child, reckless spending had left him almost broke. His request was refused as there was a standing arrangement that any advance to one band member had to be paid to all the others. After a meeting on 17 August during which Barnes quit the band it was decided that the group would split up. A farewell concert series, The Last Stand, was planned and a final studio album, Twentieth Century (February 1984), was recorded. Prestwich returned for that tour, which began in October. Before the last four scheduled shows in Sydney, Barnes lost his voice and those dates were postponed to mid-December. The band's final performances were at the Sydney Entertainment Centre from 12 to 15 December 1983 – ten years since their first live appearance as Cold Chisel in Adelaide – the group then disbanded. The Sydney shows formed the basis of a concert film, The Last Stand (July 1984), which became the biggest-selling cinema-released concert documentary by an Australian band to that time. Other recordings from the tour were used on a live album, The Barking Spiders Live: 1983 (1984), the title is a reference to the pseudonym the group occasionally used when playing warm-up shows before tours. Some were also used as b-sides for a three-CD singles package, Three Big XXX Hits, issued ahead of the release of their 1994 compilation album, Teenage Love. During breaks in the tour, Twentieth Century, was recorded. It was a fragmentary process, spread across various studios and sessions as the individual members often refused to work together – both Arnott (on ten tracks) and Prestwich (on three tracks) are recorded as drummers. The album reached No. 1 and provided the singles, "Saturday Night" (March 1984) and "Flame Trees" (August), both of which remain radio staples. "Flame Trees", co-written by Prestwich and Walker, took its title from the BBC series, The Flame Trees of Thika, although it was lyrically inspired by the organist's hometown of Grafton. Barnes later recorded an acoustic version for his 1993 solo album, Flesh and Wood, and it was also covered by Sarah Blasko in 2006. 1984-1996: Aftermath and ARIA Hall of Fame Barnes launched his solo career in January 1984, which has provided nine Australian number-one studio albums and an array of hit singles, including, "Too Much Ain't Enough Love", which peaked at No. 1. 
He has recorded with INXS, Tina Turner, Joe Cocker and John Farnham to become one of the country's most popular male rock singers. Prestwich joined Little River Band in 1984 and appeared on the albums, Playing to Win and No Reins, before departing in 1986 to join Farnham's touring band. Moss, Small and Walker took extended breaks from music. Small maintained a low profile as a member in a variety of minor groups, Pound, the Earls of Duke and the Outsiders. Walker formed Catfish in 1988, ostensibly a solo band with a variable membership, which included Moss, Charlie Owen and Dave Blight at times. Catfish's recordings during this phase attracted little commercial success. During 1988 and 1989 Walker wrote several tracks for Moss including the singles, "Tucker's Daughter" (November 1988) and "Telephone Booth" (June 1989), which appeared on Moss' debut solo album, Matchbook (August 1989). Both the album and "Tucker's Daughter" peaked at No. 1. Moss won five trophies at the ARIA Music Awards of 1990. His other solo albums met with less chart or award success. Throughout the 1980s and most of the 1990s, Cold Chisel were courted to re-form but refused, at one point reportedly turning down a $5 million offer to play a sole show in each of the major Australian state capitals. Moss and Walker often collaborated on projects, neither worked with Barnes until Walker wrote, "Stone Cold", for the singer's sixth studio album, Heat (October 1993). The pair recorded an acoustic version for Flesh and Wood (December). Thanks primarily to continued radio airplay and Barnes' solo success, Cold Chisel's legacy remained solidly intact. By the early 1990s the group had surpassed 3 million album sales, most sold since 1983. The 1991 compilation album, Chisel, was re-issued and re-packaged several times, once with the long-deleted 1978 EP as a bonus disc and a second time in 2001 as a double album. The Last Stand soundtrack album was finally released in 1992. In 1994 a complete album of previously unreleased demo and rare live recordings, Teenage Love, was released, which provided three singles. 1997–2010: Reunited Cold Chisel reunited in October 1997, with the line-up of Barnes, Moss, Prestwich, Small and Walker. They recorded their sixth studio album, The Last Wave of Summer (October 1998), from February to July with the band members co-producing. They supported it with a national tour. The album debuted at number one on the ARIA Albums Chart. In 2003 they re-grouped for the Ringside Tour and in 2005 again to perform at a benefit for the victims of the Boxing Day tsunami at the Myer Music Bowl in Melbourne. Founding bass guitarist, Les Kaczmarek, died of liver failure on
the singles, "Saturday Night" (March 1984) and "Flame Trees" (August), both of which remain radio staples. "Flame Trees", co-written by Prestwich and Walker, took its title from the BBC series, The Flame Trees of Thika, although it was lyrically inspired by the organist's hometown of Grafton. Barnes later recorded an acoustic version for his 1993 solo album, Flesh and Wood, and it was also covered by Sarah Blasko in 2006. 1984-1996: Aftermath and ARIA Hall of Fame Barnes launched his solo career in January 1984, which has provided nine Australian number-one studio albums and an array of hit singles, including, "Too Much Ain't Enough Love", which peaked at No. 1. He has recorded with INXS, Tina Turner, Joe Cocker and John Farnham to become one of the country's most popular male rock singers. Prestwich joined Little River Band in 1984 and appeared on the albums, Playing to Win and No Reins, before departing in 1986 to join Farnham's touring band. Moss, Small and Walker took extended breaks from music. Small maintained a low profile as a member in a variety of minor groups, Pound, the Earls of Duke and the Outsiders. Walker formed Catfish in 1988, ostensibly a solo band with a variable membership, which included Moss, Charlie Owen and Dave Blight at times. Catfish's recordings during this phase attracted little commercial success. During 1988 and 1989 Walker wrote several tracks for Moss including the singles, "Tucker's Daughter" (November 1988) and "Telephone Booth" (June 1989), which appeared on Moss' debut solo album, Matchbook (August 1989). Both the album and "Tucker's Daughter" peaked at No. 1. Moss won five trophies at the ARIA Music Awards of 1990. His other solo albums met with less chart or award success. Throughout the 1980s and most of the 1990s, Cold Chisel were courted to re-form but refused, at one point reportedly turning down a $5 million offer to play a sole show in each of the major Australian state capitals. Moss and Walker often collaborated on projects, neither worked with Barnes until Walker wrote, "Stone Cold", for the singer's sixth studio album, Heat (October 1993). The pair recorded an acoustic version for Flesh and Wood (December). Thanks primarily to continued radio airplay and Barnes' solo success, Cold Chisel's legacy remained solidly intact. By the early 1990s the group had surpassed 3 million album sales, most sold since 1983. The 1991 compilation album, Chisel, was re-issued and re-packaged several times, once with the long-deleted 1978 EP as a bonus disc and a second time in 2001 as a double album. The Last Stand soundtrack album was finally released in 1992. In 1994 a complete album of previously unreleased demo and rare live recordings, Teenage Love, was released, which provided three singles. 1997–2010: Reunited Cold Chisel reunited in October 1997, with the line-up of Barnes, Moss, Prestwich, Small and Walker. They recorded their sixth studio album, The Last Wave of Summer (October 1998), from February to July with the band members co-producing. They supported it with a national tour. The album debuted at number one on the ARIA Albums Chart. In 2003 they re-grouped for the Ringside Tour and in 2005 again to perform at a benefit for the victims of the Boxing Day tsunami at the Myer Music Bowl in Melbourne. Founding bass guitarist, Les Kaczmarek, died of liver failure on 5 December 2008, aged 53. Walker described him as, "a wonderful and beguiling man in every respect." 
On 10 September 2009 Cold Chisel announced they would reform for a one-off performance at the Sydney 500 V8 Supercars event on 5 December. The band performed at Stadium Australia to the largest crowd of its career, with more than 45,000 fans in attendance. They played a single live show in 2010: at the Deniliquin ute muster in October. In December Moss confirmed that Cold Chisel were working on new material for an album. 2011–2019: Death of Steve Prestwich & The Perfect Crime In January 2011 Steve Prestwich was diagnosed with a brain tumour; he underwent surgery on 14 January but never regained consciousness and died two days later, aged 56. All six of Cold Chisel's studio albums were re-released in digital and CD formats in mid-2011. Three digital-only albums were released, Never Before, Besides and Covered, as well as a new compilation album, The Best of Cold Chisel: All for You, which peaked at number 2 on the ARIA Charts. The thirty-date Light the Nitro Tour was announced in July along with the news that former Divinyls and Catfish drummer, Charley Drayton, had replaced Prestwich. Most shows on the tour sold out within days and new dates were later announced for early 2012. No Plans, their seventh studio album, was released in April 2012 with Kevin Shirley producing; it peaked at No. 2. The Australian's Stephen Fitzpatrick rated it four-and-a-half out of five and found that its lead track, "All for You", "speaks of redemption; of a man's ability to make something of himself through love." The track "I Got Things to Do" was written and sung by Prestwich; Fitzpatrick described it as "the bittersweet finale", with "a vocal track the other band members did not know existed until after his death." Midway through 2012 they had a short UK tour and played with Soundgarden and Mars Volta at Hard Rock Calling in London's Hyde Park. The group's eighth studio album, The Perfect Crime, appeared in October 2015, again with Shirley producing; it peaked at No. 2. Martin Boulton of The Sydney Morning Herald rated it at four-out-of-five stars and explained that "[they] work incredibly hard, not take any shortcuts and play the hell out of the songs", while the album "delves further back to their rock'n'roll roots with chief songwriter [Walker] carving up the keys, guitarist [Moss] both gritty and sublime and the [Small/Drayton] engine room firing on every cylinder. Barnes' voice sounds worn, wonderful and better than ever." The band's latest album, Blood Moon, was released in December 2019. The album debuted at number one on the ARIA Albums Chart, the band's fifth to reach the top. Half of the songs had lyrics written by Barnes and music by Walker, a new combination for Cold Chisel, with Barnes noting his increased confidence after writing two autobiographies. Musical style and lyrical themes McFarlane described Cold Chisel's early career in his Encyclopedia of Australian Rock and Pop (1999): "after ten years on the road, [they] called it a day. Not that the band split up for want of success; by that stage [they] had built up a reputation previously uncharted in Australian rock history. By virtue of the profound effect the band's music had on the many thousands of fans who witnessed its awesome power, Cold Chisel remains one of Australia's best-loved groups. As one of the best live bands of its day, [they] fused a combination of rockabilly, hard rock and rough-house soul'n'blues that was defiantly Australian in outlook." 
The Canberra Times' Luis Feliu, in July 1978, observed, "This is not just another Australian rock band, no mediocrity here, and their honest, hard-working approach looks like paying off", adding that "the range of styles tackled and done convincingly, from hard rock to blues, boogie, rhythm and blues, is where the appeal lies." Influences from blues and early rock'n'roll were broadly apparent, fostered by Moss, Barnes and Walker's love of those styles, while Small and Prestwich contributed strong pop sensibilities. This allowed volatile rock songs like "You Got Nothing I Want" and "Merry-Go-Round" to stand beside thoughtful ballads like "Choirgirl", pop-flavoured love songs like "My Baby" and caustic political statements like "Star Hotel", an attack on the late 1970s government of Malcolm Fraser inspired by the Star Hotel riot in Newcastle. The songs were not overtly political but rather observations of everyday life within Australian society and culture, which the members, with their various backgrounds (Moss was from Alice Springs, Walker grew up in rural New South Wales, Barnes and Prestwich were working-class immigrants from the UK), were well placed to provide. Cold Chisel's songs were about distinctly Australian experiences, a factor often cited as a major reason for the band's lack of international appeal. "Saturday Night" and "Breakfast at Sweethearts" were observations of the urban experience of Sydney's Kings Cross district, where Walker lived for many years. "Misfits", which featured on the b-side to "My Baby", was about homeless kids in the suburbs surrounding Sydney. Songs like "Shipping Steel" and "Standing on the Outside" were working-class anthems, and many others featured characters trapped in mundane, everyday existences, yearning for the good times of the past ("Flame Trees") or for something better from life ("Bow River"). Reputation and recognition Alongside contemporaries like The Angels and Midnight Oil, Cold Chisel was renowned as one of the most dynamic live acts of their day, and from early in their career concerts routinely became sell-out events. But the band was also famous for its wild lifestyle, particularly the hard-drinking Barnes, who played his role as one of the wild men of Australian rock to the hilt, never seen on stage without at least one bottle of vodka and often so drunk he could barely stand upright. Despite this, by 1982 he was a devoted family man who refused to tour without his wife and daughter. All the other band members were also settled or married; Ian Moss had a long-term relationship with the actress Megan Williams (she even sang on Twentieth Century), whose own public persona could hardly have been more different. It was the band's public image that often had them compared less favourably with other important acts like Midnight Oil, whose music and politics (while rather more overt) were often similar but whose image and reputation were more clean-cut. Cold Chisel remained hugely popular, however, and by the mid-1990s they continued to sell records at such a consistent rate that they became the first Australian band to achieve higher sales after their split than during their active years. At the ARIA Music Awards of 1993 they were inducted into the Hall of Fame. While repackages and compilations accounted for much of these sales, two of the singles from 1994's Teenage Love were top-ten hits. When the group finally reformed in 1998 the resultant album was also a major hit and the follow-up tour sold out almost immediately. 
In 2001 the Australasian Performing Right Association (APRA) listed their single, "Khe Sanh" (May 1978), at No. 8 on its list of the all-time best Australian songs. Cold Chisel were one of the first Australian acts to have become the subject of a major tribute album. In 2007, Standing on the Outside: The Songs of Cold Chisel was released, featuring a collection of the band's songs as performed by artists including The Living End, Evermore, Something for Kate, Pete Murray, Katie Noonan, You Am I, Paul Kelly, Alex Lloyd, Thirsty Merc and Ben Lee, many of whom were children when Cold Chisel first disbanded and some, like the members of Evermore, had not even been born. Circus Animals was listed at No. 4 in the book, 100 Best Australian Albums (October 2010), while East appeared at No. 53. They won The Ted Albert Award for Outstanding Services to Australian Music at the APRA Music Awards of 2016. In March 2021, a previously unnamed lane off Burnett Street (off Currie Street) in Adelaide CBD, near where the band had its first residency in the 1970s, was officially named Cold Chisel Lane. On one of its walls, there is a mural by Adelaide artist James Dodd, inspired by the band. Members Current members Ian Moss – lead guitar, backing and lead vocals (1973–1983, 1997–1999, 2003, 2009–present) Don Walker – keyboards, backing vocals (1973–1983, 1997–1999, 2003, 2009–present) Jimmy Barnes – lead and backing vocals, occasional guitar (1973–1975, 1976–1977, 1978–1984, 1997–1999, 2003, 2009–present) Phil Small – bass guitar, backing vocals (1975–1984, 1997–1999, 2003, 2009–present) Charley Drayton – drums, backing vocals (2011–present) Former members Steve Prestwich – drums, backing vocals (1973–1983, 1997–1999, 2003, 2009–2011; died 2011) Ted Broniecki – keyboards (1973) Les Kaczmarek – bass guitar (1973–1975; died 2008) John Swan – percussion, backing vocals (1975) Ray Arnott – drums (1983–1984) Additional musicians Dave Blight – harmonica Billy Rodgers – saxophone Jimmy Sloggett – saxophone Andy Bickers – saxophone Renée Geyer – backing vocals Venetta Fields – backing vocals Megan Williams – backing vocals Peter Walker – acoustic guitar Joe Camilleri – saxophone Wilbur Wilde – saxophone Discography Cold Chisel (1978) Breakfast at Sweethearts (1979) East (1980) Circus Animals (1982) Twentieth Century (1984) The Last Wave of Summer (1998) No Plans (2012) The Perfect Crime (2015) Blood Moon (2019) Awards and nominations APRA Awards The APRA Awards are several award ceremonies run in Australia by the Australasian Performing Right Association (APRA) to recognise composing and song writing skills, sales and airplay performance by its members annually. In 2021 "Getting the Band Back Together" was nominated for Most Performed Rock Work. ARIA Music Awards The ARIA Music Awards is an annual awards ceremony that recognises excellence, innovation, and achievement across all genres of Australian music. They commenced in 1987. Cold Chisel was inducted into the Hall of Fame in 1993. The band's ARIA nominations and wins have included Chisel for Highest Selling Album (1992); induction into the ARIA Hall of Fame (1993); The Last Wave of Summer for Best Rock Album and Highest Selling Album (1999); No Plans for Best Rock Album and Best Group, and the Light the Nitro Tour for Best Australian Live Act (2012); and Blood Moon for Best Rock Album, Kevin Shirley for Producer of the Year for Blood Moon, and the Blood Moon Tour for Best Australian Live Act (2020).
growing shortage of horses and mules, which hurt the Southern economy and the war effort. The South lost half of its 2.5 million horses and mules; many farmers ended the war with none left. Army horses were used up by hard work, malnourishment, disease and battle wounds; they had a life expectancy of about seven months. Financial instruments Both the individual Confederate states and later the Confederate government printed Confederate States of America dollars as paper currency in various denominations, with a total face value of $1.5 billion. Much of it was signed by Treasurer Edward C. Elmore. Inflation became rampant as the paper money depreciated and eventually became worthless. The state governments and some localities printed their own paper money, adding to the runaway inflation. Many bills still exist, although in recent years counterfeit copies have proliferated. The Confederate government initially wanted to finance its war mostly through tariffs on imports, export taxes, and voluntary donations of gold. After the spontaneous imposition of an embargo on cotton sales to Europe in 1861, these sources of revenue dried up and the Confederacy increasingly turned to issuing debt and printing money to pay for war expenses. The Confederate States politicians were worried about angering the general population with hard taxes. A tax increase might disillusion many Southerners, so the Confederacy resorted to printing more money. As a result, inflation increased and remained a problem for the southern states throughout the rest of the war. By April 1863, for example, the cost of flour in Richmond had risen to $100 a barrel and housewives were rioting. The Confederate government took over the three national mints in its territory: the Charlotte Mint in North Carolina, the Dahlonega Mint in Georgia, and the New Orleans Mint in Louisiana. During 1861 all of these facilities produced small amounts of gold coinage, and the latter half dollars as well. Since the mints used the current dies on hand, all appear to be U.S. issues. However, by comparing slight differences in the dies specialists can distinguish 1861-O half dollars that were minted either under the authority of the U.S. government, the State of Louisiana, or finally the Confederate States. Unlike the gold coins, this issue was produced in significant numbers (over 2.5 million) and is inexpensive in lower grades, although fakes have been made for sale to the public. However, before the New Orleans Mint ceased operation in May, 1861, the Confederate government used its own reverse design to strike four half dollars. This made one of the great rarities of American numismatics. A lack of silver and gold precluded further coinage. The Confederacy apparently also experimented with issuing one cent coins, although only 12 were produced by a jeweler in Philadelphia, who was afraid to send them to the South. Like the half dollars, copies were later made as souvenirs. US coinage was hoarded and did not have any general circulation. U.S. coinage was admitted as legal tender up to $10, as were British sovereigns, French Napoleons and Spanish and Mexican doubloons at a fixed rate of exchange. Confederate money was paper and postage stamps. Food shortages and riots By mid-1861, the Union naval blockade virtually shut down the export of cotton and the import of manufactured goods. Food that formerly came overland was cut off. As women were the ones who remained at home, they had to make do with the lack of food and supplies. 
They cut back on purchases, used old materials, and planted more flax and peas to provide clothing and food. They used ersatz substitutes when possible, but there was no real coffee, only okra and chicory substitutes. The households were severely hurt by inflation in the cost of everyday items like flour, and the shortages of food, fodder for the animals, and medical supplies for the wounded. State governments requested that planters grow less cotton and more food, but most refused. When cotton prices soared in Europe, expectations were that Europe would soon intervene to break the blockade and make them rich, but Europe remained neutral. The Georgia legislature imposed cotton quotas, making it a crime to grow an excess. But food shortages only worsened, especially in the towns. The overall decline in food supplies, made worse by the inadequate transportation system, led to serious shortages and high prices in urban areas. When bacon reached a dollar a pound in 1863, the poor women of Richmond, Atlanta and many other cities began to riot; they broke into shops and warehouses to seize food, as they were angry at ineffective state relief efforts, speculators, and merchants. As wives and widows of soldiers, they were hurt by the inadequate welfare system. Devastation by 1865 By the end of the war deterioration of the Southern infrastructure was widespread. The number of civilian deaths is unknown. Every Confederate state was affected, but most of the war was fought in Virginia and Tennessee, while Texas and Florida saw the least military action. Much of the damage was caused by direct military action, but most was caused by lack of repairs and upkeep, and by deliberately using up resources. Historians have recently estimated how much of the devastation was caused by military action. Paul Paskoff calculates that Union military operations were conducted in 56% of 645 counties in nine Confederate states (excluding Texas and Florida). These counties contained 63% of the 1860 white population and 64% of the slaves. By the time the fighting took place, undoubtedly some people had fled to safer areas, so the exact population exposed to war is unknown. The eleven Confederate States in the 1860 United States Census had 297 towns and cities with 835,000 people; of these 162 with 681,000 people were at one point occupied by Union forces. Eleven were destroyed or severely damaged by war action, including Atlanta (with an 1860 population of 9,600), Charleston, Columbia, and Richmond (with prewar populations of 40,500, 8,100, and 37,900, respectively); the eleven contained 115,900 people in the 1860 census, or 14% of the urban South. Historians have not estimated what their actual population was when Union forces arrived. The number of people (as of 1860) who lived in the destroyed towns represented just over 1% of the Confederacy's 1860 population. In addition, 45 court houses were burned (out of 830). The South's agriculture was not highly mechanized. The value of farm implements and machinery in the 1860 Census was $81 million; by 1870, there was 40% less, worth just $48 million. Many old tools had broken through heavy use; new tools were rarely available; even repairs were difficult. The economic losses affected everyone. Banks and insurance companies were mostly bankrupt. Confederate currency and bonds were worthless. The billions of dollars invested in slaves vanished. Most debts were also left behind. 
Most farms were intact but most had lost their horses, mules and cattle; fences and barns were in disrepair. Paskoff shows the loss of farm infrastructure was about the same whether or not fighting took place nearby. The loss of infrastructure and productive capacity meant that rural widows throughout the region faced not only the absence of able-bodied men, but a depleted stock of material resources that they could manage and operate themselves. During four years of warfare, disruption, and blockades, the South used up about half its capital stock. The North, by contrast, absorbed its material losses so effortlessly that it appeared richer at the end of the war than at the beginning. The rebuilding took years and was hindered by the low price of cotton after the war. Outside investment was essential, especially in railroads. One historian has summarized the collapse of the transportation infrastructure needed for economic recovery: Effect on women and families About 250,000 men never came home, some 30 percent of all white men aged 18 to 40 (as counted in 1860). Widows who were overwhelmed often abandoned their farms and merged into the households of relatives, or even became refugees living in camps with high rates of disease and death. In the Old South, being an "old maid" was something of an embarrassment to the woman and her family, but after the war, it became almost a norm. Some women welcomed the freedom of not having to marry. Divorce, while never fully accepted, became more common. The concept of the "New Woman" emerged – she was self-sufficient and independent, and stood in sharp contrast to the "Southern Belle" of antebellum lore. National flags The first official flag of the Confederate States of America – called the "Stars and Bars" – originally had seven stars, representing the first seven states that initially formed the Confederacy. As more states joined, more stars were added, until the total was 13 (two stars were added for the divided states of Kentucky and Missouri). During the First Battle of Bull Run, (First Manassas) it sometimes proved difficult to distinguish the Stars and Bars from the Union flag. To rectify the situation, a separate "Battle Flag" was designed for use by troops in the field. Also known as the "Southern Cross", many variations sprang from the original square configuration. Although it was never officially adopted by the Confederate government, the popularity of the Southern Cross among both soldiers and the civilian population was a primary reason why it was made the main color feature when a new national flag was adopted in 1863. This new standard – known as the "Stainless Banner" – consisted of a lengthened white field area with a Battle Flag canton. This flag too had its problems when used in military operations as, on a windless day, it could easily be mistaken for a flag of truce or surrender. Thus, in 1865, a modified version of the Stainless Banner was adopted. This final national flag of the Confederacy kept the Battle Flag canton, but shortened the white field and added a vertical red bar to the fly end. Because of its depiction in the 20th-century and popular media, many people consider the rectangular battle flag with the dark blue bars as being synonymous with "the Confederate Flag", but this flag was never adopted as a Confederate national flag. The "Confederate Flag" has a color scheme similar to that of the most common Battle Flag design, but is rectangular, not square. 
The "Confederate Flag" is a highly recognizable symbol of the South in the United States today, and continues to be a controversial icon. Geography Region and climate The Confederate States of America claimed a total of of coastline, thus a large part of its territory lay on the seacoast with level and often sandy or marshy ground. Most of the interior portion consisted of arable farmland, though much was also hilly and mountainous, and the far western territories were deserts. The lower reaches of the Mississippi River bisected the country, with the western half often referred to as the Trans-Mississippi. The highest point (excluding Arizona and New Mexico) was Guadalupe Peak in Texas at . Climate Much of the area claimed by the Confederate States of America had a humid subtropical climate with mild winters and long, hot, humid summers. The climate and terrain varied from vast swamps (such as those in Florida and Louisiana) to semi-arid steppes and arid deserts west of longitude 100 degrees west. The subtropical climate made winters mild but allowed infectious diseases to flourish. Consequently, on both sides more soldiers died from disease than were killed in combat, a fact hardly atypical of pre-World War I conflicts. Demographics Population The United States Census of 1860 gives a picture of the overall 1860 population for the areas that had joined the Confederacy. Note that the population numbers exclude non-assimilated Indian tribes. In 1860, the areas that later formed the eleven Confederate states (and including the future West Virginia) had 132,760 (1.46%) free blacks. Males made up 49.2% of the total population and females 50.8% (whites: 48.60% male, 51.40% female; slaves: 50.15% male, 49.85% female; free blacks: 47.43% male, 52.57% female). Rural and urban population The CSA was overwhelmingly rural. Few towns had populations of more than 1,000 – the typical county seat had a population of fewer than 500. Cities were rare; of the twenty largest U.S. cities in the 1860 census, only New Orleans lay in Confederate territory – and the Union captured New Orleans in 1862. Only 13 Confederate-controlled cities ranked among the top 100 U.S. cities in 1860, most of them ports whose economic activities vanished or suffered severely in the Union blockade. The population of Richmond swelled after it became the Confederate capital, reaching an estimated 128,000 in 1864. Other Southern cities in the border slave-holding states such as Baltimore, Washington, D.C., Wheeling, Alexandria, Louisville, and St. Louis never came under the control of the Confederate government. The cities of the Confederacy included most prominently in order of size of population: (See also Atlanta in the Civil War, Charleston, South Carolina, in the Civil War, Nashville in the Civil War, New Orleans in the Civil War, Wilmington, North Carolina, in the American Civil War, and Richmond in the Civil War). Religion The CSA was overwhelmingly Protestant. Both free and enslaved populations identified with evangelical Protestantism. Baptists and Methodists together formed majorities of both the white and the slave population (see Black church). Freedom of religion and separation of church and state were fully ensured by Confederate laws. Church attendance was very high and chaplains played a major role in the Army. Most large denominations experienced a North–South split in the prewar era on the issue of slavery. The creation of a new country necessitated independent structures. 
For example, the Presbyterian Church in the United States split, with much of the new leadership provided by Joseph Ruggles Wilson (father of President Woodrow Wilson). In 1861, he organized the meeting that formed the General Assembly of the Southern Presbyterian Church and served as its chief executive for 37 years. Baptists and Methodists both broke off from their Northern coreligionists over the slavery issue, forming the Southern Baptist Convention and the Methodist Episcopal Church, South, respectively. Elites in the southeast favored the Protestant Episcopal Church in the Confederate States of America, which had reluctantly split from the Episcopal Church in 1861. Other elites were Presbyterians belonging to the 1861-founded Presbyterian Church in the United States. Catholics included an Irish working class element in coastal cities and an old French element in southern Louisiana. Other insignificant and scattered religious populations included Lutherans, the Holiness movement, other Reformed, other Christian fundamentalists, the Stone-Campbell Restoration Movement, the Churches of Christ, the Latter Day Saint movement, Adventists, Muslims, Jews, Native American animists, deists and irreligious people. The southern churches met the shortage of Army chaplains by sending missionaries. The Southern Baptists started in 1862 and had a total of 78 missionaries. Presbyterians were even more active with 112 missionaries in January 1865. Other missionaries were funded and supported by the Episcopalians, Methodists, and Lutherans. One result was wave after wave of revivals in the Army. Military leaders Military leaders of the Confederacy (with their state or country of birth and highest rank) included: Robert E. Lee (Virginia) – General & General in Chief P. G. T. Beauregard (Louisiana) – General Braxton Bragg (North Carolina) – General Samuel Cooper (New York) – General Albert Sidney Johnston (Kentucky) – General Joseph E. Johnston (Virginia) – General Edmund Kirby Smith (Florida) – General Simon Bolivar Buckner, Sr. (Kentucky) – Lieutenant-General Jubal Early (Virginia) – Lieutenant-General Richard S. Ewell (Virginia) – Lieutenant-General Nathan Bedford Forrest (Tennessee) – Lieutenant-General Wade Hampton III (South Carolina) – Lieutenant-General William J. Hardee (Georgia) – Lieutenant-General A. P. Hill (Virginia) – Lieutenant-General Theophilus H. Holmes (North Carolina) – Lieutenant-General John Bell Hood (Kentucky) – Lieutenant-General (temporary General) Thomas J. "Stonewall" Jackson (Virginia) – Lieutenant-General Stephen D. Lee (South Carolina) – Lieutenant-General James Longstreet (South Carolina) – Lieutenant-General John C. Pemberton (Pennsylvania) – Lieutenant-General Leonidas Polk (North Carolina) – Lieutenant-General Alexander P. Stewart (North Carolina) – Lieutenant-General Richard Taylor (Kentucky) – Lieutenant-General (son of U.S. President Zachary Taylor) Joseph Wheeler (Georgia) – Lieutenant-General John C. Breckinridge (Kentucky) – Major-General & Secretary of War Richard H. Anderson (South Carolina) – Major-General (temporary Lieutenant-General) Patrick Cleburne (Arkansas) – Major-General John Brown Gordon (Georgia) – Major-General Henry Heth (Virginia) – Major-General Daniel Harvey Hill (South Carolina) – Major-General Edward Johnson (Virginia) – Major-General Joseph B. 
Kershaw (South Carolina) – Major-General Fitzhugh Lee (Virginia) – Major-General George Washington Custis Lee (Virginia) – Major-General William Henry Fitzhugh Lee (Virginia) – Major-General William Mahone (Virginia) – Major-General George Pickett (Virginia) – Major-General Camillus J. Polignac (France) – Major-General Sterling Price (Missouri) – Major-General Stephen Dodson Ramseur (North Carolina) – Major-General Thomas L. Rosser (Virginia) – Major-General J. E. B. Stuart (Virginia) – Major-General Earl Van Dorn (Mississippi) – Major-General John A. Wharton (Tennessee) – Major-General Edward Porter Alexander (Georgia) – Brigadier-General Francis Marion Cockrell (Missouri) – Brigadier-General Clement A. Evans (Georgia) – Brigadier-General John Hunt Morgan (Kentucky) – Brigadier-General William N. Pendleton (Virginia) – Brigadier-General Stand Watie (Georgia) – Brigadier-General (last to surrender) Lawrence Sullivan Ross (Texas) – Brigadier-General John S. Mosby, the "Grey Ghost of the Confederacy" (Virginia) – Colonel Franklin Buchanan (Maryland) – Admiral Raphael Semmes (Maryland) – Rear Admiral See also American Civil War prison camps Cabinet of the Confederate States of America Commemoration of the American Civil War Commemoration of the American Civil War on postage stamps Confederate colonies Confederate Patent Office Confederate war finance C.S.A.: The Confederate States of America Golden Circle (proposed country) History of the Southern United States List of Confederate arms manufacturers List of Confederate arsenals and armories List of Confederate monuments and memorials List of treaties of the Confederate States of America List of historical separatist movements List of civil wars National Civil War Naval Museum
"The Women Left Behind: Transformation of the Southern Belle, 1840–1880" (2000) Historian 62#4 pp 759–778. Cashin, Joan E. "Torn Bonnets and Stolen Silks: Fashion, Gender, Race, and Danger in the Wartime South." Civil War History 61#4 (2015): 338–361. online Chesson, Michael B. "Harlots or Heroines? A New Look at the Richmond Bread Riot." Virginia Magazine of History and Biography 92#2 (1984): 131–175. in JSTOR Clinton, Catherine, and Silber, Nina, eds. Divided Houses: Gender and the Civil War (1992) Davis, William C. and James I. Robertson Jr., eds. Virginia at War, 1865 (2012). Elliot, Jane Evans. Diary of Mrs. Jane Evans Elliot, 1837–1882 (1908) Faust, Drew Gilpin. Mothers of Invention: Women of the Slaveholding South in the American Civil War (1996) Faust, Drew Gilpin. This Republic of Suffering: Death and the American Civil War (2008) Frank, Lisa Tendrich, ed. Women in the American Civil War (2008) Frank, Lisa Tendrich. The Civilian War: Confederate Women and Union Soldiers during Sherman's March (LSU Press, 2015). Gleeson. David T. The Green and the Gray: The Irish in the Confederate States of America (U of North Carolina Press, 2013); online review Glymph, Thavolia. The Women's Fight: The Civil War's Battles for Home, Freedom, and Nation (UNC Press, 2019). Hilde, Libra Rose. Worth a Dozen Men: Women and Nursing in the Civil War South (U of Virginia Press, 2012). Levine, Bruce. The Fall of the House of Dixie: The Civil War and the Social Revolution That Transformed the South (2013) Lowry, Thomas P. The Story the Soldiers Wouldn't Tell: Sex in the Civil War (Stackpole Books, 1994). Massey, Mary. Bonnet Brigades: American Women and the Civil War (1966), excellent overview North and South; reissued as Women in the Civil War (1994) "Bonnet Brigades at Fifty: Reflections on Mary Elizabeth Massey and Gender in Civil War History," Civil War History (2015) 61#4 pp 400–444. Massey, Mary Elizabeth. Refugee Life in the Confederacy, (1964) Rable, George C. Civil Wars: Women and the Crisis of Southern Nationalism (1989) Slap, Andrew L. and Frank Towers, eds. Confederate Cities: The Urban South during the Civil War Era (U of Chicago Press, 2015). 302 pp. Stokes, Karen. South Carolina Civilians in Sherman's Path: Stories of Courage Amid Civil War Destruction (The History Press, 2012). Strong, Melissa J. "'The Finest Kind of Lady': Hegemonic Femininity in American Women’s Civil War Narratives." Women's Studies 46.1 (2017): 1–21 online. Swanson, David A., and Richard R. Verdugo. "The Civil War’s Demographic Impact on White Males in the Eleven Confederate States: An Analysis by State and Selected Age Groups." Journal of Political & Military Sociology 46.1 (2019): 1–26. Whites, LeeAnn. The Civil War as a Crisis in Gender: Augusta, Georgia, 1860–1890 (1995) Wiley, Bell Irwin Confederate Women (1975) online Wiley, Bell Irwin The Plain People of the Confederacy (1944) online Woodward, C. Vann, ed. Mary Chesnut's Civil War, 1981, detailed diary; primary source African Americans Andrews, William L. Slavery and Class in the American South: A Generation of Slave Narrative Testimony, 1840–1865 (Oxford UP, 2019). Ash, Stephen V. The Black Experience in the Civil War South (2010). Bartek, James M. "The Rhetoric of Destruction: Racial Identity and Noncombatant Immunity in the Civil War Era." (PhD Dissertation, University of Kentucky, 2010). online; Bibliography pp. 515–52. Frankel, Noralee. Freedom's Women: Black Women and Families in Civil War Era Mississippi (1999). Lang, Andrew F. 
In the Wake of War: Military Occupation, Emancipation, and Civil War America (LSU Press, 2017). Levin, Kevin M. Searching for Black Confederates: The Civil War’s Most Persistent Myth (UNC Press, 2019). Litwack, Leon F. Been in the Storm So Long: The Aftermath of Slavery (1979), on freed slaves Reidy, Joseph P. Illusions of Emancipation: The Pursuit of Freedom and Equality in the Twilight of Slavery (UNC Press, 2019). Wiley, Bell Irwin Southern Negroes: 1861–1865 (1938) Soldiers Broomall, James J. Private Confederacies: The Emotional Worlds of Southern Men as Citizens and Soldiers (UNC Press, 2019). Donald, David. "The Confederate as a Fighting Man." Journal of Southern History 25.2 (1959): 178–193. online Faust, Drew Gilpin. "Christian Soldiers: The Meaning of Revivalism in the Confederate Army." Journal of Southern History 53.1 (1987): 63–90 online. McNeill, William J. "A Survey of Confederate Soldier Morale During Sherman's Campaign Through Georgia and the Carolinas." Georgia Historical Quarterly 55.1 (1971): 1–25. Scheiber, Harry N. "The Pay of Confederate Troops and Problems of Demoralization: A Case of Administrative Failure." Civil War History 15.3 (1969): 226–236 online. Sheehan-Dean, Aaron. Why Confederates Fought: Family and Nation in Civil War Virginia (U of North Carolina Press, 2009). Watson, Samuel J. "Religion and combat motivation in the Confederate armies." Journal of Military History 58.1 (1994): 29+. Wiley, Bell Irwin. The life of Johnny Reb; the common soldier of the Confederacy (1971) online Wooster, Ralph A., and Robert Wooster. "'Rarin'for a Fight': Texans in the Confederate Army." Southwestern Historical Quarterly 84.4 (1981): 387–426 online. Intellectual history Bernath, Michael T. Confederate Minds: The Struggle for Intellectual Independence in the Civil War South (University of North Carolina Press; 2010) 412 pages. Examines the efforts of writers, editors, and other "cultural nationalists" to free the South from the dependence on Northern print culture and educational systems. Bonner, Robert E., "Proslavery Extremism Goes to War: The Counterrevolutionary Confederacy and Reactionary Militarism", Modern Intellectual History, 6 (August 2009), 261–85. Downing, David C. A South Divided: Portraits of Dissent in the Confederacy. (2007). Faust, Drew Gilpin. The Creation of Confederate Nationalism: Ideology and Identity in the Civil War South. (1988) Hutchinson, Coleman. Apples and Ashes: Literature, Nationalism, and the Confederate States of America. Athens, Georgia: University of Georgia Press, 2012. Lentz, Perry Carlton Our Missing Epic: A Study in the Novels about the American Civil War, 1970 Rubin, Anne Sarah. A Shattered Nation: The Rise and Fall of the Confederacy, 1861–1868, 2005 A cultural study of Confederates' self images Political history Alexander, Thomas B., and Beringer, Richard E. The Anatomy of the Confederate Congress: A Study of the Influences of Member Characteristics on Legislative Voting Behavior, 1861–1865, (1972) Cooper, William J, Jefferson Davis, American (2000), standard biography Davis, William C. A Government of Our Own: The Making of the Confederacy. New York: The Free Press, a division of Macmillan, Inc., 1994. . Eckenrode, H. J., Jefferson Davis: President of the South, 1923 Levine, Bruce. Confederate Emancipation: Southern Plans to Free and Arm Slaves during the Civil War. (2006) Martis, Kenneth C., "The Historical Atlas of the Congresses of the Confederate States of America 1861–1865" (1994) Neely, Mark E. 
Jr., Confederate Bastille: Jefferson Davis and Civil Liberties (1993) Neely, Mark E. Jr. Southern Rights: Political Prisoners and the Myth of Confederate Constitutionalism. (1999) George C. Rable The Confederate Republic: A Revolution against Politics, 1994 Rembert, W. Patrick Jefferson Davis and His Cabinet (1944). Williams, William M. Justice in Grey: A History of the Judicial System of the Confederate States of America (1941) Yearns, Wilfred Buck The Confederate Congress (1960) Foreign affairs Blumenthal, Henry. "Confederate Diplomacy: Popular Notions and International Realities", Journal of Southern History, Vol. 32, No. 2 (May 1966), pp. 151–171 in JSTOR Cleland, Beau. "The Confederate States of America and the British Empire: Neutral Territory and Civil Wars." Journal of Military and Strategic Studies 16.4 (2016): 171–181. online Daddysman, James W. The Matamoros Trade: Confederate Commerce, Diplomacy, and Intrigue. (1984) online Foreman, Amanda. A World on Fire: Britain's Crucial Role in the American Civil War (2011) especially on Brits inside the Confederacy; Hubbard, Charles M. The Burden of Confederate Diplomacy (1998) Jones, Howard. Blue and Gray Diplomacy: A History of Union and Confederate Foreign Relations (2009) online Jones, Howard. Union in Peril: The Crisis Over British Intervention in the Civil War. Lincoln, NE: University of Nebraska Press, Bison Books, 1997. . Originally published: Chapel Hill: University of North Carolina Press, 1992. Mahin, Dean B. One War at a Time: The International Dimensions of the American Civil War. Washington, DC: Brassey's, 2000. . Originally published: Washington, DC: Brassey's, 1999. Merli, Frank J. The Alabama, British Neutrality, and the American Civil War (2004). 225 pages. Owsley, Frank. King Cotton Diplomacy: Foreign Relations of the Confederate States of America (2nd ed. 1959) online Sainlaude, Steve. France and the American Civil War: A Diplomatic History (2019) excerpt Economic history Black, III, Robert C. The Railroads of the Confederacy. Chapel Hill: University of North Carolina Press, 1952, 1988. . Bonner, Michael Brem. "Expedient Corporatism and Confederate Political Economy", Civil War History, 56 (March 2010), 33–65. Dabney, Virginius Richmond: The Story of a City. Charlottesville: The University of Virginia Press, 1990 Grimsley, Mark The Hard Hand of War: Union Military Policy toward Southern Civilians, 1861–1865, 1995 Hurt, R. Douglas. Agriculture and the Confederacy: Policy, Productivity, and Power in the Civil War South (2015) Massey, Mary Elizabeth Ersatz in the Confederacy: Shortages and Substitutes on the Southern Homefront (1952) Paskoff, Paul F. "Measures of War: A Quantitative Examination of the Civil War's Destructiveness in the Confederacy", Civil War History (2008) 54#1 pp 35–62 in Project MUSE Ramsdell, Charles. Behind the Lines in the Southern Confederacy, 1994. Roark, James L. Masters without Slaves: Southern Planters in the Civil War and Reconstruction, 1977. Thomas, Emory M. The Confederacy as a Revolutionary Experience, 1992 Primary sources Carter, Susan B., ed. The Historical Statistics of the United States: Millennial Edition (5 vols), 2006 Commager, Henry Steele. The Blue and the Gray: The Story of the Civil War As Told by Participants. 2 vols. Indianapolis and New York: The Bobbs-Merrill Company, Inc., 1950. . Many reprints. Davis, Jefferson. The Rise of the Confederate Government. New York: Barnes & Noble, 2010. Original edition: 1881. . Davis, Jefferson. The Fall of the Confederate Government. 
New York: Barnes & Noble, 2010. Original edition: 1881. . Harwell, Richard B., The Confederate Reader (1957) Hettle, Wallace, ed. The Confederate Homefront: A History in Documents (LSU Press, 2017) 214 pages Jones, John B. A Rebel War Clerk's Diary at the Confederate States Capital, edited by Howard Swiggert, [1935] 1993. 2 vols. Richardson, James D., ed. A Compilation of the Messages and Papers of the Confederacy, Including the Diplomatic Correspondence 1861–1865, 2 volumes, 1906. Yearns, W. Buck and Barret, John G., eds. North Carolina Civil War Documentary, 1980. Confederate official government documents major online collection of complete texts in HTML format, from University of North Carolina Journal of the Congress of the Confederate
term. They would serve only in units and under officers of their state. Those under 18 and over 35 could substitute for conscripts; in September, those from 35 to 45 became conscripts. The cry of "rich man's war and a poor man's fight" led Congress to abolish the substitute system altogether in December 1863. All principals benefiting earlier were made eligible for service. By February 1864, the age bracket was extended to 17 to 50, with those under eighteen and over forty-five limited to in-state duty. Confederate conscription was not universal; it was a selective service. The First Conscription Act of April 1862 exempted occupations related to transportation, communication, industry, ministers, teaching and physical fitness. The Second Conscription Act of October 1862 expanded exemptions in industry, agriculture and conscientious objection. Exemption fraud proliferated in medical examinations, army furloughs, churches, schools, apothecaries and newspapers. Rich men's sons were appointed to the socially outcast "overseer" occupation, but the measure was received in the country with "universal odium". The legislative vehicle was the controversial Twenty Negro Law that specifically exempted one white overseer or owner for every plantation with at least 20 slaves. Backpedaling six months later, Congress provided that overseers under 45 could be exempted only if they had held the occupation before the first Conscription Act. The number of officials under state exemptions appointed by state Governor patronage expanded significantly. By law, substitutes could not be subject to conscription, but instead of adding to Confederate manpower, unit officers in the field reported that over-50 and under-17-year-old substitutes accounted for up to 90% of the desertions. The Conscription Act of February 1864 "radically changed the whole system" of selection. It abolished industrial exemptions, placing the authority to grant details in President Davis. As the shame of conscription was considered greater than that of a felony conviction, the system brought in "about as many volunteers as it did conscripts." Many men in otherwise "bombproof" positions were enlisted in one way or another, putting nearly 160,000 additional volunteers and conscripts in uniform. Still there was shirking. To administer the draft, a Bureau of Conscription was set up to use state officers, as state Governors would allow. It had a checkered career of "contention, opposition and futility". Armies appointed alternative military "recruiters" to bring in the out-of-uniform 17–50-year-old conscripts and deserters. Nearly 3,000 officers were tasked with the job. By late 1864, Lee was calling for more troops. "Our ranks are constantly diminishing by battle and disease, and few recruits are received; the consequences are inevitable." By March 1865 conscription was to be administered by generals of the state reserves calling out men over 45 and under 18 years old. All exemptions were abolished. These regiments were assigned to recruit conscripts ages 17–50, recover deserters, and repel enemy cavalry raids. Men who had lost but one arm or a leg were retained for service in the home guards. Ultimately, conscription was a failure, and its main value was in goading men to volunteer. The survival of the Confederacy depended on a strong base of civilians and soldiers devoted to victory. The soldiers performed well, though increasing numbers deserted in the last year of fighting, and the Confederacy never succeeded in replacing casualties as the Union could.
The civilians, although enthusiastic in 1861–62, seem to have lost faith in the future of the Confederacy by 1864, and instead looked to protect their homes and communities. As Rable explains, "This contraction of civic vision was more than a crabbed libertarianism; it represented an increasingly widespread disillusionment with the Confederate experiment." Victories: 1861 The American Civil War broke out in April 1861 with a Confederate victory at the Battle of Fort Sumter in Charleston. In January, President James Buchanan had attempted to resupply the garrison with the steamship Star of the West, but Confederate artillery drove it away. In March, President Lincoln notified South Carolina Governor Pickens that without Confederate resistance to the resupply there would be no military reinforcement without further notice, but Lincoln prepared to force resupply if it were not allowed. Confederate President Davis, in cabinet, decided to seize Fort Sumter before the relief fleet arrived, and on April 12, 1861, General Beauregard forced its surrender. Following Sumter, Lincoln directed states to provide 75,000 troops for three months to recapture the Charleston Harbor forts and all other federal property. This emboldened secessionists in Virginia, Arkansas, Tennessee and North Carolina to secede rather than provide troops to march into neighboring Southern states. In May, Federal troops crossed into Confederate territory along the entire border from the Chesapeake Bay to New Mexico. The first battles were Confederate victories at Big Bethel (Bethel Church, Virginia), at First Bull Run (First Manassas) in Virginia in July, and in August at Wilson's Creek (Oak Hills) in Missouri. At all three, Confederate forces could not follow up their victories due to inadequate supply and a shortage of fresh troops to exploit their successes. Following each battle, Federals maintained a military presence and occupied Washington, DC; Fort Monroe, Virginia; and Springfield, Missouri. Both North and South began training armies for major fighting the next year. Union General George B. McClellan's forces gained possession of much of northwestern Virginia in mid-1861, concentrating on towns and roads; the interior was too large to control and became the center of guerrilla activity. General Robert E. Lee was defeated at Cheat Mountain in September, and no serious Confederate advance in western Virginia occurred until the next year. Meanwhile, the Union Navy seized control of much of the Confederate coastline from Virginia to South Carolina. It took over plantations and the abandoned slaves. Federals there began a war-long policy of burning grain supplies up rivers into the interior wherever they could not occupy. The Union Navy began a blockade of the major southern ports and prepared an invasion of Louisiana to capture New Orleans in early 1862. Incursions: 1862 The victories of 1861 were followed by a series of defeats east and west in early 1862. To restore the Union by military force, the Federal strategy was to (1) secure the Mississippi River, (2) seize or close Confederate ports, and (3) march on Richmond. To secure independence, the Confederate intent was to (1) repel the invader on all fronts, costing him blood and treasure, and (2) carry the war into the North by two offensives in time to affect the mid-term elections. Much of northwestern Virginia was under Federal control.
In February and March, most of Missouri and Kentucky were Union "occupied, consolidated, and used as staging areas for advances further South". Following the repulse of the Confederate counter-attack at the Battle of Shiloh, Tennessee, permanent Federal occupation expanded west, south and east. Confederate forces repositioned south along the Mississippi River to Memphis, Tennessee, where, at the naval Battle of Memphis, its River Defense Fleet was sunk. Confederates withdrew from northern Mississippi and northern Alabama. New Orleans was captured on April 29 by a combined Army-Navy force under U.S. Admiral David Farragut, and the Confederacy lost control of the mouth of the Mississippi River. It had to concede extensive agricultural resources that had supported the Union's sea-supplied logistics base. Although Confederates had suffered major reverses everywhere, as of the end of April the Confederacy still controlled territory holding 72% of its population. Federal forces disrupted Missouri and Arkansas; they had broken through in western Virginia, Kentucky, Tennessee and Louisiana. Along the Confederacy's shores, Union forces had closed ports and made garrisoned lodgments on every coastal Confederate state except Alabama and Texas. Although scholars sometimes assess the Union blockade as ineffectual under international law until the last few months of the war, from the first months it disrupted Confederate privateers, making it "almost impossible to bring their prizes into Confederate ports". British firms developed small fleets of blockade-running companies, such as John Fraser and Company and S. Isaac, Campbell & Company, while the Ordnance Department secured its own blockade runners for dedicated munitions cargoes. During the Civil War, fleets of armored warships were deployed for the first time in sustained blockades at sea. After some success against the Union blockade, in March the ironclad CSS Virginia was forced into port and burned by Confederates at their retreat. Despite several attempts mounted from their port cities, CSA naval forces were unable to break the Union blockade. Attempts were made by Commodore Josiah Tattnall III's ironclads from Savannah in 1862 with the CSS Atlanta. Secretary of the Navy Stephen Mallory placed his hopes in a European-built ironclad fleet, but those hopes were never realized. On the other hand, four new English-built commerce raiders served the Confederacy, and several fast blockade runners were sold in Confederate ports. They were converted into commerce-raiding cruisers, and manned by their British crews. In the east, Union forces could not close on Richmond. General McClellan landed his army on the Lower Peninsula of Virginia. Lee subsequently ended that threat from the east, then Union General John Pope attacked overland from the north only to be repulsed at Second Bull Run (Second Manassas). Lee's strike north was turned back at Antietam, Maryland, then Union Major General Ambrose Burnside's offensive was disastrously ended at Fredericksburg, Virginia, in December. Both armies then turned to winter quarters to recruit and train for the coming spring. In an attempt to seize the initiative, reprovision, protect farms in mid-growing season and influence U.S. Congressional elections, two major Confederate incursions into Union territory had been launched in August and September 1862. Both Braxton Bragg's invasion of Kentucky and Lee's invasion of Maryland were decisively repulsed, leaving the Confederacy in control of territory holding only 63% of its population.
Civil War scholar Allan Nevins argues that 1862 was the strategic high-water mark of the Confederacy. The failures of the two invasions were attributed to the same irrecoverable shortcomings: lack of manpower at the front, lack of supplies including serviceable shoes, and exhaustion after long marches without adequate food. Also in September Confederate General William W. Loring pushed Federal forces from Charleston, Virginia, and the Kanawha Valley in western Virginia, but lacking reinforcements Loring abandoned his position and by November the region was back in Federal control. Anaconda: 1863–64 The failed Middle Tennessee campaign was ended January 2, 1863, at the inconclusive Battle of Stones River (Murfreesboro), both sides losing the largest percentage of casualties suffered during the war. It was followed by another strategic withdrawal by Confederate forces. The Confederacy won a significant victory April 1863, repulsing the Federal advance on Richmond at Chancellorsville, but the Union consolidated positions along the Virginia coast and the Chesapeake Bay. Without an effective answer to Federal gunboats, river transport and supply, the Confederacy lost the Mississippi River following the capture of Vicksburg, Mississippi, and Port Hudson in July, ending Southern access to the trans-Mississippi West. July brought short-lived counters, Morgan's Raid into Ohio and the New York City draft riots. Robert E. Lee's strike into Pennsylvania was repulsed at Gettysburg, Pennsylvania despite Pickett's famous charge and other acts of valor. Southern newspapers assessed the campaign as "The Confederates did not gain a victory, neither did the enemy." September and November left Confederates yielding Chattanooga, Tennessee, the gateway to the lower south. For the remainder of the war fighting was restricted inside the South, resulting in a slow but continuous loss of territory. In early 1864, the Confederacy still controlled 53% of its population, but it withdrew further to reestablish defensive positions. Union offensives continued with Sherman's March to the Sea to take Savannah and Grant's Wilderness Campaign to encircle Richmond and besiege Lee's army at Petersburg. In April 1863, the C.S. Congress authorized a uniformed Volunteer Navy, many of whom were British. The Confederacy had altogether eighteen commerce-destroying cruisers, which seriously disrupted Federal commerce at sea and increased shipping insurance rates 900%. Commodore Tattnall again unsuccessfully attempted to break the Union blockade on the Savannah River in Georgia with an ironclad in 1863. Beginning in April 1864 the ironclad CSS Albemarle engaged Union gunboats for six months on the Roanoke River in North Carolina. The Federals closed Mobile Bay by sea-based amphibious assault in August, ending Gulf coast trade east of the Mississippi River. In December, the Battle of Nashville ended Confederate operations in the western theater. Large numbers of families relocated to safer places, usually remote rural areas, bringing along household slaves if they had any. Mary Massey argues these elite exiles introduced an element of defeatism into the southern outlook. Collapse: 1865 The first three months of 1865 saw the Federal Carolinas Campaign, devastating a wide swath of the remaining Confederate heartland. The "breadbasket of the Confederacy" in the Great Valley of Virginia was occupied by Philip Sheridan. 
Union blockading forces captured Fort Fisher in North Carolina, and Sherman finally took Charleston, South Carolina, by land attack. The Confederacy controlled no ports, harbors or navigable rivers. Railroads were captured or had ceased operating. Its major food-producing regions had been war-ravaged or occupied. Its administration survived in only three pockets of territory holding only one-third of its population. Its armies were defeated or disbanding. At the February 1865 Hampton Roads Conference with Lincoln, senior Confederate officials rejected his invitation to restore the Union with compensation for emancipated slaves. The three pockets of unoccupied Confederacy were southern Virginia – North Carolina, central Alabama – Florida, and Texas, the latter two areas holding out less from any notion of resistance than from the Federal forces' lack of interest in occupying them. The Davis policy was independence or nothing, while Lee's army was wracked by disease and desertion, barely holding the trenches defending Jefferson Davis' capital. The Confederacy's last remaining blockade-running port, Wilmington, North Carolina, was lost. When the Union broke through Lee's lines at Petersburg, Richmond fell immediately. Lee surrendered a remnant of 50,000 from the Army of Northern Virginia at Appomattox Court House, Virginia, on April 9, 1865. "The Surrender" marked the end of the Confederacy. The CSS Stonewall sailed from Europe to break the Union blockade in March; on making Havana, Cuba, it surrendered. Some high officials escaped to Europe, but President Davis was captured May 10; all remaining Confederate land forces surrendered by June 1865. The U.S. Army took control of the Confederate areas without post-surrender insurgency or guerrilla warfare against them, but peace was subsequently marred by a great deal of local violence, feuding and revenge killings. The last Confederate military unit, the commerce raider CSS Shenandoah, surrendered on November 6, 1865, in Liverpool. Historian Gary Gallagher concluded that the Confederacy capitulated in early 1865 because northern armies crushed "organized southern military resistance". The Confederacy's population, soldier and civilian, had suffered material hardship and social disruption. They had expended and extracted a profusion of blood and treasure until collapse; "the end had come". Jefferson Davis' assessment in 1890 concluded, "With the capture of the capital, the dispersion of the civil authorities, the surrender of the armies in the field, and the arrest of the President, the Confederate States of America disappeared ... their history henceforth became a part of the history of the United States." Postwar history Amnesty and treason issue When the war ended, over 14,000 Confederates petitioned President Johnson for a pardon; he was generous in giving them out. He issued a general amnesty to all Confederate participants in the "late Civil War" in 1868. Congress passed additional Amnesty Acts in May 1866 with restrictions on office holding, and the Amnesty Act in May 1872 lifting those restrictions. There was a great deal of discussion in 1865 about holding treason trials, especially against Jefferson Davis. There was no consensus in President Johnson's cabinet, and no one was charged with treason. An acquittal of Davis would have been humiliating for the government. Davis was indicted for treason but never tried; he was released from prison on bail in May 1867.
The amnesty of December 25, 1868, by President Johnson eliminated any possibility of Jefferson Davis (or anyone else associated with the Confederacy) standing trial for treason. Henry Wirz, the commandant of a notorious prisoner-of-war camp near Andersonville, Georgia, was tried and convicted by a military court, and executed on November 10, 1865. The charges against him involved conspiracy and cruelty, not treason. The U.S. government began a decade-long process known as Reconstruction, which attempted to resolve the political and constitutional issues of the Civil War. The priorities were to guarantee that Confederate nationalism and slavery were ended, and to ratify and enforce the Thirteenth Amendment, which outlawed slavery; the Fourteenth, which guaranteed dual U.S. and state citizenship to all native-born residents, regardless of race; and the Fifteenth, which made it illegal to deny the right to vote because of race. The Compromise of 1877 ended Reconstruction in the former Confederate states. Federal troops were withdrawn from the South, where conservative white Democrats had already regained political control of state governments, often through extreme violence and fraud to suppress black voting. The prewar South had many rich areas; the war left the entire region economically devastated by military action, ruined infrastructure, and exhausted resources. Still dependent on an agricultural economy and resisting investment in infrastructure, it remained dominated by the planter elite into the next century. Confederate veterans had been temporarily disenfranchised by Reconstruction policy, and Democrat-dominated legislatures passed new constitutions and amendments that excluded most blacks and many poor whites. This exclusion and a weakened Republican Party remained the norm until the Voting Rights Act of 1965. The Solid South of the early 20th century did not achieve national levels of prosperity until long after World War II. Texas v. White In Texas v. White, the United States Supreme Court ruled – by a 5–3 majority – that Texas had remained a state ever since it first joined the Union, despite claims that it joined the Confederate States of America. In this case, the court held that the Constitution did not permit a state to unilaterally secede from the United States. Further, it held that the ordinances of secession, and all the acts of the legislatures within seceding states intended to give effect to such ordinances, were "absolutely null" under the Constitution. This case settled the law that applied to all questions regarding state legislation during the war. Furthermore, it decided one of the "central constitutional questions" of the Civil War: The Union is perpetual and indestructible, as a matter of constitutional law. In declaring that no state could leave the Union, "except through revolution or through consent of the States", it was "explicitly repudiating the position of the Confederate states that the United States was a voluntary compact between sovereign states". Theories regarding the Confederacy's demise "Died of states' rights" Historian Frank Lawrence Owsley argued that the Confederacy "died of states' rights". The central government was denied requisitioned soldiers and money by governors and state legislatures because they feared that Richmond would encroach on the rights of the states. Georgia's Governor Joseph Brown warned of a secret conspiracy by Jefferson Davis to destroy states' rights and individual liberty.
The first conscription act in North America, authorizing Davis to draft soldiers, was said to be the "essence of military despotism". Vice President Alexander H. Stephens feared losing the very form of republican government. Allowing President Davis to threaten "arbitrary arrests" to draft hundreds of governor-appointed "bomb-proof" bureaucrats conferred "more power than the English Parliament had ever bestowed on the king. History proved the dangers of such unchecked authority." The abolishment of draft exemptions for newspaper editors was interpreted as an attempt by the Confederate government to muzzle presses, such as the Raleigh NC Standard, to control elections and to suppress the peace meetings there. As Rable concludes, "For Stephens, the essence of patriotism, the heart of the Confederate cause, rested on an unyielding commitment to traditional rights" without considerations of military necessity, pragmatism or compromise. In 1863 governor Pendleton Murrah of Texas determined that state troops were required for defense against Plains Indians and Union forces that might attack from Kansas. He refused to send his soldiers to the East. Governor Zebulon Vance of North Carolina showed intense opposition to conscription, limiting recruitment success. Vance's faith in states' rights drove him into repeated, stubborn opposition to the Davis administration. Despite political differences within the Confederacy, no national political parties were formed because they were seen as illegitimate. "Anti-partyism became an article of political faith." Without a system of political parties building alternate sets of national leaders, electoral protests tended to be narrowly state-based, "negative, carping and petty". The 1863 mid-term elections became mere expressions of futile and frustrated dissatisfaction. According to historian David M. Potter, the lack of a functioning two-party system caused "real and direct damage" to the Confederate war effort since it prevented the formulation of any effective alternatives to the conduct of the war by the Davis administration. "Died of Davis" The enemies of President Davis proposed that the Confederacy "died of Davis". He was unfavorably compared to George Washington by critics such as Edward Alfred Pollard, editor of the most influential newspaper in the Confederacy, the Richmond (Virginia) Examiner. E. Merton Coulter summarizes, "The American Revolution had its Washington; the Southern Revolution had its Davis ... one succeeded and the other failed." Beyond the early honeymoon period, Davis was never popular. He unwittingly caused much internal dissension from early on. His ill health and temporary bouts of blindness disabled him for days at a time. Coulter, viewed by today's historians as a Confederate apologist, says Davis was heroic and his will was indomitable. But his "tenacity, determination, and will power" stirred up lasting opposition from enemies that Davis could not shake. He failed to overcome "petty leaders of the states" who made the term "Confederacy" into a label for tyranny and oppression, preventing the "Stars and Bars" from becoming a symbol of larger patriotic service and sacrifice. Instead of campaigning to develop nationalism and gain support for his administration, he rarely courted public opinion, assuming an aloofness, "almost like an Adams". 
Escott argues that Davis was unable to mobilize Confederate nationalism in support of his government effectively, and especially failed to appeal to the small farmers who comprised the bulk of the population. In addition to the problems caused by states rights, Escott also emphasizes that the widespread opposition to any strong central government combined with the vast difference in wealth between the slave-owning class and the small farmers created insolvable dilemmas when the Confederate survival presupposed a strong central government backed by a united populace. The prewar claim that white solidarity was necessary to provide a unified Southern voice in Washington no longer held. Davis failed to build a network of supporters who would speak up when he came under criticism, and he repeatedly alienated governors and other state-based leaders by demanding centralized control of the war effort. According to Coulter, Davis was not an efficient administrator as he attended to too many details, protected his friends after their failures were obvious, and spent too much time on military affairs versus his civic responsibilities. Coulter concludes he was not the ideal leader for the Southern Revolution, but he showed "fewer weaknesses than any other" contemporary character available for the role. Robert E. Lee's assessment of Davis as president was, "I knew of none that could have done as well." Government and politics Political divisions Constitution The Southern leaders met in Montgomery, Alabama, to write their constitution. Much of the Confederate States Constitution replicated the United States Constitution verbatim, but it contained several explicit protections of the institution of slavery including provisions for the recognition and protection of slavery in any territory of the Confederacy. It maintained the ban on international slave-trading, though it made the ban's application explicit to "Negroes of the African race" in contrast to the U.S. Constitution's reference to "such Persons as any of the States now existing shall think proper to admit". It protected the existing internal trade of slaves among slaveholding states. In certain areas, the Confederate Constitution gave greater powers to the states (or curtailed the powers of the central government more) than the U.S. Constitution of the time did, but in other areas, the states lost rights they had under the U.S. Constitution. Although the Confederate Constitution, like the U.S. Constitution, contained a commerce clause, the Confederate version prohibited the central government from using revenues collected in one state for funding internal improvements in another state. The Confederate Constitution's equivalent to the U.S. Constitution's general welfare clause prohibited protective tariffs (but allowed tariffs for providing domestic revenue), and spoke of "carry[ing] on the Government of the Confederate States" rather than providing for the "general welfare". State legislatures had the power to impeach officials of the Confederate government in some cases. On the other hand, the Confederate Constitution contained a Necessary and Proper Clause and a Supremacy Clause that essentially duplicated the respective clauses of the U.S. Constitution. The Confederate Constitution also incorporated each of the 12 amendments to the U.S. Constitution that had been ratified up to that point. 
The Confederate Constitution did not specifically include a provision allowing states to secede; the Preamble spoke of each state "acting in its sovereign and independent character" but also of the formation of a "permanent federal government". During the debates on drafting the Confederate Constitution, one proposal would have allowed states to secede from the Confederacy. The proposal was tabled with only the South Carolina delegates voting in favor of considering the motion. The Confederate Constitution also explicitly denied States the power to bar slaveholders from other parts of the Confederacy from bringing their slaves into any state of the Confederacy or to interfere with the property rights of slave owners traveling between different parts of the Confederacy. In contrast with the secular language of the United States Constitution, the Confederate Constitution overtly asked God's blessing ("... invoking the favor and guidance of Almighty God ..."). Executive The Montgomery Convention to establish the Confederacy and its executive met on February 4, 1861. Each state as a sovereignty had one vote, with the same delegation size as it held in the U.S. Congress, and generally 41 to 50 members attended. Offices were "provisional", limited to a term not to exceed one year. One name was placed in nomination for president, one for vice president. Both were elected unanimously, 6–0. Jefferson Davis was elected provisional president. His U.S. Senate resignation speech greatly impressed with its clear rationale for secession and his pleading for a peaceful departure from the Union to independence. Although he had made it known that he wanted to be commander-in-chief of the Confederate armies, when elected, he assumed the office of Provisional President. Three candidates for provisional Vice President were under consideration the night before the February 9 election. All were from Georgia, and the various delegations meeting in different places determined two would not do, so Alexander H. Stephens was elected unanimously provisional Vice President, though with some privately held reservations. Stephens was inaugurated February 11, Davis February 18. Davis and Stephens were elected president and vice president, unopposed on November 6, 1861. They were inaugurated on February 22, 1862. Historian and Confederate apologist E. M. Coulter stated, "No president of the U.S. ever had a more difficult task." Washington was inaugurated in peacetime. Lincoln inherited an established government of long standing. The creation of the Confederacy was accomplished by men who saw themselves as fundamentally conservative. Although they referred to their "Revolution", it was in their eyes more a counter-revolution against changes away from their understanding of U.S. founding documents. In Davis' inauguration speech, he explained the Confederacy was not a French-like revolution, but a transfer of rule. The Montgomery Convention had assumed all the laws of the United States until superseded by the Confederate Congress. The Permanent Constitution provided for a President of the Confederate States of America, elected to serve a six-year term but without the possibility of re-election. Unlike the United States Constitution, the Confederate Constitution gave the president the ability to subject a bill to a line item veto, a power also held by some state governors. The Confederate Congress could overturn either the general or the line item vetoes with the same two-thirds votes required in the U.S. Congress. 
In addition, appropriations not specifically requested by the executive branch required passage by a two-thirds vote in both houses of Congress. The only person to serve as president was Jefferson Davis, as the Confederacy was defeated before the completion of his term. Administration and cabinet Legislative The only two "formal, national, functioning, civilian administrative bodies" in the Civil War South were the Jefferson Davis administration and the Confederate Congresses. The Confederacy was begun by the Provisional Congress in Convention at Montgomery, Alabama on February 28, 1861. The Provisional Confederate Congress was a unicameral assembly in which each state received one vote. The Permanent Confederate Congress was elected and began its first session on February 18, 1862. The Permanent Congress followed United States forms with a bicameral legislature. The Senate had two members per state, twenty-six Senators in all. The House numbered 106 representatives apportioned by free and slave populations within each state. Two Congresses sat in six sessions until March 18, 1865. The political influences of the civilian vote, the soldier vote and appointed representatives reflected the divisions of political geography of a diverse South. These in turn changed over time relative to Union occupation and disruption, the war's impact on the local economy, and the course of the war. Without political parties, key candidate identification related to adopting secession before or after Lincoln's call for volunteers to retake Federal property. Previous party affiliation played a part in voter selection, predominantly secessionist Democrat or unionist Whig. The absence of political parties made individual roll call voting all the more important, as the Confederate "freedom of roll-call voting [was] unprecedented in American legislative history." Key issues throughout the life of the Confederacy related to (1) suspension of habeas corpus, (2) military concerns such as control of state militia, conscription and exemption, (3) economic and fiscal policy including impressment of slaves, goods and scorched earth, and (4) support of the Jefferson Davis administration in its foreign affairs and negotiating peace. Provisional Congress For the first year, the unicameral Provisional Confederate Congress functioned as the Confederacy's legislative branch.
President of the Provisional Congress: Howell Cobb, Sr. of Georgia, February 4, 1861 – February 17, 1862
Presidents pro tempore of the Provisional Congress:
Robert Woodward Barnwell of South Carolina, February 4, 1861
Thomas Stanhope Bocock of Virginia, December 10–21, 1861 and January 7–8, 1862
Josiah Abigail Patterson Campbell of Mississippi, December 23–24, 1861 and January 6, 1862
Sessions of the Confederate Congress: Provisional Congress; 1st Congress; 2nd Congress
Tribal Representatives to the Confederate Congress:
Elias Cornelius Boudinot, 1862–65, Cherokee
Samuel Benton Callahan, unknown years, Creek, Seminole
Burton Allen Holder, 1864–65, Chickasaw
Robert McDonald Jones, 1863–65, Choctaw
Judicial The Confederate Constitution outlined a judicial branch of the government, but the ongoing war and resistance from states' rights advocates, particularly on the question of whether it would have appellate jurisdiction over the state courts, prevented the creation or seating of the "Supreme Court of the Confederate States"; the state courts generally continued to operate as they had done, simply recognizing the Confederate States as the national government.
Confederate district courts were authorized by Article III, Section 1, of the Confederate Constitution, and President Davis appointed judges within the individual states of the Confederate States of America. In many cases, the same US Federal District Judges were appointed as Confederate States District Judges. Confederate district courts began reopening in early 1861, handling many of the same types of cases as before. Prize cases, in which Union ships were captured by the Confederate Navy or raiders and sold through court proceedings, were heard until the blockade of southern ports made this impossible. After a Sequestration Act was passed by the Confederate Congress, the Confederate district courts heard many cases in which enemy aliens (typically Northern absentee landlords owning property in the South) had their property sequestered (seized) by Confederate Receivers. When the matter came before the Confederate court, the property owner could not appear because he was unable to travel across the front lines between Union and Confederate forces. Thus, the District Attorney won the case by default, the property was typically sold, and the money was used to further the Southern war effort. Eventually, because there was no Confederate Supreme Court, sharp attorneys like South Carolina's Edward McCrady began filing appeals. This prevented their clients' property from being sold until a supreme court could be constituted to hear the appeal, which never occurred. Where Federal troops gained control over parts of the Confederacy and re-established civilian government, US district courts sometimes resumed jurisdiction.
Supreme Court – not established.
District Courts – judges:
Alabama: William G. Jones, 1861–65
Arkansas: Daniel Ringo, 1861–65
Florida: Jesse J. Finley, 1861–62
Georgia: Henry R. Jackson, 1861; Edward J. Harden, 1861–65
Louisiana: Edwin Warren Moise, 1861–65
Mississippi: Alexander Mosby Clayton, 1861–65
North Carolina: Asa Biggs, 1861–65
South Carolina: Andrew G. Magrath, 1861–64; Benjamin F. Perry, 1865
Tennessee: West H. Humphreys, 1861–65
Texas (East): William Pinckney Hill, 1861–65
Texas (West): Thomas J. Devine, 1861–65
Virginia (East): James D. Halyburton, 1861–65
Virginia (West): John W. Brockenbrough, 1861–65
Post Office When the Confederacy was formed and its seceding states broke from the Union, it was at once confronted with the arduous task of providing its citizens with a mail delivery system, and, in the midst of the American Civil War, the newly formed Confederacy created and established the Confederate Post Office. One of the first undertakings in establishing the Post Office was the appointment of John H. Reagan to the position of Postmaster General by Jefferson Davis in 1861, making him the first Postmaster General of the Confederate Post Office as well as a member of Davis' presidential cabinet. Writing in 1906, historian Walter Flavius McCaleb praised Reagan's "energy and intelligence... in a degree scarcely matched by any of his associates." When the war began, the US Post Office still delivered mail from the secessionist states for a brief period of time. Mail that was postmarked after the date of a state's admission into the Confederacy through May 31, 1861, and bearing US postage was still delivered. After this time, private express companies still managed to carry some of the mail across enemy lines. Later, mail that crossed lines had to be sent by 'Flag of Truce' and was allowed to pass at only two specific points. Mail sent from the Confederacy to the U.S.
was received, opened and inspected at Fortress Monroe on the Virginia coast before being passed on into the U.S. mail stream. Mail sent from the North to the South passed at City Point, also in Virginia, where it was likewise inspected before being sent on. With the chaos of the war, a working postal system was more important than ever for the Confederacy. The Civil War had divided family members and friends, and consequently letter writing increased dramatically across the entire divided nation, especially to and from the men who were away serving in an army. Mail delivery was also important for the Confederacy for myriad business and military reasons. Because of the Union blockade, basic supplies were always in demand, and so getting mailed correspondence out of the country to suppliers was imperative to the successful operation of the Confederacy. Volumes of material have been written about the blockade runners who evaded Union ships on blockade patrol, usually at night, and who moved cargo and mail in and out of the Confederate States throughout the course of the war. Of particular interest to students and historians of the American Civil War are prisoner-of-war mail and blockade mail, as these items were often involved with a variety of military and other wartime activities. The postal history of the Confederacy, along with surviving Confederate mail, has helped historians document the various people, places and events that were involved in the American Civil War as it unfolded. Civil liberties The Confederacy actively used the army to arrest people suspected of loyalty to the United States. Historian Mark Neely found 4,108 names of men arrested and estimated a much larger total. The Confederacy arrested pro-Union civilians in the South at about the same rate as the Union arrested pro-Confederate civilians in the North. Neely argues: Economy Slaves Across the South, widespread rumors alarmed the whites by predicting the slaves were planning some sort of insurrection. Patrols were stepped up. The slaves did become increasingly independent, and resistant to punishment, but historians agree there were no insurrections. In the invaded areas, insubordination was more the norm than was loyalty to the old master; Bell Wiley says, "It was not disloyalty, but the lure of freedom." Many slaves became spies for the North, and large numbers ran away to federal lines. Lincoln's Emancipation Proclamation, an executive order of the U.S. government on January 1, 1863, changed the legal status of three million slaves in designated areas of the Confederacy from "slave" to "free". The long-term effect was that the Confederacy could not preserve the institution of slavery, and lost the use of the core element of its plantation labor force. Slaves were legally freed by the Proclamation, and became free by escaping to federal lines, or by advances of federal troops. Over 200,000 freed slaves were hired by the federal army as teamsters, cooks, launderers and laborers, and eventually as soldiers. Plantation owners, realizing that emancipation would destroy their economic system, sometimes moved their slaves as far as possible out of reach of the Union army. By "Juneteenth" (June 19, 1865, in Texas), the Union Army controlled all of the Confederacy and had liberated all its slaves. The former slaves never received compensation and, unlike under British policy, neither did the owners. Political economy Most whites were subsistence farmers who traded their surpluses locally.
The plantations of the South, with white ownership and an enslaved labor force, produced substantial wealth from cash crops. The South supplied two-thirds of the world's cotton, which was in high demand for textiles, along with tobacco, sugar, and naval stores (such as turpentine). These raw materials were exported to factories in Europe and the Northeast. Planters reinvested their profits in more slaves and fresh land, as cotton and tobacco depleted the soil. There was little manufacturing or mining; shipping was controlled by non-southerners. The plantations that enslaved over three million black people were the principal source of wealth. Most were concentrated in "black belt" plantation areas (because few white families in the poor regions owned slaves). For decades, there had been widespread fear of slave revolts. During the war, extra men were assigned to "home guard" patrol duty and governors sought to keep militia units at home for protection. Historian William Barney reports, "no major slave revolts erupted during the Civil War." Nevertheless, slaves took the opportunity to enlarge their sphere of independence, and when Union forces were nearby, many ran off to join them. Slave labor was applied in industry in a limited way in the Upper South and in a few port cities. One reason for the regional lag in industrial development was top-heavy income distribution. Mass production requires mass markets, and slaves living in small cabins, using self-made tools and outfitted each year with one suit of work clothes of inferior fabric, did not generate consumer demand to sustain local manufactures of any description in the same way as did a mechanized family farm of free labor in the North. The Southern economy was "pre-capitalist" in that slaves were put to work in the largest revenue-producing enterprises, not free labor markets. That labor system, as practiced in the American South, encompassed paternalism, whether abusive or indulgent, and that meant labor management considerations apart from productivity. Approximately 85% of both the Northern and Southern white populations lived on family farms; both regions were predominantly agricultural, and mid-century industry in both was mostly domestic. But the Southern economy was pre-capitalist in its overwhelming reliance on the agriculture of cash crops to produce wealth, while the great majority of farmers fed themselves and supplied a small local market. Southern cities and industries grew faster than ever before, but the thrust of the rest of the country's exponential growth was toward urban industrial development along transportation systems of canals and railroads. The South was following the dominant currents of the American economic mainstream, but at a "great distance" as it lagged in the all-weather modes of transportation that brought cheaper, speedier freight shipment and forged new, expanding inter-regional markets. A third element of the pre-capitalist Southern economy relates to the cultural setting. The South and southerners did not adopt a work ethic, nor the habits of thrift that marked the rest of the country. It had access to the tools of capitalism, but it did not adopt its culture. The Southern Cause as a national economy in the Confederacy was grounded in "slavery and race, planters and patricians, plain folk and folk culture, cotton and plantations". National production The Confederacy started its existence as an agrarian economy with exports, to a world market, of cotton, and, to a lesser extent, tobacco and sugarcane.
Local food production included grains, hogs, cattle, and gardens. The cash came from exports but the Southern people spontaneously stopped exports in early 1861 to hasten the impact of "King Cotton", a failed strategy to coerce international support for the Confederacy through its cotton exports. When the blockade was announced, commercial shipping practically ended (the ships could not get insurance), and only a trickle of supplies came via blockade runners. The cutoff of exports was an economic disaster for the South, rendering useless its most valuable properties, its plantations and their enslaved workers. Many planters kept growing cotton, which piled up everywhere, but most turned to food production. All across the region, the lack of repair and maintenance wasted away the physical assets. The eleven states had produced $155 million in manufactured goods in 1860, chiefly from local grist-mills, and lumber, processed tobacco, cotton goods and naval stores such as turpentine. The main industrial areas were border cities such as Baltimore, Wheeling, Louisville and St. Louis, that were never under Confederate control. The government did set up munitions factories in the Deep South. Combined with captured munitions and those coming via blockade runners, the armies were kept minimally supplied with weapons. The soldiers suffered from reduced rations, lack of medicines, and the growing shortages of uniforms, shoes and boots. Shortages were much worse for civilians, and the prices of necessities steadily rose. The Confederacy adopted a tariff or tax on imports of 15%, and imposed it on all imports from other countries, including the United States. The tariff mattered little; the Union blockade minimized commercial traffic through the Confederacy's ports, and very few people paid taxes on goods smuggled from the North. The Confederate government in its entire history collected only $3.5 million in tariff revenue. The lack of adequate financial resources led the Confederacy to finance the war through printing money, which led to high inflation. The Confederacy underwent an economic revolution by centralization and standardization, but it was too little too late as its economy was systematically strangled by blockade and raids. Transportation systems In peacetime, the South's extensive and connected systems of navigable rivers and coastal access allowed for cheap and easy transportation of agricultural products. The railroad system in the South had developed as a supplement to the navigable rivers to enhance the all-weather shipment of cash crops to market. Railroads tied plantation areas to the nearest river or seaport and so made supply more dependable, lowered costs and increased profits. In the event of invasion, the vast geography of the Confederacy made logistics difficult for the Union. Wherever Union armies invaded, they assigned many of their soldiers to garrison captured areas and to protect rail lines. At the onset of the Civil War the South had a rail network disjointed and plagued by changes in track gauge as well as lack of interchange. Locomotives and freight cars had fixed axles and could not use tracks of different gauges (widths). Railroads of different gauges leading to the same city required all freight to be off-loaded onto wagons for transport to the connecting railroad station, where it had to await freight cars and a locomotive before proceeding. Centers requiring off-loading included Vicksburg, New Orleans, Montgomery, Wilmington and Richmond. 
In addition, most rail lines led from coastal or river ports to inland cities, with few lateral railroads. Because of this design limitation, the relatively primitive railroads of the Confederacy were unable to overcome the Union naval blockade of the South's crucial intra-coastal and river routes. The Confederacy had no plan to expand, protect or encourage its railroads. Southerners' refusal to export the cotton crop in 1861 left railroads bereft of their main source of income. Many lines had to lay off employees; many critical skilled technicians and engineers were permanently lost to military service. In the early years of the war the Confederate government had a hands-off approach to the railroads. Only in mid-1863 did the Confederate government initiate a national policy, and it was confined solely to aiding the war effort. Railroads came under the de facto control of the military. In contrast, the U.S. Congress had authorized military administration of Union-controlled railroad and telegraph systems in January 1862, imposed a standard gauge, and built railroads into the South using that gauge. Confederate armies successfully reoccupying territory could not be resupplied directly by rail as they advanced. The C.S. Congress formally authorized military administration of railroads in February 1865. In the last year before the end of the war, the Confederate railroad system stood permanently on the verge of collapse. There was no new equipment and raids on both sides systematically destroyed key bridges, as well as locomotives and freight cars. Spare parts were cannibalized; feeder lines were torn up to get replacement rails for trunk lines, and rolling stock wore out through heavy use. Horses and mules The Confederate army experienced a persistent shortage of horses and mules, and requisitioned them with dubious promissory notes given to local farmers and breeders. Union forces paid in real money and found ready sellers in the South. Both armies needed horses for cavalry and for artillery. Mules pulled the wagons. The supply was undermined by an unprecedented epidemic of glanders, a fatal disease that baffled veterinarians. After 1863 the invading Union forces had a policy of shooting all the local horses and mules that they did not need, in order to keep them out of Confederate hands. The Confederate armies and farmers experienced a growing shortage of horses and mules, which hurt the Southern economy and the war effort. The South lost half of its 2.5 million horses and mules; many farmers ended the war with none left. Army horses were used up by hard work, malnourishment, disease and battle wounds; they had a life expectancy of about seven months. Financial instruments Both the individual Confederate states
a cause and effect relationship had not been established between cranberry consumption and reduced risk of UTIs. One 2017 systematic review showed that consuming cranberry products reduced the incidence of UTIs in women with recurrent infections. Another review of small clinical studies indicated that consuming cranberry products could reduce the risk of UTIs by 26% in otherwise healthy women, although the authors indicated that larger studies were needed to confirm such an effect. When the quality of meta-analyses on the efficacy of consuming cranberry products for preventing or treating UTIs is examined, large variation and uncertainty of effect are seen, resulting from inconsistencies of clinical research design and inadequate numbers of subjects.

Phytochemicals

Raw cranberries, cranberry juice and cranberry extracts are a source of polyphenols, including proanthocyanidins, flavonols and quercetin. These phytochemical compounds are being studied in vivo and in vitro for possible effects on the cardiovascular system, immune system and cancer. However, there is no confirmation from human studies that consuming cranberry polyphenols provides anti-cancer, immune, or cardiovascular benefits. Their potential is limited by poor absorption and rapid excretion. Cranberry juice contains a high molecular weight non-dialyzable material that is under research for its potential to affect formation of plaque by Streptococcus mutans pathogens that cause tooth decay. Cranberry juice components are also being studied for possible effects on kidney stone formation.

Extract quality

Problems may arise from the lack of validated methods for quantifying A-type proanthocyanidins (PAC) extracted from cranberries. For instance, PAC extract quality and content can be assessed using different methods, including the European Pharmacopoeia method, liquid chromatography–mass spectrometry, or a modified 4-dimethylaminocinnamaldehyde colorimetric method. Variations in extract analysis can lead to difficulties in assessing the quality of PAC extracts from different cranberry starting material, such as by regional origin, ripeness at time of harvest and post-harvest processing. Assessments show that quality varies greatly from one commercial PAC extract product to another.

Possible safety concerns

The anticoagulant effects of warfarin may be increased by consuming cranberry juice, resulting in adverse effects such as increased incidence of bleeding and bruising. Other safety concerns from consuming large quantities of cranberry juice or using cranberry supplements include potential for nausea, and increased stomach inflammation, sugar intake or kidney stone formation.

Marketing and economics

United States

Cranberry sales in the United States have traditionally been associated with the holidays of Thanksgiving and Christmas. In contrast to other countries, the U.S. has developed large-scale cranberry cultivation. American cranberry growers have a long history of cooperative marketing. In 1958, Morris April Brothers, who produced Eatmor brand cranberry sauce in Tuckahoe, New Jersey, brought an action against Ocean Spray for violation of the Sherman Antitrust Act and won $200,000 in real damages plus triple damages, just in time for the Great Cranberry Scare: on 9 November 1959, Secretary of the United States Department of Health, Education, and Welfare Arthur S. Flemming announced that some of the 1959 cranberry crop was tainted with traces of the herbicide aminotriazole.
The market for cranberries collapsed and growers lost millions of dollars. However, the scare taught the industry that it could not be completely dependent on the holiday market for its products; growers had to find year-round markets for their fruit. They also had to be exceedingly careful about their use of pesticides. After the aminotriazole scare, Ocean Spray reorganized and spent substantial sums on product development. New products such as cranberry-apple juice blends were introduced, followed by other juice blends. Cranberry handlers (processors) include Ocean Spray, Cliffstar Corporation, Northland Cranberries Inc. (Sun Northland LLC), Clement Pappas & Co., and Decas Cranberry Products, as well as a number of small handlers and processors.

Cranberry Marketing Committee

The Cranberry Marketing Committee is an organization that was established in 1962 as a Federal Marketing Order to ensure a stable, orderly supply of good quality product. The order has been renewed and modified slightly over the years. The market order has been invoked during six crop years: 1962 (12%), 1963 (5%), 1970 (10%), 1971 (12%), 2000 (15%), and 2001 (35%). Even though supply still exceeds demand, there is little will to invoke the Federal Marketing Order out of the realization that any pullback in supply by U.S. growers would easily be filled by Canadian production.

International trade

In recent years, the European Union has been the largest importer of American cranberries, followed individually by Canada, China, Mexico, and South Korea. From 2013 to 2017, U.S. cranberry exports to China grew exponentially, making China the second-largest importing country and reaching $36 million in cranberry products. The China–United States trade war resulted in many Chinese businesses cutting off ties with their U.S. cranberry suppliers.
evergreen dwarf shrubs or trailing vines in the subgenus Oxycoccus of the genus Vaccinium. In Britain, cranberry may refer to the native species Vaccinium oxycoccos, while in North America, cranberry may refer to Vaccinium macrocarpon. Vaccinium oxycoccos is cultivated in central and northern Europe, while Vaccinium macrocarpon is cultivated throughout the northern United States, Canada and Chile. In some methods of classification, Oxycoccus is regarded as a genus in its own right. They can be found in acidic bogs throughout the cooler regions of the Northern Hemisphere. Cranberries are low, creeping shrubs or vines with slender, wiry stems that are not thickly woody and small evergreen leaves. The flowers are dark pink, with very distinct reflexed petals, leaving the style and stamens fully exposed and pointing forward. They are pollinated by bees. The fruit is a berry that is larger than the leaves of the plant; it is initially light green, turning red when ripe. It is edible, but with an acidic taste that usually overwhelms its sweetness. In 2017, the United States, Canada, and Chile accounted for 98% of the world production of cranberries. Most cranberries are processed into products such as juice, sauce, jam, and sweetened dried cranberries, with the remainder sold fresh to consumers. Cranberry sauce is a traditional accompaniment to turkey at Christmas and Thanksgiving dinners in the United States and Canada, and at Christmas dinner in the United Kingdom.

Species and description

Cranberries are related to bilberries, blueberries, and huckleberries, all in Vaccinium subgenus Vaccinium. These differ in having bell-shaped flowers, the petals not being reflexed, and woodier stems, forming taller shrubs. There are 3–4 species of cranberry, classified by subgenus:

Subgenus Oxycoccus

Vaccinium oxycoccos or Oxycoccus palustris (common cranberry, northern cranberry or cranberry) is widespread throughout the cool temperate Northern Hemisphere, including northern Europe, northern Asia, and northern North America. It has small leaves with an inrolled margin. The flowers are dark pink, with a purple central spike, produced on finely hairy stalks. The fruit is a small pale pink to red berry, with a refreshing sharp acidic flavor.

Vaccinium microcarpum or Oxycoccus microcarpus (small cranberry) occurs in northern North America, northern Europe and northern Asia. It is highly similar to V. oxycoccos, differing in its smaller, more triangular leaves and its hairless flower stems; its stems can also be smaller and produce fewer flowers than those of V. oxycoccos. Some botanists include it within V. oxycoccos.

Vaccinium macrocarpon or Oxycoccus macrocarpus (large cranberry, American cranberry, bearberry) is native to northern North America across Canada and the eastern United States, south to North Carolina at high altitudes. It differs from V. oxycoccos in its larger, flat leaves and in the slightly apple-like taste of the berries.

Subgenus Oxycoccus, sect. Oxycoccoides

Vaccinium erythrocarpum or Oxycoccus erythrocarpus (southern mountain cranberry) is native to southeastern North America at high altitudes in the southern Appalachian Mountains, and also to eastern Asia.
Etymology The name cranberry derives from the German kraanbere (English translation, craneberry), first named as cranberry in English by the missionary John Eliot in 1647. Around 1694, German and Dutch colonists in New England used the word, cranberry, to represent the expanding flower, stem, calyx, and petals resembling the neck, head, and bill of a crane. The traditional English name for the plant more common in Europe, Vaccinium oxycoccos, fenberry, originated from plants with small red berries found growing in fen (marsh) lands of England. History In North America, the Narragansett people of the Algonquian nation in the regions of New England appeared to be using cranberries in pemmican for food and for dye. Calling the red berries, sasemineash, the Narragansett people may have introduced cranberries to colonists in Massachusetts. In 1550, James White Norwood made reference to Native Americans using cranberries, and it was the first reference to American cranberries up until this point. In James Rosier's book The Land of Virginia there is an account of Europeans coming ashore and being met with Native Americans bearing bark cups full of cranberries. In Plymouth, Massachusetts, there is a 1633 account of the husband of Mary Ring auctioning her cranberry-dyed petticoat for 16 shillings. In 1643, Roger Williams's book A Key Into the Language of America described cranberries, referring to them as "bearberries" because bears ate them. In 1648, preacher John Elliott was quoted in Thomas Shepard's book Clear Sunshine of the Gospel with an account of the difficulties the Pilgrims were having in using the Indians to harvest cranberries as they preferred to hunt and fish. In 1663, the Pilgrim cookbook appears with a recipe for cranberry sauce. In 1667, New Englanders sent to King Charles ten barrels of cranberries, three barrels of codfish and some Indian corn as a means of appeasement for his anger over their local coining of the pine tree shilling minted by John Hull in the "Hull Mint" with Daniel Quincy. In 1669, Captain Richard Cobb had a banquet in his house (to celebrate both his marriage to Mary Gorham and his election to the Convention of Assistance), serving wild turkey with sauce made from wild cranberries. In the 1672 book New England Rarities Discovered author John Josselyn described cranberries, writing: Sauce for the Pilgrims, cranberry or bearberry, is a small trayling [sic] plant that grows in salt marshes that are overgrown with moss. The berries are of a pale yellow color, afterwards red, as big as a cherry, some perfectly round, others oval, all of them hollow with sower [sic] astringent taste; they are ripe in August and September. They are excellent against the Scurvy. They are also good to allay the fervor of
function:

int foo (int x, int y)
{
    int z = 0;
    if ((x > 0) && (y > 0))
    {
        z = x;
    }
    return z;
}

Assume this function is a part of some bigger program and this program was run with some test suite. Function coverage will be satisfied if, during this execution, the function foo was called at least once. Statement coverage for this function will be satisfied if it was called, for example, as foo(1,1), because in this case every line in the function is executed, including z = x;. Branch coverage will be satisfied by tests calling foo(1,1) and foo(0,1): in the first case both if conditions are met and z = x; is executed, while in the second case the first condition, (x>0), is not satisfied, which prevents the execution of z = x;. Condition coverage will be satisfied with tests that call foo(1,0) and foo(0,1). These are necessary because in the first case (x>0) evaluates to true, while in the second it evaluates to false. At the same time, the first case makes (y>0) false, while the second makes it true.

Condition coverage does not necessarily imply branch coverage. For example, consider the following code fragment:

if a and b then

Condition coverage can be satisfied by two tests:

a=true, b=false
a=false, b=true

However, this set of tests does not satisfy branch coverage, since in neither case is the if condition met, so the true branch is never exercised. Fault injection may be necessary to ensure that all conditions and branches of exception-handling code have adequate coverage during testing.

Modified condition/decision coverage

A combination of function coverage and branch coverage is sometimes also called decision coverage. This criterion requires that every point of entry and exit in the program has been invoked at least once, and that every decision in the program has taken on all possible outcomes at least once. In this context, the decision is a boolean expression comprising conditions and zero or more boolean operators. This definition is not the same as branch coverage; however, the term decision coverage is sometimes used as a synonym for it.

Condition/decision coverage requires that both decision and condition coverage be satisfied. However, for safety-critical applications (such as avionics software) it is often required that modified condition/decision coverage (MC/DC) be satisfied. This criterion extends the condition/decision criteria with the requirement that each condition should affect the decision outcome independently. For example, consider the following code:

if (a or b) and c then

The condition/decision criteria will be satisfied by the following set of tests:

a=true, b=true, c=true
a=false, b=false, c=false

However, the above test set will not satisfy modified condition/decision coverage, since in the first test the value of 'b', and in the second test the value of 'c', would not influence the output. So, the following test set is needed to satisfy MC/DC:

a=false, b=true, c=false
a=false, b=true, c=true
a=false, b=false, c=true
a=true, b=false, c=true

Multiple condition coverage

This criterion requires that all combinations of conditions inside each decision are tested. For example, the code fragment from the previous section will require eight tests:

a=false, b=false, c=false
a=false, b=false, c=true
a=false, b=true, c=false
a=false, b=true, c=true
a=true, b=false, c=false
a=true, b=false, c=true
a=true, b=true, c=false
a=true, b=true, c=true
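To make the MC/DC and multiple condition coverage sets above concrete, the following is a minimal C sketch; the decision function, the assert-based checks, and the chosen values are illustrative assumptions rather than anything from the original text. It encodes the decision (a or b) and c as a function, exercises it with the four MC/DC tests listed above, and then enumerates the eight combinations that multiple condition coverage would require.

#include <assert.h>
#include <stdio.h>

/* The decision under test, (a or b) and c, written as a C function. */
static int decision(int a, int b, int c)
{
    return (a || b) && c;
}

int main(void)
{
    /* MC/DC test set from the text: consecutive tests differ in exactly
       one condition, and each such change flips the outcome. */
    assert(decision(0, 1, 0) == 0);  /* a=false, b=true,  c=false -> false */
    assert(decision(0, 1, 1) == 1);  /* only c changed -> outcome flips    */
    assert(decision(0, 0, 1) == 0);  /* only b changed -> outcome flips    */
    assert(decision(1, 0, 1) == 1);  /* only a changed -> outcome flips    */

    /* Multiple condition coverage would instead enumerate all
       2^3 = 8 combinations of a, b and c. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            for (int c = 0; c <= 1; c++)
                printf("a=%d b=%d c=%d -> %d\n", a, b, c, decision(a, b, c));

    return 0;
}

The fact that each consecutive pair of asserts differs in a single condition and changes the result is precisely the independence requirement that MC/DC adds on top of condition/decision coverage.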
Parameter value coverage

Parameter value coverage (PVC) requires that in a method taking parameters, all the common values for such parameters be considered. The idea is that all common possible values for a parameter are tested. For example, common values for a string are: 1) null, 2) empty, 3) whitespace (space, tabs, newline), 4) valid string, 5) invalid string, 6) single-byte string, 7) double-byte string. It may also be appropriate to use very long strings. Failure to test each of these parameter values may leave a bug undetected.
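As a sketch of how parameter value coverage might be exercised in practice, the table-driven test below walks a string parameter through the seven common values just listed; the function normalize_name and the test table are hypothetical stand-ins, not part of the original text.

#include <stdio.h>

/* Hypothetical function under test: any function taking a string
   parameter would do for this illustration. */
static const char *normalize_name(const char *name)
{
    if (name == NULL || *name == '\0')
        return "(unnamed)";
    return name;
}

int main(void)
{
    /* One entry per common string value from the text. */
    const char *cases[] = {
        NULL,            /* 1) null                         */
        "",              /* 2) empty                        */
        " \t\n",         /* 3) whitespace                   */
        "Ada Lovelace",  /* 4) valid string                 */
        "\x01\x02",      /* 5) invalid (control characters) */
        "a",             /* 6) single-byte string           */
        "\xC3\xA9",      /* 7) double-byte string           */
    };
    size_t n = sizeof(cases) / sizeof(cases[0]);

    for (size_t i = 0; i < n; i++)
        printf("case %zu -> %s\n", i + 1, normalize_name(cases[i]));

    return 0;
}

A suite that stopped after case 4 would still execute every line of normalize_name, which is exactly the gap described next.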
Testing only one of these values could result in 100% code coverage, as each line is covered, but as only one of the seven options is tested, there is only 14.2% PVC.

Other coverage criteria

There are further coverage criteria, which are used less often:

Linear Code Sequence and Jump (LCSAJ) coverage, a.k.a. JJ-Path coverage: has every LCSAJ/JJ-path been executed?
Path coverage: has every possible route through a given part of the code been executed?
Entry/exit coverage: has every possible call and return of the function been executed?
Loop coverage: has every possible loop been executed zero times, once, and more than once?
State coverage: has each state in a finite-state machine been reached and explored?
Data-flow coverage: has each variable definition and its usage been reached and explored?
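As a quick sketch of the loop coverage criterion from the list above (the count_spaces function and its tests are illustrative assumptions), the three calls below drive a single loop through zero, one, and more than one iteration:

#include <assert.h>
#include <string.h>

/* Hypothetical function with one loop: counts spaces in a string. */
static int count_spaces(const char *s)
{
    int count = 0;
    for (size_t i = 0; i < strlen(s); i++)  /* the loop under test */
        if (s[i] == ' ')
            count++;
    return count;
}

int main(void)
{
    assert(count_spaces("") == 0);      /* loop body runs zero times     */
    assert(count_spaces("x") == 0);     /* loop body runs exactly once   */
    assert(count_spaces("a b c") == 2); /* loop body runs more than once */
    return 0;
}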
Safety-critical or dependable applications are often required to demonstrate 100% of some form of test coverage. For example, the ECSS-E-ST-40C standard demands 100% statement and decision coverage for two out of four different criticality levels; for the other levels, target coverage values are subject to negotiation between supplier and customer. However, setting specific target values, and in particular 100%, has been criticized by practitioners for various reasons. Martin Fowler writes: "I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing". Some of the coverage criteria above are connected. For instance, path coverage implies decision, statement and entry/exit coverage. Decision coverage implies statement coverage, because every statement is part of a branch. Full path coverage, of the type described above, is usually impractical or impossible. Any module with a succession of n decisions in it can have up to 2^n paths within it; loop constructs can result in an infinite number of paths. Many paths may also be infeasible, in that there is no input to the program under test that can cause that particular path to be executed. However, a general-purpose algorithm for
film actress best known for her role as Valerian in the 1981 fantasy film Dragonslayer and for her role as Charlotte Cardoza in the 1998–1999 Broadway musical Titanic. Biography Clarke was born Katherine Anne Clarke in Pittsburgh, the oldest of five sisters, the youngest of whom is Victoria Clarke. Her family moved to Sewickley when she was ten. Clarke received her B.A. in theater arts from Mount Holyoke College in 1974 and her M.F.A. from the Yale School of Drama in 1978. During her final year at Yale Clarke performed with the Yale Repertory Theater in such plays as Tales from the Vienna Woods. The first few years of Clarke's professional career were largely theatrical, apart from her role in Dragonslayer. After appearing in three Broadway plays in 1985, Clarke moved to Los Angeles for several years as a film and television actress. She appeared in the 1986 film Crocodile Dundee as Simone, a friendly prostitute. She returned to theater in the early 1990s, and to Broadway as Charlotte Cardoza in Titanic. Clarke was diagnosed with ovarian cancer in 2000. She returned to Pittsburgh to teach theater at the University of Pittsburgh and at the Pittsburgh Musical Theater's Rauh Conservatory as well as to
perform in Pittsburgh theatre until her death on September 9, 2004.

Stage

Broadway
1983 - Teaneck Tanzi: The Venus Flytrap
1985 - The Marriage of Figaro
1985 - Arms and the Man
1985 - Strange Interlude
1998 - Titanic: A New Musical

Off-Broadway
1979 - Othello
1981 - No
1920s had displacements of less than 10,000 tons and a speed of up to 35 knots. They were equipped with 6–12 main guns with a caliber of 127–133 mm (5–5.5 inches), 8–12 secondary guns under 127 mm (5 in) and dozens of small-caliber cannons, as well as torpedoes and mines. Some ships also carried 2–4 seaplanes, mainly for reconnaissance. In 1930 the London Naval Treaty allowed large light cruisers to be built, with the same tonnage as heavy cruisers and armed with up to fifteen guns. The Japanese Mogami class was built to this treaty's limit; the Americans and British also built similar ships. However, in 1939 the Mogamis were refitted as heavy cruisers with ten 8-inch (203 mm) guns.

1939 to Pearl Harbor

In December 1939, three British cruisers engaged the German "pocket battleship" Admiral Graf Spee (which was on a commerce-raiding mission) in the Battle of the River Plate; Admiral Graf Spee then took refuge in neutral Montevideo, Uruguay. By broadcasting messages indicating that capital ships were in the area, the British caused Admiral Graf Spee's captain to think he faced a hopeless situation while low on ammunition, and to order his ship scuttled. On 8 June 1940 the German capital ships Scharnhorst and Gneisenau, classed as battleships but with large cruiser armament, sank the aircraft carrier HMS Glorious with gunfire. From October 1940 through March 1941 the German heavy cruiser Admiral Scheer (also known as a "pocket battleship", see above) conducted a successful commerce-raiding voyage in the Atlantic and Indian Oceans. On 27 May 1941, the British heavy cruiser HMS Dorsetshire attempted to finish off the German battleship Bismarck with torpedoes, probably causing the Germans to scuttle the ship. Bismarck (accompanied by the heavy cruiser Prinz Eugen) had previously sunk the battlecruiser HMS Hood and damaged the battleship HMS Prince of Wales with gunfire in the Battle of the Denmark Strait. On 19 November 1941 the Australian light cruiser HMAS Sydney sank in a mutually fatal engagement with the German raider Kormoran in the Indian Ocean near Western Australia.

Atlantic, Mediterranean, and Indian Ocean operations 1942–1944

Twenty-three British cruisers were lost to enemy action, mostly to air attack and submarines, in operations in the Atlantic, Mediterranean, and Indian Ocean. Sixteen of these losses were in the Mediterranean. The British included cruisers and anti-aircraft cruisers among convoy escorts in the Mediterranean and to northern Russia due to the threat of surface and air attack. Almost all cruisers in World War II were vulnerable to submarine attack due to a lack of anti-submarine sonar and weapons. Also, until 1943–44 the light anti-aircraft armament of most cruisers was weak. In July 1942 an attempt to intercept Convoy PQ 17 with surface ships, including the heavy cruiser Admiral Scheer, failed due to multiple German warships grounding, but air and submarine attacks sank two-thirds of the convoy's ships. In August 1942 Admiral Scheer conducted Operation Wunderland, a solo raid into northern Russia's Kara Sea. She bombarded Dikson Island but otherwise had little success. On 31 December 1942 the Battle of the Barents Sea was fought, a rare action for a Murmansk run because it involved cruisers on both sides. Four British destroyers and five other vessels were escorting Convoy JW 51B from the UK to the Murmansk area. Another British force of two cruisers (Sheffield and Jamaica) and two destroyers was in the area. Two German heavy cruisers (Admiral Hipper and the "pocket battleship" Lützow), accompanied by six destroyers, attempted to intercept the convoy near North Cape after it was spotted by a U-boat.
Although the Germans sank a British destroyer and a minesweeper (also damaging another destroyer), they failed to damage any of the convoy's merchant ships. A German destroyer was lost and a heavy cruiser damaged. Both sides withdrew from the action for fear of the other side's torpedoes. On 26 December 1943 the German capital ship Scharnhorst was sunk while attempting to intercept a convoy in the Battle of the North Cape. The British force that sank her was led by Vice Admiral Bruce Fraser in the battleship HMS Duke of York, accompanied by four cruisers and nine destroyers. One of the cruisers was the preserved HMS Belfast. Scharnhorst's sister Gneisenau, damaged by a mine and a submerged wreck in the Channel Dash of 13 February 1942 and repaired, was further damaged by a British air attack on 27 February 1942. She began a conversion process to mount six 38 cm guns instead of nine 28 cm guns, but in early 1943 Hitler (angered by the recent failure at the Battle of the Barents Sea) ordered her disarmed and her armament used as coast defence weapons. One 28 cm triple turret survives near Trondheim, Norway.

Pearl Harbor through Dutch East Indies campaign

The attack on Pearl Harbor on 7 December 1941 brought the United States into the war, but with eight battleships sunk or damaged by air attack. On 10 December 1941 HMS Prince of Wales and the battlecruiser HMS Repulse were sunk by land-based torpedo bombers northeast of Singapore. It was now clear that surface ships could not operate near enemy aircraft in daylight without air cover; most surface actions of 1942–43 were fought at night as a result. Generally, both sides avoided risking their battleships until the Japanese attack at Leyte Gulf in 1944. Six of the battleships from Pearl Harbor were eventually returned to service, but no US battleships engaged Japanese surface units at sea until the Naval Battle of Guadalcanal in November 1942, and not thereafter until the Battle of Surigao Strait in October 1944. The battleship USS North Carolina was on hand for the initial landings at Guadalcanal on 7 August 1942, and escorted carriers in the Battle of the Eastern Solomons later that month. However, on 15 September she was torpedoed while escorting a carrier group and had to return to the US for repairs. Generally, the Japanese held their capital ships out of all surface actions in the 1941–42 campaigns, or they failed to close with the enemy; the Naval Battle of Guadalcanal in November 1942 was the sole exception. The four Kongō-class ships performed shore bombardment in Malaya, Singapore, and Guadalcanal and escorted the raid on Ceylon and other carrier forces in 1941–42. Japanese capital ships also participated ineffectively (due to not being engaged) in the Battle of Midway and the simultaneous Aleutian diversion; in both cases they were in battleship groups well to the rear of the carrier groups. Sources state that Yamato sat out the entire Guadalcanal Campaign due to lack of high-explosive bombardment shells, poor nautical charts of the area, and high fuel consumption. It is likely that the poor charts affected other battleships as well. Except for the Kongō class, most Japanese battleships spent the critical year of 1942, in which most of the war's surface actions occurred, in home waters or at the fortified base of Truk, far from any risk of attacking or being attacked.
From 1942 through mid-1943, US and other Allied cruisers were the heavy units on their side of the numerous surface engagements of the Dutch East Indies campaign, the Guadalcanal Campaign, and subsequent Solomon Islands fighting; they were usually opposed by strong Japanese cruiser-led forces equipped with Long Lance torpedoes. Destroyers also participated heavily on both sides of these battles and provided essentially all the torpedoes on the Allied side, with some battles in these campaigns fought entirely between destroyers. Along with lack of knowledge of the capabilities of the Long Lance torpedo, the US Navy was hampered by a deficiency it was initially unaware of—the unreliability of the Mark 15 torpedo used by destroyers. This weapon shared the Mark 6 exploder and other problems with the more famously unreliable Mark 14 torpedo; the most common results of firing either of these torpedoes were a dud or a miss. The problems with these weapons were not solved until mid-1943, after almost all of the surface actions in the Solomon Islands had taken place. Another factor that shaped the early surface actions was the pre-war training of both sides. The US Navy concentrated on long-range 8-inch gunfire as their primary offensive weapon, leading to rigid battle line tactics, while the Japanese trained extensively for nighttime torpedo attacks. Since all post-1930 Japanese cruisers had 8-inch guns by 1941, almost all of the US Navy's cruisers in the South Pacific in 1942 were the 8-inch-gunned (203 mm) "treaty cruisers"; most of the 6-inch-gunned (152 mm) cruisers were deployed in the Atlantic. Dutch East Indies campaign Although their battleships were held out of surface action, Japanese cruiser-destroyer forces rapidly isolated and mopped up the Allied naval forces in the Dutch East Indies campaign of February–March 1942. In three separate actions, they sank five Allied cruisers (two Dutch and one each British, Australian, and American) with torpedoes and gunfire, against one Japanese cruiser damaged. With one other Allied cruiser withdrawn for repairs, the only remaining Allied cruiser in the area was the damaged . Despite their rapid success, the Japanese proceeded methodically, never leaving their air cover and rapidly establishing new air bases as they advanced. Guadalcanal campaign After the key carrier battles of the Coral Sea and Midway in mid-1942, Japan had lost four of the six fleet carriers that launched the Pearl Harbor raid and was on the strategic defensive. On 7 August 1942 US Marines were landed on Guadalcanal and other nearby islands, beginning the Guadalcanal Campaign. This campaign proved to be a severe test for the Navy as well as the Marines. Along with two carrier battles, several major surface actions occurred, almost all at night between cruiser-destroyer forces. Battle of Savo Island On the night of 8–9 August 1942 the Japanese counterattacked near Guadalcanal in the Battle of Savo Island with a cruiser-destroyer force. In a controversial move, the US carrier task forces were withdrawn from the area on the 8th due to heavy fighter losses and low fuel. The Allied force included six heavy cruisers (two Australian), two light cruisers (one Australian), and eight US destroyers. Of the cruisers, only the Australian ships had torpedoes. The Japanese force included five heavy cruisers, two light cruisers, and one destroyer. Numerous circumstances combined to reduce Allied readiness for the battle. 
The results of the battle were three American heavy cruisers sunk by torpedoes and gunfire, one Australian heavy cruiser disabled by gunfire and scuttled, one heavy cruiser damaged, and two US destroyers damaged. The Japanese had three cruisers lightly damaged. This was the most lopsided outcome of the surface actions in the Solomon Islands. Along with their superior torpedoes, the opening Japanese gunfire was accurate and very damaging. Subsequent analysis showed that some of the damage was due to poor housekeeping practices by US forces. Stowage of boats and aircraft in midships hangars with full gas tanks contributed to fires, along with full and unprotected ready-service ammunition lockers for the open-mount secondary armament. These practices were soon corrected, and US cruisers with similar damage sank less often thereafter. Savo was the first surface action of the war for almost all the US ships and personnel; few US cruisers and destroyers were targeted or hit at Coral Sea or Midway. Battle of the Eastern Solomons On 24–25 August 1942 the Battle of the Eastern Solomons, a major carrier action, was fought. Part of the action was a Japanese attempt to reinforce Guadalcanal with men and equipment on troop transports. The Japanese troop convoy was attacked by Allied aircraft, resulting in the Japanese subsequently reinforcing Guadalcanal with troops on fast warships at night. These convoys were called the "Tokyo Express" by the Allies. Although the Tokyo Express often ran unopposed, most surface actions in the Solomons revolved around Tokyo Express missions. Also, US air operations had commenced from Henderson Field, the airfield on Guadalcanal. Fear of air power on both sides resulted in all surface actions in the Solomons being fought at night. Battle of Cape Esperance The Battle of Cape Esperance occurred on the night of 11–12 October 1942. A Tokyo Express mission was underway for Guadalcanal at the same time as a separate cruiser-destroyer bombardment group loaded with high explosive shells for bombarding Henderson Field. A US cruiser-destroyer force was deployed in advance of a convoy of US Army troops for Guadalcanal that was due on 13 October. The Tokyo Express convoy was two seaplane tenders and six destroyers; the bombardment group was three heavy cruisers and two destroyers, and the US force was two heavy cruisers, two light cruisers, and five destroyers. The US force engaged the Japanese bombardment force; the Tokyo Express convoy was able to unload on Guadalcanal and evade action. The bombardment force was sighted at close range () and the US force opened fire. The Japanese were surprised because their admiral was anticipating sighting the Tokyo Express force, and withheld fire while attempting to confirm the US ships' identity. One Japanese cruiser and one destroyer were sunk and one cruiser damaged, against one US destroyer sunk with one light cruiser and one destroyer damaged. The bombardment force failed to bring its torpedoes into action, and turned back. The next day US aircraft from Henderson Field attacked several of the Japanese ships, sinking two destroyers and damaging a third. The US victory resulted in overconfidence in some later battles, reflected in the initial after-action report claiming two Japanese heavy cruisers, one light cruiser, and three destroyers sunk by the gunfire of alone. 
The battle had little effect on the overall situation, as the next night two Kongō-class battleships bombarded and severely damaged Henderson Field unopposed, and the following night another Tokyo Express convoy delivered 4,500 troops to Guadalcanal. The US convoy delivered the Army troops as scheduled on the 13th.

Battle of the Santa Cruz Islands

The Battle of the Santa Cruz Islands took place 25–27 October 1942. It was a pivotal battle, as it left the US and Japanese with only two large carriers each in the South Pacific (another large Japanese carrier was damaged and under repair until May 1943). Due to the high carrier attrition rate with no replacements for months, for the most part both sides stopped risking their remaining carriers until late 1943, and each side sent in a pair of battleships instead. The next major carrier operations for the US were the carrier raid on Rabaul and support for the invasion of Tarawa, both in November 1943.

Naval Battle of Guadalcanal

The Naval Battle of Guadalcanal occurred 12–15 November 1942 in two phases. A night surface action on 12–13 November was the first phase. The Japanese force consisted of two Kongō-class battleships with high explosive shells for bombarding Henderson Field, one small light cruiser, and 11 destroyers. Their plan was that the bombardment would neutralize Allied airpower and allow a force of 11 transport ships and 12 destroyers to reinforce Guadalcanal with a Japanese division the next day. However, US reconnaissance aircraft spotted the approaching Japanese on the 12th and the Americans made what preparations they could. The American force consisted of two heavy cruisers, one light cruiser, two anti-aircraft cruisers, and eight destroyers. The Americans were outgunned by the Japanese that night, and a lack of pre-battle orders by the US commander led to confusion. The destroyer Laffey closed with the battleship Hiei, firing all torpedoes (though apparently none hit or detonated) and raking the battleship's bridge with gunfire, wounding the Japanese admiral and killing his chief of staff. The Americans initially lost four destroyers including Laffey, with both heavy cruisers, most of the remaining destroyers, and both anti-aircraft cruisers damaged. The Japanese initially had one battleship and four destroyers damaged, but at this point they withdrew, possibly unaware that the US force was unable to further oppose them. At dawn US aircraft from Henderson Field, Enterprise, and Espiritu Santo found the damaged battleship and two destroyers in the area. Hiei was sunk by aircraft (or possibly scuttled), one destroyer was sunk by the damaged cruiser Portland, and the other destroyer was attacked by aircraft but was able to withdraw. Both of the damaged US anti-aircraft cruisers were lost on 13 November: one (Juneau) was torpedoed by a Japanese submarine, and the other sank on the way to repairs. Juneau's loss was especially tragic; the submarine's presence prevented immediate rescue, over 100 survivors of a crew of nearly 700 were adrift for eight days, and all but ten died. Among the dead were the five Sullivan brothers. The Japanese transport force was rescheduled for the 14th and a new cruiser-destroyer force (belatedly joined by the surviving battleship Kirishima) was sent to bombard Henderson Field the night of 13 November. Only two cruisers actually bombarded the airfield, as Kirishima had not arrived yet and the remainder of the force was on guard for US warships. The bombardment caused little damage.
The cruiser-destroyer force then withdrew, while the transport force continued towards Guadalcanal. Both forces were attacked by US aircraft on the 14th. The cruiser force lost one heavy cruiser sunk and one damaged. Although the transport force had carrier fighter cover, six transports were sunk and one heavily damaged. All but four of the destroyers accompanying the transport force picked up survivors and withdrew. The remaining four transports and four destroyers approached Guadalcanal at night, but stopped to await the results of the night's action. On the night of 14–15 November a Japanese force of Kirishima, two heavy and two light cruisers, and nine destroyers approached Guadalcanal. Two US battleships (Washington and South Dakota) were there to meet them, along with four destroyers. This was one of only two battleship-on-battleship encounters during the Pacific War; the other was the lopsided Battle of Surigao Strait in October 1944, part of the Battle of Leyte Gulf. The battleships had been escorting Enterprise, but were detached due to the urgency of the situation. With nine 16-inch (406 mm) guns apiece against eight 14-inch (356 mm) guns on Kirishima, the Americans had major gun and armor advantages. All four destroyers were sunk or severely damaged and withdrawn shortly after the Japanese attacked them with gunfire and torpedoes. Although her main battery remained in action for most of the battle, South Dakota spent much of the action dealing with major electrical failures that affected her radar, fire control, and radio systems. Although her armor was not penetrated, she was hit by 26 shells of various calibers and temporarily rendered, in a US admiral's words, "deaf, dumb, blind, and impotent". Washington went undetected by the Japanese for most of the battle, but withheld fire to avoid "friendly fire" until South Dakota was illuminated by Japanese fire; she then opened up and rapidly set Kirishima ablaze, leaving her with a jammed rudder and other damage. Washington, finally spotted by the Japanese, then headed for the Russell Islands in the hope of drawing the Japanese away from Guadalcanal and South Dakota, and successfully evaded several torpedo attacks. Unusually, only a few Japanese torpedoes scored hits in this engagement. Kirishima sank or was scuttled before the night was out, along with two Japanese destroyers. The remaining Japanese ships withdrew, except for the four transports, which beached themselves in the night and started unloading. However, dawn (and US aircraft, US artillery, and a US destroyer) found them still beached, and they were destroyed.

Battle of Tassafaronga

The Battle of Tassafaronga took place on the night of 30 November – 1 December 1942. The US had four heavy cruisers, one light cruiser, and four destroyers. The Japanese had eight destroyers on a Tokyo Express run to deliver food and supplies in drums to Guadalcanal. The Americans achieved initial surprise, damaging one destroyer with gunfire which later sank, but the Japanese torpedo counterattack was devastating. One American heavy cruiser was sunk and three others heavily damaged, with the bows blown off two of them. It was significant that these two were not lost to Long Lance hits as had happened in previous battles; American battle readiness and damage control had improved. Despite defeating the Americans, the Japanese withdrew without delivering the crucial supplies to Guadalcanal.
Another attempt on 3 December dropped 1,500 drums of supplies near Guadalcanal, but Allied strafing aircraft sank all but 300 before the Japanese Army could recover them. On 7 December PT boats interrupted a Tokyo Express run, and the following night sank a Japanese supply submarine. The next day the Japanese Navy proposed stopping all destroyer runs to Guadalcanal, but agreed to do just one more. This was on 11 December and was also intercepted by PT boats, which sank a destroyer; only 200 of 1,200 drums dropped off the island were recovered. The next day the Japanese Navy proposed abandoning Guadalcanal; this was approved by the Imperial General Headquarters on 31 December and the Japanese left the island in early February 1943. Post-Guadalcanal After the Japanese abandoned Guadalcanal in February 1943, Allied operations in the Pacific shifted to the New Guinea campaign and isolating Rabaul. The Battle of Kula Gulf was fought on the night of 5–6 July. The US had three light cruisers and four destroyers; the Japanese had ten destroyers loaded with 2,600 troops destined for Vila to oppose a recent US landing on Rendova. Although the Japanese sank a cruiser, they lost two destroyers and were able to deliver only 850 troops. On the night of 12–13 July, the Battle of Kolombangara occurred. The Allies had three light cruisers (one New Zealand) and ten destroyers; the Japanese had one small light cruiser and five destroyers, a Tokyo Express run for Vila. All three Allied cruisers were heavily damaged, with the New Zealand cruiser put out of action for 25 months by a Long Lance hit. The Allies sank only the Japanese light cruiser, and the Japanese landed 1,200 troops at Vila. Despite their tactical victory, this battle caused the Japanese to use a different route in the future, where they were more vulnerable to destroyer and PT boat attacks. The Battle of Empress Augusta Bay was fought on the night of 1–2 November 1943, immediately after US Marines invaded Bougainville in the Solomon Islands. A Japanese heavy cruiser was damaged by a nighttime air attack shortly before the battle; it is likely that Allied airborne radar had progressed far enough to allow night operations. The Americans had four of the new cruisers and eight destroyers. The Japanese had two heavy cruisers, two small light cruisers, and six destroyers. Both sides were plagued by collisions, shells that failed to explode, and mutual skill in dodging torpedoes. The Americans suffered significant damage to three destroyers and light damage to a cruiser, but no losses. The Japanese lost one light cruiser and a destroyer, with four other ships damaged. The Japanese withdrew; the Americans pursued them until dawn, then returned to the landing area to provide anti-aircraft cover. After the Battle of the Santa Cruz Islands in October 1942, both sides were short of large aircraft carriers. The US suspended major carrier operations until sufficient carriers could be completed to destroy the entire Japanese fleet at once should it appear. The Central Pacific carrier raids and amphibious operations commenced in November 1943 with a carrier raid on Rabaul (preceded and followed by Fifth Air Force attacks) and the bloody but successful invasion of Tarawa. The air attacks on Rabaul crippled the Japanese cruiser force, with four heavy and two light cruisers damaged; they were withdrawn to Truk. The US had built up a force in the Central Pacific of six large, five light, and six escort carriers prior to commencing these operations. 
From this point on, US cruisers primarily served as anti-aircraft escorts for carriers and in shore bombardment. The only major Japanese carrier operation after Guadalcanal was the disastrous (for Japan) Battle of the Philippine Sea in June 1944, nicknamed the "Marianas Turkey Shoot" by the US Navy.

Leyte Gulf

The Imperial Japanese Navy's last major operation was the Battle of Leyte Gulf, an attempt to dislodge the American invasion of the Philippines in October 1944. The two actions at this battle in which cruisers played a significant role were the Battle off Samar and the Battle of Surigao Strait.

Battle of Surigao Strait

The Battle of Surigao Strait was fought on the night of 24–25 October, a few hours before the Battle off Samar. The Japanese had a small battleship group composed of Yamashiro and Fusō, one heavy cruiser, and four destroyers. They were followed at a considerable distance by another small force of two heavy cruisers, a small light cruiser, and four destroyers. Their goal was to head north through Surigao Strait and attack the invasion fleet off Leyte. The Allied force guarding the strait, known as the 7th Fleet Support Force, was overwhelming. It included six battleships (all but one previously damaged in 1941 at Pearl Harbor), four heavy cruisers (one Australian), four light cruisers, and 28 destroyers, plus a force of 39 PT boats. The only advantage to the Japanese was that most of the Allied battleships and cruisers were loaded mainly with high explosive shells, although a significant number of armor-piercing shells were also loaded. The lead Japanese force evaded the PT boats' torpedoes but was hit hard by the destroyers' torpedoes, losing a battleship. Then it encountered the battleship and cruiser guns. Only one destroyer survived. The engagement is notable for being one of only two occasions in which battleships fired on battleships in the Pacific Theater, the other being the Naval Battle of Guadalcanal. Due to the starting arrangement of the opposing forces, the Allied force was in a "crossing the T" position; this was the last battle in which that occurred, though it was not a planned maneuver. The following Japanese cruiser force had several problems, including a light cruiser damaged by a PT boat and two heavy cruisers colliding, one of which fell behind and was sunk by air attack the next day. An American veteran of Surigao Strait, the light cruiser USS Phoenix, was transferred to Argentina in 1951 as ARA General Belgrano, becoming most famous for being sunk by the submarine HMS Conqueror in the Falklands War on 2 May 1982. She was the first ship sunk by a nuclear submarine outside of accidents, and only the second ship sunk by a submarine since World War II.

Battle off Samar

At the Battle off Samar, a Japanese battleship group moving towards the invasion fleet off Leyte engaged a minuscule American force known as "Taffy 3" (formally Task Unit 77.4.3), composed of six escort carriers with about 28 aircraft each, three destroyers, and four destroyer escorts. The biggest guns in the American force were 5-inch (127 mm)/38 caliber guns, while the Japanese had guns ranging from 8-inch (203 mm) up to the 18.1-inch (460 mm) guns of Yamato. Aircraft from six additional escort carriers also participated, for a total of around 330 US aircraft, a mix of F6F Hellcat fighters and TBF Avenger torpedo bombers. The Japanese had four battleships including Yamato, six heavy cruisers, two small light cruisers, and 11 destroyers. The Japanese force had earlier been driven off by air attack, losing Yamato's sister Musashi.
Admiral Halsey then decided to use his Third Fleet carrier force to attack the Japanese carrier group, located well to the north of Samar, which was actually a decoy group with few aircraft. The Japanese were desperately short of aircraft and pilots at this point in the war, and Leyte Gulf was the first battle in which kamikaze attacks were used. Due to a tragedy of errors, Halsey took the American battleship force with him, leaving San Bernardino Strait guarded only by the small Seventh Fleet escort carrier force. The battle commenced at dawn on 25 October 1944, shortly after the Battle of Surigao Strait. In the engagement that followed, the Americans exhibited uncanny torpedo accuracy, blowing the bows off several Japanese heavy cruisers. The escort carriers' aircraft also performed very well, attacking with machine guns after their carriers ran out of bombs and torpedoes. The unexpected level of damage, and the maneuvering needed to avoid the torpedoes and air attacks, disorganized the Japanese and caused them to think they faced at least part of the Third Fleet's main force. They had also learned of the defeat a few hours before at Surigao Strait, and did not hear that Halsey's force was busy destroying the decoy fleet. Convinced that the rest of the Third Fleet would arrive soon if it had not already, the Japanese withdrew, eventually losing three heavy cruisers sunk and three damaged to air and torpedo attacks. The Americans lost two escort carriers, two destroyers, and one destroyer escort sunk, with three escort carriers, one destroyer, and two destroyer escorts damaged; over one-third of their engaged force was thus sunk and nearly all the remainder damaged.

Wartime cruiser production

The US built cruisers in quantity through the end of the war, notably 14 Baltimore-class heavy cruisers and 27 Cleveland-class light cruisers, along with eight Atlanta-class anti-aircraft cruisers. The Cleveland class was the largest cruiser class ever built in number of ships completed, with nine additional Clevelands completed as light aircraft carriers. The large number of cruisers built was probably due to the significant cruiser losses of 1942 in the Pacific theater (seven American and five other Allied) and the perceived need for several cruisers to escort each of the numerous Essex-class aircraft carriers being built. Losing four heavy and two small light cruisers in 1942, the Japanese built only five light cruisers during the war; these were small ships with six guns each. Losing 20 cruisers in 1940–42, the British completed no heavy cruisers, thirteen light cruisers, and sixteen anti-aircraft cruisers (Dido class) during the war.

Late 20th century

The rise of air power during World War II dramatically changed the nature of naval combat. Even the fastest cruisers could not maneuver quickly enough to evade aerial attack, and aircraft now had torpedoes, allowing moderate-range standoff capabilities. This change led to the end of independent operations by single ships or very small task groups, and for the second half of the 20th century naval operations were based on very large fleets believed able to fend off all but the largest air attacks, though this was not tested by any war in that period. The US Navy became centered around carrier groups, with cruisers and battleships primarily providing anti-aircraft defense and shore bombardment. Until the Harpoon missile entered service in the late 1970s, the US Navy was almost entirely dependent on carrier-based aircraft and submarines for conventionally attacking enemy warships.
Lacking aircraft carriers, the Soviet Navy depended on anti-ship cruise missiles; in the 1950s these were primarily delivered from heavy land-based bombers. Soviet submarine-launched cruise missiles at the time were primarily for land attack; but by 1964 anti-ship missiles were deployed in quantity on cruisers, destroyers, and submarines. US cruiser development The US Navy was aware of the potential missile threat as
meant that new designs of battleship, later known as pre-dreadnought battleships, would be able to combine firepower and armor with better endurance and speed than ever before. The armored cruisers of the 1890s greatly resembled the battleships of the day; they tended to carry a slightly smaller main armament (9.2-inch rather than 12-inch) and somewhat thinner armor in exchange for a faster speed (perhaps 21 to 23 knots rather than 18). Because of their similarity, the lines between battleships and armored cruisers became blurred.

Early 20th century

Shortly after the turn of the 20th century there were difficult questions about the design of future cruisers. Modern armored cruisers, almost as powerful as battleships, were also fast enough to outrun older protected and unarmored cruisers. In the Royal Navy, Jackie Fisher cut back hugely on older vessels, including many cruisers of different sorts, calling them "a miser's hoard of useless junk" that any modern cruiser would sweep from the seas. The scout cruiser also appeared in this era; this was a small, fast, lightly armed and armored type designed primarily for reconnaissance. The Royal Navy and the Italian Navy were the primary developers of this type.

Battle cruisers

The growing size and power of the armored cruiser resulted in the battlecruiser, with an armament and size similar to the revolutionary new dreadnought battleship; it was the brainchild of British admiral Jackie Fisher. He believed that, to ensure British naval dominance in its overseas colonial possessions, a fleet of large, fast, powerfully armed vessels was needed, able to hunt down and mop up enemy cruisers and armored cruisers with overwhelming fire superiority. They were equipped with the same gun types as battleships, though usually with fewer guns, and were intended to engage enemy capital ships as well. This type of vessel came to be known as the battlecruiser, and the first were commissioned into the Royal Navy in 1907. The British battlecruisers sacrificed protection for speed, as they were intended to "choose their range" (to the enemy) with superior speed and only engage the enemy at long range. When engaged at moderate ranges, the lack of protection, combined with unsafe ammunition handling practices, proved disastrous: three of them were lost at the Battle of Jutland. Germany and eventually Japan followed suit in building these vessels, replacing armored cruisers in most frontline roles. German battlecruisers were generally better protected but slower than British battlecruisers. Battlecruisers were in many cases larger and more expensive than contemporary battleships, due to their much larger propulsion plants.

Light cruisers

At around the same time as the battlecruiser was developed, the distinction between the armored and the unarmored cruiser finally disappeared. By 1909, when the first of a new type of British light cruiser was launched, it was possible for a small, fast cruiser to carry both belt and deck armor, particularly when turbine engines were adopted. These light armored cruisers began to occupy the traditional cruiser role once it became clear that the battlecruiser squadrons were required to operate with the battle fleet.

Flotilla leaders

Some light cruisers were built specifically to act as the leaders of flotillas of destroyers.

Coastguard cruisers

These vessels were essentially large coastal patrol boats armed with multiple light guns. One such warship was Grivița of the Romanian Navy. She displaced 110 tons, measured 60 meters in length and was armed with four light guns.
Auxiliary cruisers The auxiliary cruiser was a merchant ship hastily armed with small guns on the outbreak of war. Auxiliary cruisers were used to fill gaps in their long-range lines or provide escort for other cargo ships, although they generally proved to be useless in this role because of their low speed, feeble firepower and lack of armor. In both world wars the Germans also used small merchant ships armed with cruiser guns to surprise Allied merchant ships. Some large liners were armed in the same way. In British service these were known as Armed Merchant Cruisers (AMC). The Germans and French used them in World War I as raiders because of their high speed (around 30 knots (56 km/h)), and they were used again as raiders early in World War II by the Germans and Japanese. In both the First World War and in the early part of the Second, they were used as convoy escorts by the British. World War I Cruisers were one of the workhorse types of warship during World War I. By the time of World War I, cruiser development had accelerated and quality had improved markedly, with displacements reaching 3,000–4,000 tons, speeds of 25–30 knots, and main gun calibres of 127–152 mm. Mid-20th century Naval construction in the 1920s and 1930s was limited by international treaties designed to prevent the repetition of the Dreadnought arms race of the early 20th century. The Washington Naval Treaty of 1922 placed limits on the construction of ships with a standard displacement of more than 10,000 tons and an armament of guns larger than 8-inch (203 mm). A number of navies commissioned classes of cruisers at the top end of this limit, known as "treaty cruisers". The London Naval Treaty in 1930 then formalised the distinction between these "heavy" cruisers and light cruisers: a "heavy" cruiser was one with guns of more than 6.1-inch (155 mm) calibre. The Second London Naval Treaty attempted to reduce the tonnage of new cruisers to 8,000 tons or less, but this had little effect; Japan and Germany were not signatories, and some navies had already begun to evade treaty limitations on warships. The first London treaty did touch off a period of the major powers building 6-inch or 6.1-inch gunned cruisers, nominally of 10,000 tons and with up to fifteen guns, the treaty limit. Thus, most light cruisers ordered after 1930 were the size of heavy cruisers but with more and smaller guns. The Imperial Japanese Navy began this new race with the , launched in 1934. After building smaller light cruisers with six or eight 6-inch guns launched 1931–35, the British Royal Navy followed with the 12-gun in 1936. To match foreign developments and potential treaty violations, in the 1930s the US developed a series of new guns firing "super-heavy" armor-piercing ammunition; these included the 6-inch (152 mm)/47 caliber gun Mark 16 introduced with the 15-gun s in 1936, and the 8-inch (203 mm)/55 caliber gun Mark 12 introduced with in 1937. Heavy cruisers The heavy cruiser was a type of cruiser designed for long range, high speed and an armament of naval guns around 203 mm (8 in) in calibre. The first heavy cruisers were built in 1915, although the type only became a widespread classification following the London Naval Treaty in 1930. The heavy cruiser's immediate precursors were the light cruiser designs of the 1910s and 1920s; the US lightly armored 8-inch "treaty cruisers" of the 1920s (built under the Washington Naval Treaty) were originally classed as light cruisers until the London Treaty forced their redesignation.
Initially, all cruisers built under the Washington treaty had torpedo tubes, regardless of nationality. However, in 1930, results of war games caused the US Naval War College to conclude that only perhaps half of cruisers would use their torpedoes in action. In a surface engagement, long-range gunfire and destroyer torpedoes would decide the issue, and under air attack numerous cruisers would be lost before getting within torpedo range. Thus, beginning with launched in 1933, new cruisers were built without torpedoes, and torpedoes were removed from older heavy cruisers due to the perceived hazard of their being exploded by shell fire. The Japanese took exactly the opposite approach with cruiser torpedoes, and this proved crucial to their tactical victories in most of the numerous cruiser actions of 1942. Beginning with the launched in 1925, every Japanese heavy cruiser was armed with torpedoes, larger than any other cruisers'. By 1933 Japan had developed the Type 93 torpedo for these ships, eventually nicknamed "Long Lance" by the Allies. This type used compressed oxygen instead of compressed air, allowing it to achieve ranges and speeds unmatched by other torpedoes. It could achieve a range of at , compared with the US Mark 15 torpedo with at . The Mark 15 had a maximum range of at , still well below the "Long Lance". The Japanese were able to keep the Type 93's performance and oxygen power secret until the Allies recovered one in early 1943; thus the Allies faced a great threat they were not aware of in 1942. The Type 93 was also fitted to Japanese post-1930 light cruisers and the majority of their World War II destroyers. Heavy cruisers continued in use until after World War II, with some converted to guided missile cruisers for air defense or strategic attack and some used for shore bombardment by the United States in the Korean War and the Vietnam War. German pocket battleships The German was a series of three Panzerschiffe ("armored ships"), a form of heavily armed cruiser, designed and built by the German Reichsmarine in nominal accordance with restrictions imposed by the Treaty of Versailles. All three ships were launched between 1931 and 1934, and served with Germany's Kriegsmarine during World War II. Within the Kriegsmarine, the Panzerschiffe had the propaganda value of capital ships: heavy cruisers with battleship guns, torpedoes, and scout aircraft. The similar Swedish Panzerschiffe were tactically used as centers of battlefleets and not as cruisers. They were deployed by Nazi Germany in support of the German interests in the Spanish Civil War. Panzerschiff Admiral Graf Spee represented Germany in the 1937 Coronation Fleet Review. The British press referred to the vessels as pocket battleships, in reference to the heavy firepower contained in the relatively small vessels; they were considerably smaller than contemporary battleships, though at 28 knots they were slower than battlecruisers. At up to 16,000 tons at full load, they were not treaty-compliant 10,000-ton cruisers. And although their displacement and scale of armor protection were that of a heavy cruiser, their main armament was heavier than the guns of other nations' heavy cruisers, and the latter two members of the class also had tall conning towers resembling battleships.
The Panzerschiffe were listed as Ersatz replacements for retiring Reichsmarine coastal defense battleships, which added to their propaganda status in the Kriegsmarine as Ersatz battleships; within the Royal Navy, only battlecruisers HMS Hood, HMS Repulse and HMS Renown were capable of both outrunning and outgunning the Panzerschiffe. They were seen in the 1930s as a new and serious threat by both Britain and France. While the Kriegsmarine reclassified them as heavy cruisers in 1940, Deutschland-class ships continued to be called pocket battleships in the popular press. Large cruiser The American represented the supersized cruiser design. Due to the German pocket battleships, the , and rumored Japanese "super cruisers", all of which carried guns larger than the standard heavy cruiser's 8-inch size dictated by naval treaty limitations, the Alaskas were intended to be "cruiser-killers". While superficially appearing similar to a battleship/battlecruiser and mounting three triple turrets of 12-inch guns, their actual protection scheme and design resembled a scaled-up heavy cruiser design. Their hull classification symbol of CB (cruiser, big) reflected this. Anti-aircraft cruisers A precursor to the anti-aircraft cruiser was the Romanian British-built protected cruiser Elisabeta. After the start of World War I, her four 120 mm main guns were landed and her four 75 mm (12-pounder) secondary guns were modified for anti-aircraft fire. The development of the anti-aircraft cruiser began in 1935 when the Royal Navy re-armed and . Torpedo tubes and low-angle guns were removed from these World War I light cruisers and replaced with ten high-angle guns, with appropriate fire-control equipment to provide larger warships with protection against high-altitude bombers. A tactical shortcoming was recognised after completing six additional conversions of s. Having sacrificed anti-ship weapons for anti-aircraft armament, the converted anti-aircraft cruisers might themselves need protection against surface units. New construction was undertaken to create cruisers of similar speed and displacement with dual-purpose guns, which offered good anti-aircraft protection with anti-surface capability for the traditional light cruiser role of defending capital ships from destroyers. The first purpose built anti-aircraft cruiser was the British , completed in 1940–42. The US Navy's cruisers (CLAA: light cruiser with anti-aircraft capability) were designed to match the capabilities of the Royal Navy. Both Dido and Atlanta cruisers initially carried torpedo tubes; the Atlanta cruisers at least were originally designed as destroyer leaders, were originally designated CL (light cruiser), and did not receive the CLAA designation until 1949. The concept of the quick-firing dual-purpose gun anti-aircraft cruiser was embraced in several designs completed too late to see combat, including: , completed in 1948; , completed in 1949; two s, completed in 1947; two s, completed in 1953; , completed in 1955; , completed in 1959; and , and , all completed between 1959 and 1961. Most post-World War II cruisers were tasked with air defense roles. In the early 1950s, advances in aviation technology forced the move from anti-aircraft artillery to anti-aircraft missiles. Therefore, most modern cruisers are equipped with surface-to-air missiles as their main armament. Today's equivalent of the anti-aircraft cruiser is the guided missile cruiser (CAG/CLG/CG/CGN). 
World War II Cruisers participated in a number of surface engagements in the early part of World War II, along with escorting carrier and battleship groups throughout the war. In the later part of the war, Allied cruisers primarily provided anti-aircraft (AA) escort for carrier groups and performed shore bombardment. Japanese cruisers similarly escorted carrier and battleship groups in the later part of the war, notably in the disastrous Battle of the Philippine Sea and Battle of Leyte Gulf. In 1937–41 the Japanese, having withdrawn from all naval treaties, upgraded or completed the Mogami and es as heavy cruisers by replacing their triple turrets with twin turrets. Torpedo refits were also made to most heavy cruisers, resulting in up to sixteen tubes per ship, plus a set of reloads. In 1941 the 1920s light cruisers and were converted to torpedo cruisers with four guns and forty torpedo tubes. In 1944 Kitakami was further converted to carry up to eight Kaiten human torpedoes in place of ordinary torpedoes. Before World War II, cruisers were mainly divided into three types: heavy cruisers, light cruisers and auxiliary cruisers. Heavy cruiser tonnage reached 20–30,000 tons, speed 32–34 knots, endurance of more than 10,000 nautical miles, armor thickness of 127–203 mm. Heavy cruisers were equipped with eight or nine guns with a range of more than 20 nautical miles. They were mainly used to attack enemy surface ships and shore-based targets. In addition, there were 10–16 secondary guns with a caliber of less than . Also, dozens of automatic antiaircraft guns were installed to fight aircraft and small vessels such as torpedo boats. For example, in World War II, American Alaska-class cruisers were more than 30,000 tons, equipped with nine guns. Some cruisers could also carry three or four seaplanes to correct the accuracy of gunfire and perform reconnaissance. Together with battleships, these heavy cruisers formed powerful naval task forces, which dominated the world's oceans for more than a century. After the signing of the Washington Treaty on Arms Limitation in 1922, the tonnage and quantity of battleships, aircraft carriers and cruisers were severely restricted. In order not to violate the treaty, countries began to develop light cruisers. Light cruisers of the 1920s had displacements of less than 10,000 tons and a speed of up to 35 knots. They were equipped with 6–12 main guns with a caliber of 127–133 mm (5–5.5 inches). In addition, they were equipped with 8–12 secondary guns under 127 mm (5 in) and dozens of small caliber cannons, as well as torpedoes and mines. Some ships also carried 2–4 seaplanes, mainly for reconnaissance. In 1930 the London Naval Treaty allowed large light cruisers to be built, with the same tonnage as heavy cruisers and armed with up to fifteen guns. The Japanese Mogami class were built to this treaty's limit, the Americans and British also built similar ships. However, in 1939 the Mogamis were refitted as heavy cruisers with ten guns. 1939 to Pearl Harbor In December 1939, three British cruisers engaged the German "pocket battleship" Admiral Graf Spee (which was on a commerce raiding mission) in the Battle of the River Plate; Admiral Graf Spee then took refuge in neutral Montevideo, Uruguay. By broadcasting messages indicating capital ships were in the area, the British caused Admiral Graf Spees captain to think he faced a hopeless situation while low on ammunition and order his ship scuttled. 
On 8 June 1940 the German capital ships and , classed as battleships but with large cruiser armament, sank the aircraft carrier with gunfire. From October 1940 through March 1941 the German heavy cruiser (also known as "pocket battleship", see above) conducted a successful commerce-raiding voyage in the Atlantic and Indian Oceans. On 27 May 1941, attempted to finish off the German battleship with torpedoes, probably causing the Germans to scuttle the ship. Bismarck (accompanied by the heavy cruiser ) previously sank the battlecruiser and damaged the battleship with gunfire in the Battle of the Denmark Strait. On 19 November 1941 sank in a mutually fatal engagement with the German raider Kormoran in the Indian Ocean near Western Australia. Atlantic, Mediterranean, and Indian Ocean operations 1942–1944 Twenty-three British cruisers were lost to enemy action, mostly to air attack and submarines, in operations in the Atlantic, Mediterranean, and Indian Ocean. Sixteen of these losses were in the Mediterranean. The British included cruisers and anti-aircraft cruisers among convoy escorts in the Mediterranean and to northern Russia due to the threat of surface and air attack. Almost all cruisers in World War II were vulnerable to submarine attack due to a lack of anti-submarine sonar and weapons. Also, until 1943–44 the light anti-aircraft armament of most cruisers was weak. In July 1942 an attempt to intercept Convoy PQ 17 with surface ships, including the heavy cruiser Admiral Scheer, failed due to multiple German warships grounding, but air and submarine attacks sank 2/3 of the convoy's ships. In August 1942 Admiral Scheer conducted Operation Wunderland, a solo raid into northern Russia's Kara Sea. She bombarded Dikson Island but otherwise had little success. On 31 December 1942 the Battle of the Barents Sea was fought, a rare action for a Murmansk run because it involved cruisers on both sides. Four British destroyers and five other vessels were escorting Convoy JW 51B from the UK to the Murmansk area. Another British force of two cruisers ( and ) and two destroyers were in the area. Two heavy cruisers (one the "pocket battleship" Lützow), accompanied by six destroyers, attempted to intercept the convoy near North Cape after it was spotted by a U-boat. Although the Germans sank a British destroyer and a minesweeper (also damaging another destroyer), they failed to damage any of the convoy's merchant ships. A German destroyer was lost and a heavy cruiser damaged. Both sides withdrew from the action for fear of the other side's torpedoes. On 26 December 1943 the German capital ship Scharnhorst was sunk while attempting to intercept a convoy in the Battle of the North Cape. The British force that sank her was led by Vice Admiral Bruce Fraser in the battleship , accompanied by four cruisers and nine destroyers. One of the cruisers was the preserved
conjunctivitis, which may lead to blindness; and pneumonia. Conjunctivitis due to chlamydia typically occurs one week after birth (compared with chemical causes (within hours) or gonorrhea (2–5 days)). Other conditions A different serovar of Chlamydia trachomatis is also the cause of lymphogranuloma venereum, an infection of the lymph nodes and lymphatics. It usually presents with genital ulceration and swollen lymph nodes in the groin, but it may also manifest as rectal inflammation, fever or swollen lymph nodes in other regions of the body. Transmission Chlamydia can be transmitted during vaginal, anal, or oral sex, or direct contact with infected tissue such as conjunctiva. Chlamydia can also be passed from an infected mother to her baby during vaginal childbirth. It is assumed that the probability of becoming infected is proportionate to the number of bacteria one is exposed to. Pathophysiology Chlamydiae have the ability to establish long-term associations with host cells. When an infected host cell is starved for various nutrients such as amino acids (for example, tryptophan), iron, or vitamins, this has a negative consequence for Chlamydiae since the organism is dependent on the host cell for these nutrients. Long-term cohort studies indicate that approximately 50% of those infected clear within a year, 80% within two years, and 90% within three years. The starved chlamydiae enter a persistent growth state wherein they stop cell division and become morphologically aberrant by increasing in size. Persistent organisms remain viable as they are capable of returning to a normal growth state once conditions in the host cell improve. There is debate as to whether persistence has relevance. Some believe that persistent chlamydiae are the cause of chronic chlamydial diseases. Some antibiotics such as β-lactams have been found to induce a persistent-like growth state. Diagnosis The diagnosis of genital chlamydial infections evolved rapidly from the 1990s through 2006. Nucleic acid amplification tests (NAAT), such as polymerase chain reaction (PCR), transcription mediated amplification (TMA), and the DNA strand displacement amplification (SDA) now are the mainstays. NAAT for chlamydia may be performed on swab specimens sampled from the cervix (women) or urethra (men), on self-collected vaginal swabs, or on voided urine. NAAT has been estimated to have a sensitivity of approximately 90% and a specificity of approximately 99%, regardless of sampling from a cervical swab or by urine specimen. In women seeking an sexually transmitted infection (STI) clinic and a urine test is negative, a subsequent cervical swab has been estimated to be positive in approximately 2% of the time. At present, the NAATs have regulatory approval only for testing urogenital specimens, although rapidly evolving research indicates that they may give reliable results on rectal specimens. Because of improved test accuracy, ease of specimen management, convenience in specimen management, and ease of screening sexually active men and women, the NAATs have largely replaced culture, the historic gold standard for chlamydia diagnosis, and the non-amplified probe tests. The latter test is relatively insensitive, successfully detecting only 60–80% of infections in asymptomatic women, and often giving falsely-positive results. Culture remains useful in selected circumstances and is currently the only assay approved for testing non-genital specimens. 
Other methods also exist including: ligase chain reaction (LCR), direct fluorescent antibody testing, enzyme immunoassay, and cell culture. Rapid point-of-care tests are, as of 2020, not thought to be effective for diagnosing chlamydia in men of reproductive age and nonpregnant women because of high false-negative rates. Prevention Prevention is by not having sex, the use of condoms, or having sex only with partners who are not infected.
period between exposure and being able to infect others is thought to be on the order of two to six weeks. Symptoms in women may include vaginal discharge or burning with urination. Symptoms in men may include discharge from the penis, burning with urination, or pain and swelling of one or both testicles. The infection can spread to the upper genital tract in women, causing pelvic inflammatory disease, which may result in future infertility or ectopic pregnancy. Chlamydia infections can occur in other areas besides the genitals, including the anus, eyes, throat, and lymph nodes. Repeated chlamydia infections of the eyes that go without treatment can result in trachoma, a common cause of blindness in the developing world. Chlamydia can be spread during vaginal, anal, or oral sex, and can be passed from an infected mother to her baby during childbirth. The eye infections may also be spread by personal contact, flies, and contaminated towels in areas with poor sanitation. Infection by the bacterium Chlamydia trachomatis only occurs in humans. Diagnosis is often by screening which is recommended yearly in sexually active women under the age of twenty-five, others at higher risk, and at the first prenatal visit. Testing can be done on the urine or a swab of the cervix, vagina, or urethra. Rectal or mouth swabs are required to diagnose infections in those areas. Prevention is by not having sex, the use of condoms, or having sex only with persons who are not infected. Chlamydia can be cured by antibiotics with typically either azithromycin or doxycycline being used. Erythromycin or azithromycin is recommended in babies and during pregnancy. Sexual partners should also be treated, and infected people should be advised not to have sex for seven days and until symptom free. Gonorrhea, syphilis, and HIV should be tested for in those who have been infected. Following treatment people should be tested again after three months. Chlamydia is one of the most common sexually transmitted infections, affecting about 4.2% of women and 2.7% of men worldwide. In 2015, about 61 million new cases occurred globally. In the United States about 1.4 million cases were reported in 2014. Infections are most common among those between the ages of 15 and 25 and are more common in women than men. In 2015 infections resulted in about 200 deaths. The word chlamydia is from the Greek χλαμύδα, meaning "cloak". Signs and symptoms Genital disease Women Chlamydial infection of the cervix (neck of the womb) is a sexually transmitted infection which has no symptoms for around 70% of women infected. The infection can be passed through vaginal, anal, or oral sex. Of those who have an asymptomatic infection that is not detected by their doctor, approximately half will develop pelvic inflammatory disease (PID), a generic term for infection of the uterus, fallopian tubes, and/or ovaries. PID can cause scarring inside the reproductive organs, which can later cause serious complications, including chronic pelvic pain, difficulty becoming pregnant, ectopic (tubal) pregnancy, and other dangerous complications of pregnancy. Chlamydia is known as the "silent epidemic", as at least 70% of genital C. trachomatis infections in women (and 50% in men) are asymptomatic at the time of diagnosis, and can linger for months or years before being discovered. 
Signs and symptoms may include abnormal vaginal bleeding or discharge, abdominal pain, painful sexual intercourse, fever, painful urination or the urge to urinate more often than usual (urinary urgency). For sexually active women who are not pregnant, screening is recommended in those under 25 and others at risk of infection. Risk factors include a history of chlamydial or other sexually transmitted infection, new or multiple sexual partners, and inconsistent condom use. Guidelines recommend all women attending for emergency contraceptive are offered chlamydia testing, with studies showing up to 9% of women aged <25 years had chlamydia. Men In men, those with a chlamydial infection show symptoms of infectious inflammation of the urethra in about 50% of cases. Symptoms that may occur include: a painful or burning sensation when urinating, an unusual discharge from the penis, testicular pain or swelling, or fever. If left untreated, chlamydia in men can spread to the testicles causing epididymitis, which in rare cases can lead to sterility if not treated. Chlamydia is also a potential cause of prostatic inflammation in men, although the exact relevance in prostatitis is difficult to ascertain due to possible contamination from urethritis. Eye disease Trachoma is a chronic conjunctivitis caused by Chlamydia trachomatis. It was once the leading cause of blindness worldwide, but its role diminished from 15% of blindness cases by trachoma in 1995 to 3.6% in 2002. The infection
rash. Very rarely, yeast infections may become invasive, spreading to other parts of the body. This may result in fevers along with other symptoms depending on the parts involved. More than 20 types of Candida can cause infection with Candida albicans being the most common. Infections of the mouth are most common among children less than one month old, the elderly, and those with weak immune systems. Conditions that result in a weak immune system include HIV/AIDS, the medications used after organ transplantation, diabetes, and the use of corticosteroids. Other risks include dentures, following antibiotic therapy, and breastfeeding. Vaginal infections occur more commonly during pregnancy, in those with weak immune systems, and following antibiotic use. Individuals at risk for invasive candidiasis include low birth weight babies, people recovering from surgery, people admitted to intensive care units, and those with an otherwise compromised immune system. Efforts to prevent infections of the mouth include the use of chlorhexidine mouthwash in those with poor immune function and washing out the mouth following the use of inhaled steroids. Little evidence supports probiotics for either prevention or treatment, even among those with frequent vaginal infections. For infections of the mouth, treatment with topical clotrimazole or nystatin is usually effective. Oral or intravenous fluconazole, itraconazole, or amphotericin B may be used if these do not work. A number of topical antifungal medications may be used for vaginal infections, including clotrimazole. In those with widespread disease, an echinocandin such as caspofungin or micafungin is used. A number of weeks of intravenous amphotericin B may be used as an alternative. In certain groups at very high risk, antifungal medications may be used preventatively. Infections of the mouth occur in about 6% of babies less than a month old. About 20% of those receiving chemotherapy for cancer and 20% of those with AIDS also develop the disease. About three-quarters of women have at least one yeast infection at some time during their lives. Widespread disease is rare except in those who have risk factors. Signs and symptoms Signs and symptoms of candidiasis vary depending on the area affected. Most candidal infections result in minimal complications such as redness, itching, and discomfort, though complications may be severe or even fatal if left untreated in certain populations. In healthy (immunocompetent) persons, candidiasis is usually a localized infection of the skin, fingernails or toenails (onychomycosis), or mucosal membranes, including the oral cavity and pharynx (thrush), esophagus, and the genitalia (vagina, penis, etc.); less commonly in healthy individuals, the gastrointestinal tract, urinary tract, and respiratory tract are sites of candida infection. In immunocompromised individuals, Candida infections in the esophagus occur more frequently than in healthy individuals and have a higher potential of becoming systemic, causing a much more serious condition, a fungemia called candidemia. Symptoms of esophageal candidiasis include difficulty swallowing, painful swallowing, abdominal pain, nausea, and vomiting. Mouth Infection in the mouth is characterized by white discolorations in the tongue, around the mouth, and throat. Irritation may also occur, causing discomfort when swallowing. Thrush is commonly seen in infants. It is not considered abnormal in infants unless it lasts longer than a few weeks. 
Genitals Infection of the vagina or vulva may cause severe itching, burning, soreness, irritation, and a whitish or whitish-gray cottage cheese-like discharge. Symptoms of infection of the male genitalia (balanitis thrush) include red skin around the head of the penis, swelling, irritation, itchiness and soreness of the head of the penis, thick, lumpy discharge under the foreskin, unpleasant odour, difficulty retracting the foreskin (phimosis), and pain when passing urine or during sex. Skin Signs and symptoms of candidiasis in the skin include itching, irritation, and chafing or broken skin. Invasive infection Common symptoms of gastrointestinal candidiasis in healthy individuals are anal itching, belching, bloating, indigestion, nausea, diarrhea, gas, intestinal cramps, vomiting, and gastric ulcers. Perianal candidiasis can cause anal itching; the lesion can be red, papular, or ulcerative in appearance, and it is not considered to be a sexually transmissible disease. Abnormal proliferation of the candida in the gut may lead to dysbiosis. While it is not yet clear, this alteration may be the source of symptoms generally described as the irritable bowel syndrome, and other gastrointestinal diseases. Causes Candida yeasts are generally present in healthy humans, frequently part of the human body's normal oral and intestinal flora, and particularly on the skin; however, their growth is normally limited by the human immune system and by competition of other microorganisms, such as bacteria occupying the same locations in the human body. Candida requires moisture for growth, notably on the skin. For example, wearing wet swimwear for long periods of time is believed to be a risk factor. Candida can also cause diaper rashes in babies. In extreme cases, superficial infections of the skin or mucous membranes may enter the bloodstream and cause systemic Candida infections. Factors that increase the risk of candidiasis include HIV/AIDS, mononucleosis, cancer treatments, steroids, stress, antibiotic usage, diabetes, and nutrient deficiency. Hormone replacement therapy and infertility treatments may also be predisposing factors. Use of inhaled corticosteroids increases risk of candidiasis of the mouth. Inhaled corticosteroids with other risk factors such as antibiotics, oral glucocorticoids, not rinsing mouth after use of inhaled corticosteroids or high dose of inhaled corticosteroids put people at even higher risk. Treatment with antibiotics can lead to eliminating the yeast's natural competitors for resources in the oral and intestinal flora, thereby increasing the severity of the condition. A weakened or undeveloped immune system or metabolic illnesses are significant predisposing factors of candidiasis. Almost 15% of people with weakened immune systems develop a systemic illness caused by Candida species. Diets high in simple carbohydrates have been found to affect rates of oral candidiases. C. albicans was isolated from the vaginas of 19% of apparently healthy people, i.e., those who experienced few or no symptoms of infection. External use of detergents or douches or internal disturbances (hormonal or physiological) can perturb the normal vaginal flora, consisting of lactic acid bacteria, such as lactobacilli, and result in an overgrowth of Candida cells, causing symptoms of infection, such as local inflammation. Pregnancy and the use of oral contraceptives have been reported as risk factors. Diabetes mellitus and the use of antibiotics are also linked to increased rates of yeast infections. 
In penile candidiasis, the causes include sexual intercourse with an infected individual, low immunity, antibiotics, and diabetes. Male genital yeast infections are less common, but a yeast infection on the penis caused from direct contact via sexual intercourse with an infected partner is not uncommon. Breast-feeding mothers may also develop candidiasis on and around the nipple as a
result of moisture created by excessive milk-production. Vaginal candidiasis can cause congenital candidiasis in newborns.
Diagnosis In oral candidiasis, simply inspecting the person's mouth for white patches and irritation may make the diagnosis. A sample of the infected area may also be taken to determine what organism is causing the infection. Symptoms of vaginal candidiasis are also present in the more common bacterial vaginosis; aerobic vaginitis is distinct and should be excluded in the differential diagnosis. In a 2002 study, only 33% of women who were self-treating for a yeast infection actually had such an infection, while most had either bacterial vaginosis or a mixed-type infection. Diagnosis of a yeast infection is done either via microscopic examination or culturing. For identification by light microscopy, a scraping or swab of the affected
phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems. Independently, Adolf Hurwitz analyzed system stability using differential equations in 1877, resulting in what is now known as the Routh–Hurwitz theorem. A notable application of dynamic control was in the area of crewed flight. The Wright brothers made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds. By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft. Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics. Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship. The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant. Open-loop and closed-loop (feedback) control Fundamentally, there are two types of control loops: open loop control and closed loop (feedback) control. In open loop control, the control action from the controller is independent of the "process output" (or "controlled process variable" - PV). A good example of this is a central heating boiler controlled only by a timer, so that heat is applied for a constant time, regardless of the temperature of the building. The control action is the timed switching on/off of the boiler, the process variable is the building temperature, but neither is linked. In closed loop control, the control action from the controller is dependent on feedback from the process in the form of the value of the process variable (PV). 
In the case of the boiler analogy, a closed loop would include a thermostat to compare the building temperature (PV) with the temperature set on the thermostat (the set point - SP). This generates a controller output to maintain the building at the desired temperature by switching the boiler on and off. A closed loop controller, therefore, has a feedback loop which ensures the controller exerts a control action to manipulate the process variable to be the same as the "Reference input" or "set point". For this reason, closed loop controllers are also called feedback controllers. The definition of a closed loop control system according to the British Standard Institution is "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero." Likewise; "A Feedback Control System is a system which tends to maintain a prescribed relationship of one system variable to another by comparing functions of these variables and using the difference as a means of control." Other examples An example of a control system is a car's cruise control, which is a device designed to maintain vehicle speed at a constant desired or reference speed provided by the driver. The controller is the cruise control, the plant is the car, and the system is the car and the cruise control. The system output is the car's speed, and the control itself is the engine's throttle position which determines how much power the engine delivers. A primitive way to implement cruise control is simply to lock the throttle position when the driver engages cruise control. However, if the cruise control is engaged on a stretch of non-flat road, then the car will travel slower going uphill and faster when going downhill. This type of controller is called an open-loop controller because there is no feedback; no measurement of the system output (the car's speed) is used to alter the control (the throttle position.) As a result, the controller cannot compensate for changes acting on the car, like a change in the slope of the road. In a closed-loop control system, data from a sensor monitoring the car's speed (the system output) enters a controller which continuously compares the quantity representing the speed with the reference quantity representing the desired speed. The difference, called the error, determines the throttle position (the control). The result is to match the car's speed to the reference speed (maintain the desired system output). Now, when the car goes uphill, the difference between the input (the sensed speed) and the reference continuously determines the throttle position. As the sensed speed drops below the reference, the difference increases, the throttle opens, and engine power increases, speeding up the vehicle. In this way, the controller dynamically counteracts changes to the car's speed. The central idea of these control systems is the feedback loop, the controller affects the system output, which in turn is measured and fed back to the controller. Classical control theory To overcome the limitations of the open-loop controller, control theory introduces feedback. A closed-loop controller uses feedback to control states or outputs of a dynamical system. 
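The cruise-control example above can be made concrete with a small simulation. The sketch below is illustrative only and is not taken from any particular controller implementation; the car model, its constants, and the proportional gain are all invented for the example. It compares a locked-throttle open-loop law with a proportional feedback law when the road begins to climb.

```python
# Illustrative sketch only: a toy longitudinal car model under open-loop and
# closed-loop (proportional feedback) cruise control.  All constants below
# (mass, drag coefficient, engine force, gain) are invented for illustration.

MASS = 1200.0          # kg
DRAG = 40.0            # N per (m/s), linearised drag
ENGINE_FORCE = 4000.0  # N at full throttle
G = 9.81               # m/s^2
V_REF = 25.0           # desired speed, m/s

def simulate(throttle_law, v0=V_REF, dt=0.1, t_end=120.0):
    """Euler-integrate the toy car model with the given throttle law."""
    v, t = v0, 0.0
    while t < t_end:
        slope = 0.03 if t >= 30.0 else 0.0        # road starts climbing at t = 30 s
        u = min(max(throttle_law(v), 0.0), 1.0)   # throttle limited to [0, 1]
        accel = (ENGINE_FORCE * u - DRAG * v - MASS * G * slope) / MASS
        v += accel * dt
        t += dt
    return v

# Open loop: throttle locked at the value that holds V_REF on a level road.
def open_loop(_v):
    return DRAG * V_REF / ENGINE_FORCE

# Closed loop: proportional feedback on the speed error (the SP - PV error).
def closed_loop(v):
    return 0.5 * (V_REF - v)

print("open-loop speed on the hill:   %.1f m/s" % simulate(open_loop))
print("closed-loop speed on the hill: %.1f m/s" % simulate(closed_loop))
```

Under these invented constants the open-loop car slows markedly once the slope appears, while the proportional feedback law holds the speed close to the reference, which is the behaviour the cruise-control discussion above describes.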
Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which are measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop. Closed-loop controllers have the following advantages over open-loop controllers: disturbance rejection (such as hills in the cruise control example above); guaranteed performance even with model uncertainties, when the model structure does not match perfectly the real process and the model parameters are not exact; unstable processes can be stabilized; reduced sensitivity to parameter variations; and improved reference tracking performance. In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and serves to further improve reference tracking performance. A common closed-loop controller architecture is the PID controller. Closed-loop transfer function The output of the system y(t) is fed back through a sensor measurement F to a comparison with the reference value r(t). The controller C then takes the error e (difference) between the reference and the output to change the inputs u to the system under control P. This kind of controller is a closed-loop controller or feedback controller. This is called a single-input-single-output (SISO) control system; MIMO (i.e., Multi-Input-Multi-Output) systems, with more than one input/output, are common. In such cases variables are represented through vectors instead of simple scalar values. For some distributed parameter systems the vectors may be infinite-dimensional (typically functions). If we assume the controller C, the plant P, and the sensor F are linear and time-invariant (i.e., elements of their transfer functions C(s), P(s), and F(s) do not depend on time), the systems above can be analysed using the Laplace transform on the variables. This gives the following relations: Y(s) = P(s)U(s), U(s) = C(s)E(s), and E(s) = R(s) − F(s)Y(s). Solving for Y(s) in terms of R(s) gives Y(s) = [P(s)C(s) / (1 + F(s)P(s)C(s))] R(s) = H(s)R(s). The expression H(s) = P(s)C(s) / (1 + F(s)P(s)C(s)) is referred to as the closed-loop transfer function of the system. The numerator is the forward (open-loop) gain from r to y, and the denominator is one plus the gain in going around the feedback loop, the so-called loop gain. If |P(s)C(s)| >> 1, i.e., it has a large norm for each value of s, and if |F(s)| ≈ 1, then Y(s) is approximately equal to R(s) and the output closely tracks the reference input. PID feedback control A proportional–integral–derivative controller (PID controller) is a control-loop feedback mechanism widely used in control systems. A PID controller continuously calculates an error value as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional, integral, and derivative terms. PID is an initialism for Proportional-Integral-Derivative, referring to the three terms operating on the error signal to produce a control signal. The theoretical understanding and application date from the 1920s, and PID controllers are implemented in nearly all analogue control systems: originally in mechanical controllers, then using discrete electronics, and later in industrial process computers. The PID controller is probably the most-used feedback control design.
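Before turning to the PID form below, the closed-loop transfer function and the loop-gain approximation just described can be checked numerically. The short sketch below evaluates H(s) = P(s)C(s)/(1 + F(s)P(s)C(s)) on the imaginary axis for an assumed first-order plant P(s) = 1/(s + 1), a unity sensor F(s) = 1, and a purely proportional controller C(s) = K; these choices are invented examples, not taken from the text. As the gain K (and with it the loop gain) grows, |H(jω)| approaches 1, i.e. the output tracks the reference.

```python
# Illustrative sketch only: evaluating the closed-loop transfer function
# H(s) = P(s)C(s) / (1 + F(s)P(s)C(s)) for an assumed plant, sensor, and
# proportional controller.  The plant and gains are invented examples.

def closed_loop_gain(s, K):
    P = 1.0 / (s + 1.0)   # assumed first-order plant
    C = K                 # proportional controller
    F = 1.0               # ideal unity-gain sensor
    return (P * C) / (1.0 + F * P * C)

omega = 1.0               # evaluate at 1 rad/s, i.e. s = j*omega
for K in (1.0, 10.0, 100.0):
    magnitude = abs(closed_loop_gain(1j * omega, K))
    print("K = %6.1f  ->  |H(j%.0f)| = %.3f" % (K, omega, magnitude))
```

Running this shows the magnitude of H(jω) climbing toward 1 as K increases, which is exactly the high-loop-gain tracking argument made above.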
If u(t) is the control signal sent to the system, y(t) is the measured output, r(t) is the desired output, and e(t) = r(t) − y(t) is the tracking error, a PID controller has the general form u(t) = K_P e(t) + K_I ∫ e(τ) dτ + K_D de(t)/dt. The desired closed loop dynamics is obtained by adjusting the three parameters K_P, K_I and K_D, often iteratively by "tuning" and without specific knowledge of a plant model. Stability can often be ensured using only the proportional term. The integral term permits the rejection of a step disturbance (often a striking specification in process control). The derivative term is used to provide damping or shaping of the response. PID controllers are the most well-established class of control systems: however, they cannot be used in several more complicated cases, especially if MIMO systems are considered. Applying Laplace transformation results in the transformed PID controller equation U(s) = (K_P + K_I/s + K_D s) E(s), with the PID controller transfer function C(s) = K_P + K_I/s + K_D s. As an example of tuning a PID controller in the closed-loop system H(s), consider a first-order plant given by P(s) = A / (1 + s T_P), where A and T_P are some constants. The plant output is fed back through F(s) = 1 / (1 + s T_F), where T_F is also a constant. Now if we set K_P = K(1 + T_D/T_I), K_I = K/T_I, and K_D = K T_D, we can express the PID controller transfer function in series form as C(s) = K (1 + 1/(s T_I)) (1 + s T_D). Plugging P(s), F(s), and C(s) into the closed-loop transfer function H(s), we find that by setting K = 1/A, T_I = T_F and T_D = T_P, we obtain H(s) = 1. With this tuning in this example, the system output follows the reference input exactly. However, in practice, a pure differentiator is neither physically realizable nor desirable due to amplification of noise and resonant modes in the system. Therefore, a phase-lead compensator-type approach or a differentiator with low-pass roll-off is used instead. Linear and nonlinear control theory The field of control theory can be divided into two branches: Linear control theory – This applies to systems made of devices which obey the superposition principle, which means roughly that the output is proportional to the input. They are governed by linear differential equations. A major subclass is systems which in addition have parameters which do not change with time, called linear time invariant (LTI) systems. These systems are amenable to powerful frequency domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z transform, Bode plot, root locus, and Nyquist stability criterion. These lead to a description of the system using terms like bandwidth, frequency response, eigenvalues, gain, resonant frequencies, zeros and poles, which give solutions for system response and design techniques for most systems of interest. Nonlinear control theory – This covers a wider class of systems that do not obey the superposition principle, and applies to more real-world systems because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The few mathematical techniques which have been developed to handle them are more difficult and much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, the Lyapunov stability theorem, and describing functions. Nonlinear systems are often analyzed using numerical methods on computers, for example by simulating their operation using a simulation language. If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them by a linear system using perturbation theory, and linear techniques can be used.
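The PID law above is simple to implement in discrete time. The sketch below is a minimal illustration, not a production controller: it applies a textbook PID update to an assumed first-order plant dy/dt = (A·u − y)/T_P, with gains and plant constants invented for the example, and it uses an unfiltered backward-difference derivative, so it inherits the noise-amplification caveat mentioned above.

```python
# Illustrative sketch only: a discrete-time PID controller driving an assumed
# first-order plant  dy/dt = (A*u - y) / T_P.  Gains and constants are
# invented for the example; the derivative term is an unfiltered backward
# difference, which a practical design would normally low-pass filter.

def pid(error, state, Kp, Ki, Kd, dt):
    """One PID update; `state` carries the running integral and last error."""
    integral, last_error = state
    integral += error * dt
    derivative = (error - last_error) / dt
    u = Kp * error + Ki * integral + Kd * derivative
    return u, (integral, error)

def run(Kp=2.0, Ki=0.5, Kd=0.1, r=1.0, A=2.0, T_P=5.0, dt=0.01, t_end=20.0):
    """Simulate the closed loop and return the final plant output."""
    y, state = 0.0, (0.0, 0.0)
    for _ in range(int(t_end / dt)):
        u, state = pid(r - y, state, Kp, Ki, Kd, dt)
        y += dt * (A * u - y) / T_P       # Euler step of the plant
    return y

# With integral action the output settles close to the setpoint r = 1.
print("plant output after 20 s: %.3f" % run())
```

With these illustrative gains the response is well damped and the integral term removes the steady-state error, mirroring the roles the text assigns to the proportional, integral, and derivative terms.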
Analysis techniques - frequency domain and time domain Mathematical techniques for analyzing and designing control systems fall into two different categories: Frequency domain – In this type the values of the state variables, the mathematical variables representing the system's input, output and feedback are represented as functions of frequency. The input signal and the system's transfer function are converted from time functions to functions of frequency by a transform such as the Fourier transform, Laplace transform, or Z transform. The advantage of this technique is that it results in a simplification of the mathematics; the differential equations that represent the system are replaced by algebraic equations in the frequency domain which is much simpler to solve. However, frequency domain techniques can only be used with linear systems, as mentioned above. Time-domain state space representation – In this type the values of the state variables are represented as functions of time. With this model, the system being analyzed is represented by one or more differential equations. Since frequency domain techniques are limited to linear systems, time domain is widely used to analyze real-world nonlinear systems. Although these are more difficult to solve, modern computer simulation techniques such as simulation languages have made their analysis routine. In contrast to the frequency domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system
machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality. To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV), and compares it with the reference or set point (SP). The difference between actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. This is the basis for the advanced type of automation that revolutionized manufacturing, aircraft, communications and other industries. This is feedback control, which involves taking measurements using a sensor and making calculated adjustments to keep the measured variable within a set range by means of a "final control element", such as a control valve. Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system. Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, Charles Sturm and, in 1895, Adolf Hurwitz, who all contributed to the establishment of control stability criteria; and from 1922 onwards, the development of PID control theory by Nicolas Minorsky. Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs - thus control theory also has applications in life sciences, computer engineering, sociology and operations research. History Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled On Governors. A centrifugal governor was already used to regulate the velocity of windmills.
By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft. Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics. Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship. The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant. Open-loop and closed-loop (feedback) control Fundamentally, there are two types of control loops: open loop control and closed loop (feedback) control. In open loop control, the control action from the controller is independent of the "process output" (or "controlled process variable" - PV). A good example of this is a central heating boiler controlled only by a timer, so that heat is applied for a constant time, regardless of the temperature of the building. The control action is the timed switching on/off of the boiler, the process variable is the building temperature, but the two are not linked. In closed loop control, the control action from the controller is dependent on feedback from the process in the form of the value of the process variable (PV). In the case of the boiler analogy, a closed loop would include a thermostat to compare the building temperature (PV) with the temperature set on the thermostat (the set point - SP). This generates a controller output to maintain the building at the desired temperature by switching the boiler on and off. A closed loop controller, therefore, has a feedback loop which ensures the controller exerts a control action to manipulate the process variable to be the same as the "Reference input" or "set point". For this reason, closed loop controllers are also called feedback controllers. The definition of a closed loop control system according to the British Standards Institution is "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero."
Likewise: "A Feedback Control System is a system which tends to maintain a prescribed relationship of one system variable to another by comparing functions of these variables and using the difference as a means of control." Other examples An example of a control system is a car's cruise control, which is a device designed to maintain vehicle speed at a constant desired or reference speed provided by the driver. The controller is the cruise control, the plant is the car, and the system is the car and the cruise control. The system output is the car's speed, and the control itself is the engine's throttle position which determines how much power the engine delivers. A primitive way to implement cruise control is simply to lock the throttle position when the driver engages cruise control. However, if the cruise control is engaged on a stretch of non-flat road, then the car will travel slower going uphill and faster when going downhill. This type of controller is called an open-loop controller because there is no feedback; no measurement of the system output (the car's speed) is used to alter the control (the throttle position). As a result, the controller cannot compensate for changes acting on the car, like a change in the slope of the road. In a closed-loop control system, data from a sensor monitoring the car's speed (the system output) enters a controller which continuously compares the quantity representing the speed with the reference quantity representing the desired speed. The difference, called the error, determines the throttle position (the control). The result is to match the car's speed to the reference speed (maintain the desired system output). Now, when the car goes uphill, the difference between the input (the sensed speed) and the reference continuously determines the throttle position. As the sensed speed drops below the reference, the difference increases, the throttle opens, and engine power increases, speeding up the vehicle. In this way, the controller dynamically counteracts changes to the car's speed. The central idea of these control systems is the feedback loop: the controller affects the system output, which in turn is measured and fed back to the controller. Classical control theory To overcome the limitations of the open-loop controller, control theory introduces feedback. A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which are measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop. Closed-loop controllers have the following advantages over open-loop controllers: disturbance rejection (such as hills in the cruise control example above); guaranteed performance even with model uncertainties, when the model structure does not match the real process perfectly and the model parameters are not exact; stabilization of unstable processes; reduced sensitivity to parameter variations; and improved reference tracking performance. In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and serves to further improve reference tracking performance. A common closed-loop controller architecture is the PID controller.
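The cruise-control example can be made concrete in a few lines of code. The sketch below is an illustration added here rather than part of the original text: the car model (mass, drag), the proportional gain and the hill disturbance are invented numbers, chosen only to show how a closed loop reacts to a disturbance.

```python
# Illustrative sketch only (not from the original article): a crude closed-loop
# cruise control using a proportional controller. The car model, gain and hill
# profile are assumed values chosen to demonstrate disturbance rejection.
import math

def simulate_cruise_control(kp=800.0, setpoint=25.0, dt=0.1, steps=600):
    """Euler-integrate v' = (F_throttle - b*v - m*g*sin(theta)) / m."""
    m, b, g = 1500.0, 50.0, 9.81      # mass [kg], linear drag [N*s/m], gravity [m/s^2]
    v = 0.0                           # initial speed [m/s]
    history = []
    for k in range(steps):
        t = k * dt
        theta = math.radians(4.0) if t > 30.0 else 0.0      # a 4-degree hill begins at t = 30 s
        error = setpoint - v                                # SP - PV error
        throttle_force = min(max(kp * error, 0.0), 4000.0)  # proportional action, saturated
        v += dt * (throttle_force - b * v - m * g * math.sin(theta)) / m
        history.append((t, v))
    return history

if __name__ == "__main__":
    for t, v in simulate_cruise_control()[::100]:
        print(f"t = {t:5.1f} s  speed = {v:5.2f} m/s")
```

Running the sketch shows the speed dipping when the hill begins and then partially recovering; because the controller here is proportional only, a steady-state error remains, which is one motivation for the integral term of the PID controller discussed next.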
Closed-loop transfer function The output of the system y(t) is fed back through a sensor measurement F to a comparison with the reference value r(t). The controller C then takes the error e (difference) between the reference and the output to change the inputs u to the system under control P. This kind of controller is a closed-loop controller or feedback controller. This is called a single-input-single-output (SISO) control system; MIMO (i.e., Multi-Input-Multi-Output) systems, with more than one input/output, are common. In such cases variables are represented through vectors instead of simple scalar values. For some distributed parameter systems the vectors may be infinite-dimensional (typically functions). If we assume the controller C, the plant P, and the sensor F are linear and time-invariant (i.e., the elements of their transfer functions C(s), P(s), and F(s) do not depend on time), the systems above can be analysed using the Laplace transform on the variables. This gives the following relations: Y(s) = P(s)U(s), U(s) = C(s)E(s), and E(s) = R(s) − F(s)Y(s). Solving for Y(s) in terms of R(s) gives Y(s) = [P(s)C(s) / (1 + F(s)P(s)C(s))] R(s). The expression H(s) = P(s)C(s) / (1 + F(s)P(s)C(s)) is referred to as the closed-loop transfer function of the system. The numerator is the forward (open-loop) gain from r to y, and the denominator is one plus the gain in going around the feedback loop, the so-called loop gain. If |P(s)C(s)| >> 1, i.e., it has a large norm for each value of s, and if |F(s)| ≈ 1, then Y(s) is approximately equal to R(s) and the output closely tracks the reference input. PID feedback control A proportional–integral–derivative controller (PID controller) is a control loop feedback mechanism widely used in control systems. A PID controller continuously calculates an error value as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional, integral, and derivative terms. PID is an initialism for Proportional-Integral-Derivative, referring to the three terms operating on the error signal to produce a control signal. The theoretical understanding and application date from the 1920s, and PID controllers are implemented in nearly all analogue control systems: originally in mechanical controllers, then using discrete electronics and later in industrial process computers. The PID controller is probably the most-used feedback control design. If u(t) is the control signal sent to the system, y(t) is the measured output, r(t) is the desired output, and e(t) = r(t) − y(t) is the tracking error, a PID controller has the general form u(t) = K_P e(t) + K_I ∫ e(τ) dτ + K_D de(t)/dt. The desired closed loop dynamics is obtained by adjusting the three parameters K_P, K_I and K_D, often iteratively by "tuning" and without specific knowledge of a plant model. Stability can often be ensured using only the proportional term. The integral term permits the rejection of a step disturbance (often a striking specification in process control). The derivative term is used to provide damping or shaping of the response. PID controllers are the most well-established class of control systems; however, they cannot be used in several more complicated cases, especially if MIMO systems are considered. Applying the Laplace transformation results in the transformed PID controller equation U(s) = (K_P + K_I/s + K_D s) E(s), with the PID controller transfer function C(s) = K_P + K_I/s + K_D s.
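In discrete time, the PID law above is typically implemented by accumulating the error for the integral term and differencing it for the derivative term. The sketch below is a minimal illustration; the class name, gains and the toy first-order process are assumptions made for this example, not taken from the text.

```python
# Illustrative sketch only: a textbook discrete-time PID controller implementing
# u = Kp*e + Ki*integral(e) + Kd*de/dt with rectangular integration and a
# backward difference. Names, gains and the toy process below are assumed.

class PIDController:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # running sum approximating the integral term
        self.prev_error = 0.0     # previous error, for the derivative term

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement                     # SP - PV error
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

if __name__ == "__main__":
    # Drive a toy first-order process x' = (u - x)/tau toward a setpoint of 1.0.
    dt, tau, x = 0.01, 0.5, 0.0
    pid = PIDController(kp=2.0, ki=1.0, kd=0.05, dt=dt)
    for _ in range(1000):                  # simulate 10 seconds
        u = pid.update(1.0, x)
        x += dt * (u - x) / tau
    print(f"process output after 10 s: {x:.3f}")   # settles near the setpoint
```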
As an example of tuning a PID controller in the closed-loop system H(s), consider a first-order plant given by P(s) = A / (1 + sT_P), where A and T_P are some constants. The plant output is fed back through F(s) = 1 / (1 + sT_F), where T_F is also a constant. Now if we set K_P = K(1 + T_D/T_I), K_I = K/T_I, and K_D = K T_D, we can express the PID controller transfer function in series form as C(s) = K (1 + 1/(sT_I)) (1 + sT_D). Plugging P(s), F(s), and C(s) into the closed-loop transfer function H(s), we find that H(s) = 1 by setting K = 1/A, T_I = T_F and T_D = T_P. With this tuning in this example, the system output follows the reference input exactly. However, in practice, a pure differentiator is neither physically realizable nor desirable due to amplification of noise and resonant modes in the system. Therefore, a phase-lead compensator type approach or a differentiator with low-pass roll-off is used instead.
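The cancellation claimed in this example can be checked symbolically. The sketch below is an added illustration that assumes the SymPy library: it builds P(s), F(s) and the series-form C(s) defined above, forms the closed-loop transfer function P C / (1 + F P C), and confirms that the tuning K = 1/A, T_I = T_F, T_D = T_P reduces it to 1.

```python
# Added illustration (assumes the SymPy library): symbolic check that the tuning
# K = 1/A, T_I = T_F, T_D = T_P makes the closed-loop transfer function equal to 1.
import sympy as sp

s, A, K, T_P, T_F, T_I, T_D = sp.symbols('s A K T_P T_F T_I T_D', positive=True)

P = A / (1 + s * T_P)                        # first-order plant from the example
F = 1 / (1 + s * T_F)                        # feedback (sensor) transfer function
C = K * (1 + 1 / (s * T_I)) * (1 + s * T_D)  # PID controller in series form

H = sp.simplify(P * C / (1 + F * P * C))     # closed-loop transfer function
H_tuned = sp.simplify(H.subs({K: 1 / A, T_I: T_F, T_D: T_P}))

print(H_tuned)   # prints 1, i.e. Y(s) = R(s) for this idealized tuning
```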
Linear and nonlinear control theory The field of control theory can be divided into two branches: Linear control theory – This applies to systems made of devices which obey the superposition principle, which means roughly that the output is proportional to the input. They are governed by linear differential equations. A major subclass is systems which in addition have parameters which do not change with time, called linear time invariant (LTI) systems. These systems are amenable to powerful frequency domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z transform, Bode plot, root locus, and Nyquist stability criterion. These lead to a description of the system using terms like bandwidth, frequency response, eigenvalues, gain, resonant frequencies, zeros and poles, which give solutions for system response and design techniques for most systems of interest. Nonlinear control theory – This covers a wider class of systems that do not obey the superposition principle, and applies to more real-world systems because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The few mathematical techniques which have been developed to handle them are more difficult and much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, Lyapunov stability theorem, and describing functions. Nonlinear systems are often analyzed using numerical methods on computers, for example by simulating their operation using a simulation language. If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them by a linear system using perturbation theory, and linear techniques can be used. Analysis techniques - frequency domain and time domain Mathematical techniques for analyzing and designing control systems fall into two different categories: Frequency domain – In this type the values of the state variables, the mathematical variables representing the system's input, output and feedback, are represented as functions of frequency. The input signal and the system's transfer function are converted from time functions to functions of frequency by a transform such as the Fourier transform, Laplace transform, or Z transform. The advantage of this technique is that it results in a simplification of the mathematics; the differential equations that represent the system are replaced by algebraic equations in the frequency domain, which are much simpler to solve. However, frequency domain techniques can only be used with linear systems, as mentioned above. Time-domain state space representation – In this type the values of the state variables are represented as functions of time. With this model, the system being analyzed is represented by one or more differential equations. Since frequency domain techniques are limited to linear systems, the time domain is widely used to analyze real-world nonlinear systems. Although these are more difficult to solve, modern computer simulation techniques such as simulation languages have made their analysis routine. In contrast to the frequency domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the
performed, the applied force separates the articular surfaces of a fully encapsulated synovial joint, which in turn creates a reduction in pressure within the joint cavity. In this low-pressure environment, some of the gases that are dissolved in the synovial fluid (which are naturally found in all bodily fluids) leave the solution, making a bubble, or cavity, which rapidly collapses upon itself, resulting in a "clicking" sound. The contents of the resultant gas bubble are thought to be mainly carbon dioxide, oxygen and nitrogen. The effects of this process will remain for a period of time known as the "refractory period", lasting about twenty minutes, during which the joint cannot be "re-cracked" while the gases are slowly reabsorbed into the synovial fluid. There is some evidence that ligament laxity may be associated with an increased tendency to cavitate. In 2015, research showed that bubbles remained in the fluid after cracking, suggesting that the cracking sound was produced when the bubble within the joint was formed, not when it collapsed. In 2018, a team in France created a mathematical simulation of what happens in a joint just before it cracks. The team concluded that the sound is caused by bubbles' collapse, and the bubbles observed in the fluid are the result of a partial collapse. Due to the theoretical basis and lack of physical experimentation, the scientific community is still not fully convinced of this conclusion. The snapping of tendons or scar tissue over a prominence (as in snapping hip syndrome) can also generate a loud snapping or popping sound. Effects The common claim
these types. This is possible if the relevant bonding is easy to show in one dimension. An example is the condensed molecular/chemical formula for ethanol, which is CH3-CH2-OH or CH3CH2OH. However, even a condensed chemical formula is necessarily limited in its ability to show complex bonding relationships between atoms, especially atoms that have bonds to four or more different substituents. Since a chemical formula must be expressed as a single line of chemical element symbols, it often cannot be as informative as a true structural formula, which is a graphical representation of the spatial relationship between atoms in chemical compounds (see for example the figure for butane structural and chemical formulae, at right). For reasons of structural complexity, a single condensed chemical formula (or semi-structural formula) may correspond to different molecules, known as isomers. For example glucose shares its molecular formula C6H12O6 with a number of other sugars, including fructose, galactose and mannose. Linear equivalent chemical names exist that can and do specify uniquely any complex structural formula (see chemical nomenclature), but such names must use many terms (words), rather than the simple element symbols, numbers, and simple typographical symbols that define a chemical formula. Chemical formulae may be used in chemical equations to describe chemical reactions and other chemical transformations, such as the dissolving of ionic compounds into solution. While, as noted, chemical formulae do not have the full power of structural formulae to show chemical relationships between atoms, they are sufficient to keep track of numbers of atoms and numbers of electrical charges in chemical reactions, thus balancing chemical equations so that these equations can be used in chemical problems involving conservation of atoms, and conservation of electric charge. Overview A chemical formula identifies each constituent element by its chemical symbol and indicates the proportionate number of atoms of each element. In empirical formulae, these proportions begin with a key element and then assign numbers of atoms of the other elements in the compound, by ratios to the key element. For molecular compounds, these ratio numbers can all be expressed as whole numbers. For example, the empirical formula of ethanol may be written C2H6O because the molecules of ethanol all contain two carbon atoms, six hydrogen atoms, and one oxygen atom. Some types of ionic compounds, however, cannot be written with entirely whole-number empirical formulae. An example is boron carbide, whose formula of CBn is a variable non-whole number ratio with n ranging from over 4 to more than 6.5. When the chemical compound of the formula consists of simple molecules, chemical formulae often employ ways to suggest the structure of the molecule. These types of formulae are variously known as molecular formulae and condensed formulae. A molecular formula enumerates the number of atoms to reflect those in the molecule, so that the molecular formula for glucose is C6H12O6 rather than the glucose empirical formula, which is CH2O. However, except for very simple substances, molecular chemical formulae lack needed structural information, and are ambiguous. For simple molecules, a condensed (or semi-structural) formula is a type of chemical formula that may fully imply a correct structural formula. For example, ethanol may be represented by the condensed chemical formula CH3CH2OH, and dimethyl ether by the condensed formula CH3OCH3. 
These two molecules have the same empirical and molecular formulae (C2H6O), but may be differentiated by the condensed formulae shown, which are sufficient to represent the full structure of these simple organic compounds. Condensed chemical formulae may also be used to represent ionic compounds that do not exist as discrete molecules, but nonetheless do contain covalently bound clusters within them. These polyatomic ions are groups of atoms that are covalently bound together and have an overall ionic charge, such as the sulfate ion. Each polyatomic ion in a compound is written individually in order to illustrate the separate groupings. For example, the compound dichlorine hexoxide has an empirical formula ClO3, and molecular formula Cl2O6, but in liquid or solid forms, this compound is more correctly shown by an ionic condensed formula [ClO2]+[ClO4]−, which illustrates that this compound consists of ClO2+ ions and ClO4− ions. In such cases, the condensed formula need only be complex enough to show at least one of each ionic species. Chemical formulae as described here are distinct from the far more complex chemical systematic names that are used in various systems of chemical nomenclature. For example, one systematic name for glucose is (2R,3S,4R,5R)-2,3,4,5,6-pentahydroxyhexanal. This name, interpreted by the rules behind it, fully specifies glucose's structural formula, but the name is not a chemical formula as usually understood, and uses terms and words not used in chemical formulae. Such names, unlike basic formulae, may be able to represent full structural formulae without graphs. Empirical formula In chemistry, the empirical formula of a chemical is a simple expression of the relative number of each type of atom or ratio of the elements in the compound. Empirical formulae are the standard for ionic compounds, such as CaCl2, and for macromolecules, such as SiO2. An empirical formula makes no reference to isomerism, structure, or absolute number of atoms. The term empirical refers to the process of elemental analysis, a technique of analytical chemistry used to determine the relative percent composition of a pure chemical substance by element. For example, hexane has a molecular formula of C6H14, or structurally CH3CH2CH2CH2CH2CH3, implying that it has a chain structure of 6 carbon atoms, and 14 hydrogen atoms. However, the empirical formula for hexane is C3H7. Likewise the empirical formula for hydrogen peroxide, H2O2, is simply HO, expressing the 1:1 ratio of component elements. Formaldehyde and acetic acid have the same empirical formula, CH2O. This is the actual chemical formula for formaldehyde, but acetic acid has double the number of atoms. Molecular formula Molecular formulae indicate the simple numbers of each type of atom in a molecule of a molecular substance. They are the same as empirical formulae for molecules that only have one atom of a particular type, but otherwise may have larger numbers. An example of the difference is the empirical formula for glucose, which is CH2O (ratio 1:2:1), while its molecular formula is C6H12O6 (number of atoms 6:12:6). For water, both formulae are H2O. A molecular formula provides more information about a molecule than its empirical formula, but is more difficult to establish. A molecular formula shows the number of elements in a molecule, and determines whether it is a binary compound, ternary compound, quaternary compound, or has even more elements.
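The relationship between a molecular formula and its empirical formula is simply a division of all atom counts by their greatest common divisor. The sketch below is an added illustration, not part of the original article; the parser is deliberately limited to plain formulae such as C6H12O6, without parentheses, hydrates or ionic charges.

```python
# Added illustration: reduce a simple molecular formula to its empirical formula by
# dividing all atom counts by their greatest common divisor. The parser handles only
# plain formulae such as "C6H12O6" (no parentheses, hydrates or ionic charges).
import re
from functools import reduce
from math import gcd

def empirical_formula(molecular: str) -> str:
    counts: dict[str, int] = {}
    for element, number in re.findall(r'([A-Z][a-z]?)(\d*)', molecular):
        counts[element] = counts.get(element, 0) + (int(number) if number else 1)
    divisor = reduce(gcd, counts.values())
    return ''.join(f"{element}{count // divisor if count // divisor > 1 else ''}"
                   for element, count in counts.items())

print(empirical_formula("C6H12O6"))   # CH2O  -- glucose
print(empirical_formula("H2O2"))      # HO    -- hydrogen peroxide
print(empirical_formula("C6H14"))     # C3H7  -- hexane
```

For glucose this reproduces the CH2O ratio quoted above, and for hexane the C3H7 ratio.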
Condensed formula The connectivity of a molecule often has a strong influence on its physical and chemical properties and behavior. Two molecules composed of the same numbers of the same types of atoms (i.e. a pair of isomers) might have completely different chemical and/or physical properties if the atoms are connected differently or in different positions. In such cases, a structural formula is useful, as it illustrates which atoms are bonded to which other ones. From the connectivity, it is often possible to deduce the approximate shape of the molecule. A condensed chemical formula may represent the types and spatial arrangement of bonds in a simple chemical substance, though it does not necessarily specify isomers or complex structures. For example, ethane consists of two carbon atoms single-bonded to each other, with each carbon atom having three hydrogen atoms bonded to it. Its chemical formula can be rendered as CH3CH3. In ethylene there is a double bond between the carbon atoms (and thus each carbon only has two hydrogens), therefore the chemical formula may be written: CH2CH2, and the fact that there is a double bond between the carbons is implicit because carbon has a valence of four. However, a more explicit method is to write H2C=CH2 or less commonly H2C::CH2. The two lines (or two pairs of dots) indicate that a double
very large number of beetle species poses special problems for classification. Some families contain tens of thousands of species, and need to be divided into subfamilies and tribes. This immense number led the evolutionary biologist J. B. S. Haldane to quip, when some theologians asked him what could be inferred about the mind of the Creator from the works of His Creation, "An inordinate fondness for beetles". Polyphaga is the largest suborder, containing more than 300,000 described species in more than 170 families, including rove beetles (Staphylinidae), scarab beetles (Scarabaeidae), blister beetles (Meloidae), stag beetles (Lucanidae) and true weevils (Curculionidae). These polyphagan beetle groups can be identified by the presence of cervical sclerites (hardened parts of the head used as points of attachment for muscles) absent in the other suborders. Adephaga contains about 10 families of largely predatory beetles, and includes ground beetles (Carabidae), water beetles (Dytiscidae) and whirligig beetles (Gyrinidae). In these insects, the testes are tubular and the first abdominal sternum (a plate of the exoskeleton) is divided by the hind coxae (the basal joints of the beetle's legs). Archostemata contains four families of mainly wood-eating beetles, including reticulated beetles (Cupedidae) and the telephone-pole beetle. The Archostemata have an exposed plate called the metatrochantin in front of the basal segment or coxa of the hind leg. Myxophaga contains about 65 described species in four families, mostly very small, including Hydroscaphidae and the genus Sphaerius. The myxophagan beetles are small and mostly alga-feeders. Their mouthparts are characteristic in lacking galeae and having a mobile tooth on their left mandible. The consistency of beetle morphology, in particular their possession of elytra, has long suggested that Coleoptera is monophyletic, though there have been doubts about the arrangement of the suborders, namely the Adephaga, Archostemata, Myxophaga and Polyphaga within that clade. The twisted-wing parasites, Strepsiptera, are thought to be a sister group to the beetles, having split from them in the Early Permian. Molecular phylogenetic analysis confirms that the Coleoptera are monophyletic. Duane McKenna et al. (2015) used eight nuclear genes for 367 species from 172 of 183 Coleopteran families. They split the Adephaga into 2 clades, Hydradephaga and Geadephaga, broke up the Cucujoidea into 3 clades, and placed the Lymexyloidea within the Tenebrionoidea. The Polyphaga appear to date from the Triassic. Most extant beetle families appear to have arisen in the Cretaceous. The cladogram is based on McKenna (2015). The number of species in each group (mainly superfamilies) is shown in parentheses, and boldface if over 10,000. English common names are given where possible. Dates of origin of major groups are shown in italics in millions of years ago (mya). External morphology Beetles are generally characterized by a particularly hard exoskeleton and hard forewings (elytra) not usable for flying. Almost all beetles have mandibles that move in a horizontal plane. The mouthparts are rarely suctorial, though they are sometimes reduced; the maxillae always bear palps. The antennae usually have 11 or fewer segments, except in some groups like the Cerambycidae (longhorn beetles) and the Rhipiceridae (cicada parasite beetles). The coxae of the legs are usually recessed within a coxal cavity.
The genitalic structures are telescoped into the last abdominal segment in all extant beetles. Beetle larvae can often be confused with those of other endopterygote groups. The beetle's exoskeleton is made up of numerous plates, called sclerites, separated by thin sutures. This design provides armored defenses while maintaining flexibility. The general anatomy of a beetle is quite uniform, although specific organs and appendages vary greatly in appearance and function between the many families in the order. Like all insects, beetles' bodies are divided into three sections: the head, the thorax, and the abdomen. Because there are so many species, identification is quite difficult, and relies on attributes including the shape of the antennae, the tarsal formulae and shapes of these small segments on the legs, the mouthparts, and the ventral plates (sterna, pleura, coxae). In many species accurate identification can only be made by examination of the unique male genitalic structures. Head The head, having mouthparts projecting forward or sometimes downturned, is usually heavily sclerotized and is sometimes very large. The eyes are compound and may display remarkable adaptability, as in the case of the aquatic whirligig beetles (Gyrinidae), where they are split to allow a view both above and below the waterline. A few longhorn beetles (Cerambycidae) and weevils as well as some fireflies (Rhagophthalmidae) have divided eyes, while many have eyes that are notched, and a few have ocelli, small, simple eyes usually farther back on the head (on the vertex); these are more common in larvae than in adults. The anatomical organization of the compound eyes may be modified and depends on whether a species is primarily crepuscular, or diurnally or nocturnally active. Ocelli are found in the adult carpet beetle (Dermestidae), some rove beetles (Omaliinae), and the Derodontidae. Beetle antennae are primarily organs of sensory perception and can detect motion, odour and chemical substances, but may also be used to physically feel a beetle's environment. Beetle families may use antennae in different ways. For example, when moving quickly, tiger beetles may not be able to see very well and instead hold their antennae rigidly in front of them in order to avoid obstacles. Certain Cerambycidae use antennae to balance, and blister beetles may use them for grasping. Some aquatic beetle species may use antennae for gathering air and passing it under the body whilst submerged. Equally, some families use antennae during mating, and a few species use them for defence. In the cerambycid Onychocerus albitarsis, the antennae have venom-injecting structures used in defence, which is unique among arthropods. Antennae vary greatly in form, sometimes between the sexes, but are often similar within any given family. Antennae may be clubbed, threadlike, angled, shaped like a string of beads, comb-like (either on one side or both, bipectinate), or toothed. The physical variation of antennae is important for the identification of many beetle groups. The Curculionidae have elbowed or geniculate antennae. Feather-like flabellate antennae are a restricted form found in the Rhipiceridae and a few other families. The Silphidae have capitate antennae with a spherical head at the tip. The Scarabaeidae typically have lamellate antennae with the terminal segments extended into long flat structures stacked together. The Carabidae typically have thread-like antennae.
The antennae arise between the eye and the mandibles; in the Tenebrionidae, the antennae rise in front of a notch that breaks the usually circular outline of the compound eye. They are segmented and usually consist of 11 parts; the first part is called the scape and the second part is the pedicel. The other segments are jointly called the flagellum. Beetles have mouthparts like those of grasshoppers. The mandibles appear as large pincers on the front of some beetles. The mandibles are a pair of hard, often tooth-like structures that move horizontally to grasp, crush, or cut food or enemies (see defence, below). Two pairs of finger-like appendages, the maxillary and labial palpi, are found around the mouth in most beetles, serving to move food into the mouth. In many species, the mandibles are sexually dimorphic, with those of the males enlarged enormously compared with those of females of the same species. Thorax The thorax is segmented into two discernible parts, the pro- and pterothorax. The pterothorax is the fused meso- and metathorax, which are commonly separated in other insect species, although flexibly articulated with the prothorax. When viewed from below, the thorax is that part from which all three pairs of legs and both pairs of wings arise. The abdomen is everything posterior to the thorax. When viewed from above, most beetles appear to have three clear sections, but this is deceptive: on the beetle's upper surface, the middle section is a hard plate called the pronotum, which is only the front part of the thorax; the back part of the thorax is concealed by the beetle's wings. This further segmentation is usually best seen on the abdomen. Legs The multisegmented legs end in two to five small segments called tarsi. Like many other insect orders, beetles have claws, usually one pair, on the end of the last tarsal segment of each leg. While most beetles use their legs for walking, legs have been variously adapted for other uses. In aquatic beetles, including the Dytiscidae (diving beetles), Haliplidae, and many species of Hydrophilidae, the legs, often the last pair, are modified for swimming, typically with rows of long hairs. Male diving beetles have suctorial cups on their forelegs that they use to grasp females. Other beetles have fossorial legs that are widened and often spined for digging. Species with such adaptations are found among the scarabs, ground beetles, and clown beetles (Histeridae). The hind legs of some beetles, such as flea beetles (within Chrysomelidae) and flea weevils (within Curculionidae), have enlarged femurs that help them leap. Wings The forewings of beetles are not used for flight, but form elytra which cover the hind part of the body and protect the hindwings. The elytra are usually hard shell-like structures which must be raised to allow the hind wings to move for flight. However, in the soldier beetles (Cantharidae), the elytra are soft, earning this family the name of leatherwings. Other soft-winged beetles include the net-winged beetle Calopteron discrepans, which has brittle wings that rupture easily in order to release chemicals for defence. Beetles' flight wings are crossed with veins and are folded after landing, often along these veins, and stored below the elytra. A fold (jugum) of the membrane at the base of each wing is characteristic. Some beetles have lost the ability to fly. These include some ground beetles (Carabidae) and some true weevils (Curculionidae), as well as desert- and cave-dwelling species of other families.
Many have the two elytra fused together, forming a solid shield over the abdomen. In a few families, both the ability to fly and the elytra have been lost, as in the glow-worms (Phengodidae), where the females resemble larvae throughout their lives. The presence of elytra and wings does not always indicate that the beetle will fly. For example, the tansy beetle walks between habitats despite being physically capable of flight. Abdomen The abdomen is the section behind the metathorax, made up of a series of rings, each with a hole for breathing and respiration, called a spiracle, comprising three different segmented sclerites: the tergum, pleura, and the sternum. The tergum in almost all species is membranous, or usually soft and concealed by the wings and elytra when not in flight. The pleura are usually small or hidden in some species, with each pleuron having a single spiracle. The sternum is the most widely visible part of the abdomen, being a more or less sclerotized segment. The abdomen itself does not have any appendages, but some (for example, Mordellidae) have articulating sternal lobes. Anatomy and physiology Digestive system The digestive system of beetles is primarily adapted for a herbivorous diet. Digestion takes place mostly in the anterior midgut, although in predatory groups like the Carabidae, most digestion occurs in the crop by means of midgut enzymes. In the Elateridae, the larvae are liquid feeders that extraorally digest their food by secreting enzymes. The alimentary canal basically consists of a short, narrow pharynx, a widened expansion, the crop, and a poorly developed gizzard. This is followed by the midgut, which varies in dimensions between species, with a large amount of cecum, and the hindgut, of varying length. There are typically four to six Malpighian tubules. Nervous system The nervous system in beetles contains all the types found in insects, varying between different species: from forms in which three thoracic and seven or eight abdominal ganglia can be distinguished, to forms in which all the thoracic and abdominal ganglia are fused into a composite structure. Respiratory system Like most insects, beetles inhale air, for the oxygen it contains, and exhale carbon dioxide, via a tracheal system. Air enters the body through spiracles, and circulates within the haemocoel in a system of tracheae and tracheoles, through whose walls the gases can diffuse. Diving beetles, such as the Dytiscidae, carry a bubble of air with them when they dive. Such a bubble may be contained under the elytra or against the body by specialized hydrophobic hairs. The bubble covers at least some of the spiracles, permitting air to enter the tracheae. The function of the bubble is not only to contain a store of air but to act as a physical gill. The air that it traps is in contact with oxygenated water, so as the animal's consumption depletes the oxygen in the bubble, more oxygen can diffuse in to replenish it. Carbon dioxide is more soluble in water than either oxygen or nitrogen, so it readily diffuses out of the bubble into the surrounding water. Nitrogen is the most plentiful gas in the bubble, and the least soluble, so it constitutes a relatively static component of the bubble and acts as a stable medium for respiratory gases to accumulate in and pass through. Occasional visits to the surface are sufficient for the beetle to re-establish the constitution of the bubble. Circulatory system Like other insects, beetles have open circulatory systems, based on hemolymph rather than blood.
As in other insects, a segmented tube-like heart is attached to the dorsal wall of the haemocoel. It has paired inlets or ostia at intervals down its length, and circulates the hemolymph from the main cavity of the haemocoel and out through the anterior cavity in the head. Specialized organs Different glands are specialized for producing different pheromones to attract mates. Pheromones from species of Rutelinae are produced from epithelial cells lining the inner surface of the apical abdominal segments; amino acid-based pheromones of Melolonthinae are produced from eversible glands on the abdominal apex. Other species produce different types of pheromones. Dermestids produce esters, and species of Elateridae produce fatty acid-derived aldehydes and acetates. To attract a mate, fireflies (Lampyridae) use modified fat body cells with transparent surfaces backed with reflective uric acid crystals to produce light by bioluminescence. Light production is highly efficient, by oxidation of luciferin catalyzed by enzymes (luciferases) in the presence of adenosine triphosphate (ATP) and oxygen, producing oxyluciferin, carbon dioxide, and light. Tympanal organs or hearing organs, which consist of a membrane (tympanum) stretched across a frame backed by an air sac and associated sensory neurons, are found in two families. Several species of the genus Cicindela (Carabidae) have hearing organs on the dorsal surfaces of their first abdominal segments beneath the wings; two tribes in the Dynastinae (within the Scarabaeidae) have hearing organs just beneath their pronotal shields or neck membranes. Both families are sensitive to ultrasonic frequencies, with strong evidence indicating they function to detect the presence of bats by their ultrasonic echolocation. Reproduction and development Beetles are members of the superorder Endopterygota, and accordingly most of them undergo complete metamorphosis. The typical form of metamorphosis in beetles passes through four main stages: the egg, the larva, the pupa, and the imago or adult. The larvae are commonly called grubs and the pupa is sometimes called the chrysalis. In some species, the pupa may be enclosed in a cocoon constructed by the larva towards the end of its final instar. Some beetles, such as typical members of the families Meloidae and Rhipiphoridae, go further, undergoing hypermetamorphosis in which the first instar takes the form of a triungulin. Mating Some beetles have intricate mating behaviour. Pheromone communication is often important in locating a mate. Different species use different pheromones. Scarab beetles such as the Rutelinae use pheromones derived from fatty acid synthesis, while other scarabs such as the Melolonthinae use amino acids and terpenoids. Another way beetles find mates is seen in the fireflies (Lampyridae) which are bioluminescent, with abdominal light-producing organs. The males and females engage in a complex dialogue before mating; each species has a unique combination of flight patterns, duration, composition, and intensity of the light produced. Before mating, males and females may stridulate, or vibrate the objects they are on. In the Meloidae, the male climbs onto the dorsum of the female and strokes his antennae on her head, palps, and antennae. In Eupompha, the male draws his antennae along his longitudinal vertex. They may not mate at all if they do not perform the precopulatory ritual. This mating behaviour may be different amongst dispersed populations of the same species.
For example, the mating of a Russian population of tansy beetle (Chrysolina graminis) is preceded by an elaborate ritual involving the male tapping the female's eyes, pronotum and antennae with its antennae, which is not evident in the population of this species in the United Kingdom. Competition can play a part in the mating rituals of species such as burying beetles (Nicrophorus), the insects fighting to determine which can mate. Many male beetles are territorial and fiercely defend their territories from intruding males. In such species, the male often has horns on the head or thorax, making its body length greater than that of a female. Copulation is generally quick, but in some cases lasts for several hours. During copulation, sperm cells are transferred to the female to fertilize the egg. Life cycle Egg Essentially all beetles lay eggs, though some myrmecophilous Aleocharinae and some Chrysomelinae which live in mountains or the subarctic are ovoviviparous, laying eggs which hatch almost immediately. Beetle eggs generally have smooth surfaces and are soft, though the Cupedidae have hard eggs. Eggs vary widely between species: the eggs tend to be small in species with many instars (larval stages),
Its penultimate larval stage is the pseudo-pupa or the coarcate larva, which will overwinter and pupate until the next spring. The larval period can vary widely. The fungus-feeding staphylinid Phanerota fasciata undergoes three moults in 3.2 days at room temperature while Anisotoma sp. (Leiodidae) completes its larval stage in the fruiting body of slime mold in 2 days and possibly represents the fastest-growing beetles. Dermestid beetles such as Trogoderma inclusum can remain in an extended larval state under unfavourable conditions, even reducing their size between moults. A larva is reported to have survived for 3.5 years in an enclosed container. Pupa and adult As with all endopterygotes, beetle larvae pupate, and from these pupae emerge fully formed, sexually mature adult beetles, or imagos. Pupae never have mandibles (they are adecticous). In most pupae, the appendages are not attached to the body and are said to be exarate; in a few beetles (Staphylinidae, Ptiliidae etc.) the appendages are fused with the body (termed obtect pupae). Adults have extremely variable lifespans, from weeks to years, depending on the species. Some wood-boring beetles can have extremely long life-cycles. It is believed that when furniture or house timbers are infested by beetle larvae, the timber already contained the larvae when it was first sawn up. A birch bookcase 40 years old released adult Eburia quadrigeminata (Cerambycidae), while Buprestis aurulenta and other Buprestidae have been documented as emerging as much as 51 years after manufacture of wooden items. Behaviour Locomotion The elytra allow beetles to both fly and move through confined spaces, doing so by folding the delicate wings under the elytra while not flying, and folding their wings out just before takeoff. The unfolding and folding of the wings is operated by muscles attached to the wing base; as long as the tension on the radial and cubital veins remains, the wings remain straight. In some day-flying species (for example, Buprestidae, Scarabaeidae), flight does not include large amounts of lifting of the elytra, with the metathoracic wings extended under the lateral elytra margins. The altitude reached by beetles in flight varies. One study investigating the flight altitude of the ladybird species Coccinella septempunctata and Harmonia axyridis using radar showed that, whilst the majority in flight over a single location were at 150–195 m above ground level, some reached altitudes of over 1100 m. Many rove beetles have greatly reduced elytra, and while they are capable of flight, they most often move on the ground: their soft bodies and strong abdominal muscles make them flexible, easily able to wriggle into small cracks. Aquatic beetles use several techniques for retaining air beneath the water's surface. Diving beetles (Dytiscidae) hold air between the abdomen and the elytra when diving. Hydrophilidae have hairs on their under surface that retain a layer of air against their bodies. Adult crawling water beetles use both their elytra and their hind coxae (the basal segment of the back legs) in air retention, while whirligig beetles simply carry an air bubble down with them whenever they dive. Communication Beetles have a variety of ways to communicate, including the use of pheromones. The mountain pine beetle emits a pheromone to attract other beetles to a tree. The mass of beetles are able to overcome the chemical defenses of the tree. After the tree's defenses have been exhausted, the beetles emit an anti-aggregation pheromone.
This species can stridulate to communicate, but others may use sound to defend themselves when attacked. Parental care Parental care is found in a few families of beetle, perhaps for protection against adverse conditions and predators. The rove beetle Bledius spectabilis lives in salt marshes, so the eggs and larvae are endangered by the rising tide. The maternal beetle patrols the eggs and larvae, burrowing to keep them from flooding and asphyxiating, and protects them from the predatory carabid beetle Dicheirotrichus gustavi and from the parasitoidal wasp Barycnemis blediator, which kills some 15% of the larvae. Burying beetles are attentive parents, and participate in cooperative care and feeding of their offspring. Both parents work to bury a small animal carcass to serve as a food resource for their young and build a brood chamber around it. The parents prepare the carcass and protect it from competitors and from early decomposition. After their eggs hatch, the parents keep the larvae clean of fungus and bacteria and help the larvae feed by regurgitating food for them. Some dung beetles provide parental care, collecting herbivore dung and laying eggs within that food supply, an instance of mass provisioning. Some species do not leave after this stage, but remain to safeguard their offspring. Most species of beetles do not display parental care behaviors after the eggs have been laid. Subsociality, where females guard their offspring, is well-documented in two subfamilies of Chrysomelidae, Cassidinae and Chrysomelinae. Eusociality Eusociality involves cooperative brood care (including brood care of offspring from other individuals), overlapping generations within a colony of adults, and a division of labour into reproductive and non-reproductive groups. Few organisms outside Hymenoptera exhibit this behavior; the only beetle to do so is the weevil Austroplatypus incompertus. This Australian species lives in horizontal networks of tunnels, in the heartwood of Eucalyptus trees. It is one of more than 300 species of wood-boring ambrosia beetles which distribute the spores of ambrosia fungi. The fungi grow in the beetles' tunnels, providing food for the beetles and their larvae; female offspring remain in the tunnels and maintain the fungal growth, probably never reproducing. Cooperative brood care is also found in the bess beetles (Passalidae) where the larvae feed on the semi-digested faeces of the adults. Feeding Beetles are able to exploit a wide diversity of food sources available in their many habitats. Some are omnivores, eating both plants and animals. Other beetles are highly specialized in their diet. Many species of leaf beetles, longhorn beetles, and weevils are very host-specific, feeding on only a single species of plant. Ground beetles and rove beetles (Staphylinidae), among others, are primarily carnivorous and catch and consume many other arthropods and small prey, such as earthworms and snails. While most predatory beetles are generalists, a few species have more specific prey requirements or preferences. In some species, digestive ability relies upon a symbiotic relationship with fungi - some beetles have yeasts living in their guts, including some yeasts previously undiscovered anywhere else. Decaying organic matter is a primary diet for many species. This can range from dung, which is consumed by coprophagous species (such as certain scarab beetles in the Scarabaeidae), to dead animals, which are eaten by necrophagous species (such as the carrion beetles, Silphidae).
Some beetles found in dung and carrion are in fact predatory. These include members of the Histeridae and Silphidae, preying on the larvae of coprophagous and necrophagous insects. Many beetles feed under bark; some feed on wood, while others feed on fungi growing on wood or leaf-litter. Some beetles have special mycangia, structures for the transport of fungal spores. Ecology Anti-predator adaptations Beetles, both adults and larvae, are the prey of many animal predators including mammals from bats to rodents, birds, lizards, amphibians, fishes, dragonflies, robberflies, reduviid bugs, ants, other beetles, and spiders. Beetles use a variety of anti-predator adaptations to defend themselves. These include camouflage and mimicry against predators that hunt by sight, toxicity, and defensive behaviour. Camouflage Camouflage is common and widespread among beetle families, especially those that feed on wood or vegetation, such as leaf beetles (Chrysomelidae, which are often green) and weevils. In some species, sculpturing or various coloured scales or hairs cause beetles such as the avocado weevil Heilipus apiatus to resemble bird dung or other inedible objects. Many beetles that live in sandy environments blend in with the coloration of that substrate. Mimicry and aposematism Some longhorn beetles (Cerambycidae) are effective Batesian mimics of wasps. Beetles may combine coloration with behavioural mimicry, acting like the wasps they already closely resemble. Many other beetles, including ladybirds, blister beetles, and lycid beetles, secrete distasteful or toxic substances to make them unpalatable or poisonous, and are often aposematic, with bright or contrasting coloration warning off predators; many beetles and other insects mimic these chemically protected species. Chemical defense is important in some species, usually being advertised by bright aposematic colours. Some Tenebrionidae use their posture for releasing noxious chemicals to warn off predators. Chemical defences may serve purposes other than just protection from vertebrates, such as protection from a wide range of microbes. Some species sequester chemicals from the plants they feed on, incorporating them into their own defenses. Other species have special glands to produce deterrent chemicals. The defensive glands of carabid ground beetles produce a variety of hydrocarbons, aldehydes, phenols, quinones, esters, and acids released from an opening at the end of the abdomen. African carabid beetles (for example, Anthia) employ the same chemicals as ants: formic acid. Bombardier beetles have well-developed pygidial glands that empty from the sides of the intersegment membranes between the seventh and eighth abdominal segments. The gland is made of two containing chambers, one holding a solution of hydroquinones and hydrogen peroxide, the other holding catalase and peroxidase enzymes. When the two are mixed they react explosively, reaching a temperature of around , as the hydroquinones are oxidised to quinones and the hydrogen peroxide is broken down into water and oxygen. The oxygen propels the noxious chemical spray as a jet that can be aimed accurately at predators. Other defences Large ground-dwelling beetles such as Carabidae, the rhinoceros beetle and the longhorn beetles defend themselves using strong mandibles, or heavily sclerotised (armored) spines or horns to deter or fight off predators. Many species of weevil that feed out in the open on leaves of plants react to attack by employing a drop-off reflex. 
Some combine it with thanatosis, in which they close up their appendages and "play dead". The click beetles (Elateridae) can suddenly catapult themselves out of danger by releasing the energy stored by a click mechanism, which consists of a stout spine on the prosternum and a matching groove in the mesosternum. Some species startle an attacker by producing sounds through a process known as stridulation. Parasitism A few species of beetles are ectoparasitic on mammals. One such species, Platypsyllus castoris, parasitises beavers (Castor spp.). This beetle lives as a parasite both as a larva and as an adult, feeding on epidermal tissue and possibly on skin secretions and wound exudates. They are strikingly flattened dorsoventrally, no doubt as an adaptation for slipping between the beavers' hairs. They are wingless and eyeless, as are many other ectoparasites. Others are kleptoparasites of other invertebrates, such as the small hive beetle (Aethina tumida) that infests honey bee nests, while many species are parasitic inquilines or commensal in the nests of ants. A few groups of beetles are primary parasitoids of other insects, feeding off, and eventually killing, their hosts. Pollination Beetle-pollinated flowers are usually large, greenish or off-white in color, and heavily scented. Scents may be spicy, fruity, or similar to decaying organic material. Beetles were most likely the first insects to pollinate flowers. Most beetle-pollinated flowers are flattened or dish-shaped, with pollen easily accessible, although they may include traps to keep the beetle longer. The plants' ovaries are usually well protected from the biting mouthparts of their pollinators. The beetle families that habitually pollinate flowers are the Buprestidae, Cantharidae, Cerambycidae, Cleridae, Dermestidae, Lycidae, Melyridae, Mordellidae, Nitidulidae and Scarabaeidae. Beetles may be particularly important in some parts of the world, such as the semiarid areas of southern Africa and southern California, and the montane grasslands of KwaZulu-Natal in South Africa. Mutualism Mutualism is well known in a few beetles, such as the ambrosia beetle, which partners with fungi to digest the wood of dead trees. The beetles excavate tunnels in dead trees in which they cultivate fungal gardens, their sole source of nutrition. After landing on a suitable tree, an ambrosia beetle excavates a tunnel in which it releases spores of its fungal symbiont. The fungus penetrates the plant's xylem tissue, digests it, and concentrates the nutrients on and near the surface of the beetle gallery, so the weevils and the fungus both benefit. The beetles cannot eat the wood due to toxins, and use their relationship with fungi to help overcome the defenses of the host tree in order to provide nutrition for their larvae. Chemically mediated by a bacterially produced polyunsaturated peroxide, this mutualistic relationship between the beetle and the fungus is coevolved. Tolerance of extreme environments About 90% of beetle species enter a period of adult diapause, a quiet phase with reduced metabolism to tide over unfavourable environmental conditions. Adult diapause is the most common form of diapause in Coleoptera. To endure the period without food (often lasting many months), adults prepare by accumulating reserves of lipids, glycogen, proteins and other substances needed for resistance to future hazardous changes of environmental conditions. 
This diapause is induced by signals heralding the arrival of the unfavourable season; usually the cue is photoperiodic. Short (decreasing) day length serves as a signal of approaching winter and induces winter diapause (hibernation). A study of hibernation in the Arctic beetle Pterostichus brevicornis showed that the body fat levels of adults were highest in autumn with the alimentary canal filled with food, but empty by the end of January. This loss of body fat was a gradual process, occurring in combination with dehydration. All insects are poikilothermic, so the ability of a few beetles to live in extreme environments depends on their resilience to unusually high or low temperatures. The bark beetle Pityogenes chalcographus can survive whilst overwintering beneath tree bark; the Alaskan beetle Cucujus clavipes puniceus is able to withstand ; its larvae may survive . At these low temperatures, the formation of ice crystals in internal fluids is the biggest threat to the survival of beetles, but this is prevented through the production of antifreeze proteins that stop water molecules from grouping together. The low temperatures experienced by Cucujus clavipes can be survived through their deliberate dehydration in conjunction with the antifreeze proteins. This concentrates the antifreezes severalfold. The hemolymph of the mealworm beetle Tenebrio molitor contains several antifreeze proteins. The Alaskan beetle Upis ceramboides can survive −60 °C: its cryoprotectants are xylomannan, a molecule consisting of a sugar bound to a fatty acid, and the sugar-alcohol threitol. Conversely, desert-dwelling beetles are adapted to tolerate high temperatures. For example, the tenebrionid beetle Onymacris rugatipennis can withstand . Tiger beetles in hot, sandy areas are often whitish (for example, Habroscelimorpha dorsalis), to reflect more heat than a darker colour would. These beetles also exhibit behavioural adaptations to tolerate the heat: they are able to stand erect on their tarsi to hold their bodies away from the hot ground, seek shade, and turn to face the sun so that only the front parts of their heads are directly exposed. The fogstand beetle of the Namib Desert, Stenocara gracilipes, is able to collect water from fog, as its elytra have a textured surface combining hydrophilic (water-loving) bumps and waxy, hydrophobic troughs. The beetle faces the early morning breeze, holding up its abdomen; droplets condense on the elytra and run along ridges towards its mouthparts. Similar adaptations are found in several other Namib desert beetles such as Onymacris unguicularis. Some terrestrial beetles that exploit shoreline and floodplain habitats have physiological adaptations for surviving floods. In the event of flooding, adult beetles may be mobile enough to move away from the water, but larvae and pupae often cannot. Adults of Cicindela togata are unable to survive immersion in water, but larvae are able to survive a prolonged period, up to 6 days, of anoxia during floods. Anoxia tolerance in the larvae may be sustained by switching to anaerobic metabolic pathways or by reducing the metabolic rate. Anoxia tolerance in the adult carabid beetle Pelophilia borealis was tested in laboratory conditions, and it was found that the beetles could survive a continuous period of up to 127 days in an atmosphere of 99.9% nitrogen at 0 °C. Migration Many beetle species undertake annual mass movements which are termed migrations. These include the pollen beetle Meligethes aeneus and many species of coccinellids. 
These mass movements may also be opportunistic, in search of food, rather than seasonal. A 2008 study of an unusually large outbreak of the mountain pine beetle (Dendroctonus ponderosae) in British Columbia found that beetles were capable of flying 30–110 km per day in densities of up to 18,600 beetles per hectare. Relationship to humans In ancient cultures Several species of dung beetle, especially the sacred scarab, Scarabaeus sacer, were revered in Ancient Egypt. The hieroglyphic image of the beetle may have had existential, fictional, or ontological significance. Images of the scarab in bone, ivory, stone, Egyptian faience, and precious metals are known from the Sixth Dynasty and up to the period of Roman rule. The scarab was of prime significance in the funerary cult of ancient Egypt. The scarab was linked to Khepri, the god of the rising sun, from the supposed resemblance of the rolling of the dung ball by the beetle to the rolling of the sun by the god. Some of ancient Egypt's neighbors adopted the scarab motif for seals of varying types. The best-known of these are the Judean LMLK seals, where eight of 21 designs contained scarab beetles, which were used exclusively to stamp impressions on storage jars during the reign of Hezekiah. Beetles are mentioned as a symbol of the sun, as in ancient Egypt, in Plutarch's 1st century Moralia. The Greek Magical Papyri of the 2nd century BC to the 5th century AD describe scarabs as an ingredient in a spell. Pliny the Elder discusses beetles in his Natural History, describing the stag beetle: "Some insects, for the preservation of their wings, are covered with (elytra)—the beetle, for instance, the wing of which is peculiarly fine and frail. To these insects a sting has been denied by Nature; but in one large kind we find horns of a remarkable length, two-pronged at the extremities, and forming pincers, which the animal closes when it is its intention to bite." The stag beetle is recorded in a Greek myth by Nicander and recalled by Antoninus Liberalis in which Cerambus is turned into a beetle: "He can be seen on trunks and has hook-teeth, ever moving his jaws together. He is black, long and has hard wings like a great dung beetle". The story concludes with the comment that the beetles were used as toys by young boys, and that the head was removed and worn as a pendant. As pests About 75% of beetle species are phytophagous in both the larval and adult stages. Many feed on economically important plants and stored plant products, including trees, cereals, tobacco, and dried fruits. Some, such as the boll weevil, which feeds on cotton buds and flowers, can cause extremely serious damage to agriculture. The boll weevil crossed the Rio Grande near Brownsville, Texas, to enter the United States from Mexico around 1892, and had reached southeastern Alabama by 1915. By the mid-1920s, it had entered all cotton-growing regions in the US, traveling per year. It remains the most destructive cotton pest in North America. Mississippi State University has estimated that, since the boll weevil entered the United States, it has cost cotton producers about $13 billion, and in recent times about $300 million per year. The bark beetle, elm leaf beetle and the Asian longhorned beetle (Anoplophora glabripennis) are among the species that attack elm trees. Bark beetles (Scolytidae) carry Dutch elm disease as they move from infected breeding sites to healthy trees. The disease has devastated elm trees across Europe and North America. 
Some species of beetle have evolved immunity to insecticides. For example, the Colorado potato beetle, Leptinotarsa decemlineata, is a destructive pest of potato plants. Its hosts include other members of the Solanaceae, such as nightshade, tomato, eggplant and capsicum, as well as the potato. Different populations have between them developed resistance to all major classes of insecticide. The Colorado potato beetle was evaluated as a tool of entomological warfare during World War II, the idea being to use the beetle and its larvae to damage the crops of enemy nations. Germany tested its Colorado potato beetle weaponisation program south of Frankfurt, releasing 54,000 beetles. The death watch beetle, Xestobium rufovillosum (Ptinidae), is a serious pest of older wooden buildings in Europe. It attacks hardwoods such as oak and chestnut, always where some fungal decay has taken or is taking place. The actual introduction of the pest into buildings is thought to take place at the time of construction. Other pests include the coconut hispine beetle, Brontispa longissima, which feeds on young leaves, seedlings and mature coconut trees, causing serious economic damage in the Philippines. The mountain pine beetle is a destructive pest of mature or weakened lodgepole pine, sometimes affecting large areas of Canada. As beneficial resources Beetles can benefit human economies by controlling the populations of pests. The larvae and adults of some species of lady beetles (Coccinellidae) feed on aphids that are pests. Other lady beetles feed on scale insects, whitefly and mealybugs. If normal food sources are scarce, they may feed on small caterpillars, young plant bugs, or honeydew and nectar. Ground beetles (Carabidae) are common predators of many insect pests, including fly eggs, caterpillars, and wireworms. Ground beetles can help to control weeds by eating their seeds in the soil, reducing the need for herbicides to protect crops. The effectiveness of some species in reducing certain plant populations has resulted in the deliberate introduction of beetles in order to control weeds. For example, the genus Zygogramma is native to North America but has been used to control Parthenium hysterophorus in India and Ambrosia artemisiifolia in Russia. Dung beetles (Scarabaeidae) have been successfully used to reduce the populations of pestilent flies, such as Musca vetustissima and Haematobia exigua, which are serious pests of cattle in Australia. The beetles make the dung unavailable to breeding pests by quickly rolling and burying it in the soil, with the added effect of improving soil fertility, tilth, and nutrient cycling. The Australian Dung Beetle Project (1965–1985) introduced species of dung beetle to Australia from South Africa and Europe to reduce populations of Musca vetustissima, following successful trials of this technique in Hawaii. The American Institute of Biological Sciences reports that dung beetles save the United States cattle industry an estimated US$380 million annually through burying above-ground livestock feces. The Dermestidae are often used in taxidermy and in the preparation of scientific specimens, to clean soft tissue from bones. Larvae feed on and remove cartilage along with other soft tissue. As food and medicine Beetles are the most widely eaten insects, with about 344 species used as food, usually at the larval stage. The mealworm (the larva of the darkling beetle) and the rhinoceros beetle are among the species commonly eaten. 
A wide range of species is also used in folk medicine to treat those suffering from a variety of disorders and illnesses, though this is done without clinical studies supporting the efficacy of such treatments. As biodiversity indicators Due to their habitat specificity, many species of beetles have been suggested as suitable indicators, their presence, numbers, or absence providing a measure of habitat quality. Predatory beetles such as the tiger beetles (Cicindelidae) have found scientific use as an indicator taxon for measuring regional patterns of biodiversity. They are suitable for this as their taxonomy is stable; their life history is well described; they are large and simple to observe when visiting a site; they occur around the world in many habitats, with species specialised to particular habitats; and their occurrence by species accurately indicates other species, both vertebrate and invertebrate. Depending on the habitat, many other groups, such as rove beetles in human-modified habitats, dung beetles in savannas, and saproxylic beetles in forests, have been suggested as potential indicator species. In art and adornment Many beetles have durable elytra that have been used as a material in art, with beetlewing art the best example. Sometimes, they are incorporated into ritual objects for their religious significance. Whole beetles, either as-is or encased in clear plastic, are made into objects ranging from cheap souvenirs such as key chains to expensive fine-art jewellery. In parts of Mexico, beetles of the genus Zopherus are made into living brooches by attaching costume jewelry and golden chains, which is made possible by the exceptionally hard elytra and sedentary habits of the genus. In entertainment Fighting beetles are used for entertainment and gambling. This sport exploits the territorial behavior and mating competition of certain species of large beetles. In the Chiang Mai district of northern Thailand, male Xylotrupes rhinoceros beetles are caught in the wild and trained for fighting. Females are held inside a log to stimulate the fighting males with their pheromones. These fights may be competitive and involve the gambling of both money and property. In South Korea the Dytiscidae species Cybister tripunctatus is used in a roulette-like game. Beetles are sometimes used as instruments: the Onabasulu of Papua New Guinea historically used the "hugu" weevil Rhynchophorus ferrugineus as a musical instrument by letting the human mouth serve as a variable resonance chamber for the wing vibrations of the live adult beetle. As pets Some species of beetle are kept as pets; for example, diving beetles (Dytiscidae) may be kept in a domestic freshwater tank. In Japan the practice of keeping horned rhinoceros beetles (Dynastinae) and stag beetles (Lucanidae) is particularly popular amongst young boys. Such is the popularity in Japan that vending machines dispensing live beetles were developed in 1999, each holding up to 100 stag beetles. As things to collect Beetle collecting became extremely popular in the Victorian era. The naturalist Alfred Russel Wallace collected (by his own count) a total of 83,200 beetles during the eight years described in his 1869 book The Malay Archipelago, including 2,000 species new to science. As inspiration for technologies Several coleopteran adaptations have attracted interest in biomimetics with possible commercial applications. 
The bombardier beetle's powerful repellent spray has inspired the development of a fine mist spray technology, claimed to have a low carbon impact compared to aerosol sprays. Moisture harvesting behavior by the Namib desert beetle (Stenocara gracilipes) has inspired a self-filling water bottle which utilises hydrophilic and hydrophobic materials to benefit people living in dry regions with no regular rainfall. Living beetles have been
air show crash of the competing Soviet Tupolev Tu-144 had shocked potential buyers, and public concern over the environmental issues presented by a supersonic aircraft—the sonic boom, take-off noise and pollution—had produced a shift in public opinion of SSTs. By 1976 the remaining buyers were from four countries: Britain, France, China, and Iran. Only Air France and British Airways (the successor to BOAC) took up their orders, with the two governments taking a cut of any profits made. The United States government cut federal funding for the Boeing 2707, its rival supersonic transport programme, in 1971; Boeing did not complete its two 2707 prototypes. The US, India, and Malaysia all ruled out Concorde supersonic flights over the noise concern, although some of these restrictions were later relaxed. Professor Douglas Ross characterised restrictions placed upon Concorde operations by President Jimmy Carter's administration as having been an act of protectionism of American aircraft manufacturers. Design General features Concorde is an ogival delta-winged aircraft with four Olympus engines based on those employed in the RAF's Avro Vulcan strategic bomber. It is one of the few commercial aircraft to employ a tailless design (the Tupolev Tu-144 being another). Concorde was the first airliner to have a (in this case, analogue) fly-by-wire flight-control system; the avionics system Concorde used was unique because it was the first commercial aircraft to employ hybrid circuits. The principal designer for the project was Pierre Satre, with Sir Archibald Russell as his deputy. Concorde pioneered the following technologies: For high speed and optimisation of flight: Double delta (ogee/ogival) shaped wings Variable engine air intake ramp system controlled by digital computers Supercruise capability Thrust-by-wire engines, predecessor of today's FADEC-controlled engines Droop nose for better landing visibility For weight-saving and enhanced performance: Mach 2.02 (~) cruising speed for optimum fuel consumption (supersonic drag minimum and turbojet engines are more efficient at higher speed) Fuel consumption at and at altitude of was . Mainly aluminium construction using a high-temperature alloy similar to that developed for aero-engine pistons. This material gave low weight and allowed conventional manufacture (higher speeds would have ruled out aluminium) Full-regime autopilot and autothrottle allowing "hands off" control of the aircraft from climb out to landing Fully electrically controlled analogue fly-by-wire flight control systems High-pressure hydraulic system using for lighter hydraulic components, tripled independent systems ("Blue", "Green", and "Yellow") for redundancy, with an emergency ram air turbine (RAT) stored in the port-inner elevon jack fairing supplying "Green" and "Yellow" as backup. Complex air data computer (ADC) for the automated monitoring and transmission of aerodynamic measurements (total pressure, static pressure, angle of attack, side-slip). Pitch trim by shifting fuel fore-and-aft for centre-of-gravity (CoG) control at the approach to Mach 1 and above with no drag penalty (illustrated in the sketch below). Pitch trimming by fuel transfer had been used since 1958 on the B-58 supersonic bomber. Parts made using "sculpture milling", reducing the part count while saving weight and adding strength. No auxiliary power unit, as Concorde would only visit large airports where ground air start carts are available. 
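The fuel-transfer pitch trim listed above can be illustrated with a simple moment balance. The sketch below is illustrative only: the function name, station positions, masses and transfer distance are assumed round figures chosen for the example, not Concorde data.

```python
# Illustrative sketch of centre-of-gravity (CoG) control by fuel transfer.
# All figures are assumed round numbers for illustration, not Concorde data.

def cg_position(masses_kg, positions_m):
    """Centre of gravity of lumped masses measured along the fuselage axis (metres)."""
    return sum(m * x for m, x in zip(masses_kg, positions_m)) / sum(masses_kg)

if __name__ == "__main__":
    # Assumed two-station model: airframe, payload and remaining fuel lumped
    # at 40 m from the nose, plus 10 t of trim fuel in a forward tank at 30 m.
    before = cg_position([170_000, 10_000], [40.0, 30.0])
    # The same 10 t of trim fuel transferred to a rear tank at 45 m.
    after = cg_position([170_000, 10_000], [40.0, 45.0])
    # Moving the fuel aft shifts the CoG about 0.8 m rearwards, countering the
    # rearward movement of the centre of pressure as the aircraft goes supersonic.
    print(f"CoG moves {after - before:.2f} m aft")
```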
Powerplant A symposium titled "Supersonic-Transport Implications" was hosted by the Royal Aeronautical Society on 8 December 1960. Various views were put forward on the likely type of powerplant for a supersonic transport, such as podded or buried installation and turbojet or ducted-fan engines. Boundary layer management in the podded installation was put forward as simpler, requiring only an inlet cone, but Dr. Seddon of the RAE saw "a future in a more sophisticated integration of shapes" in a buried installation. Another concern highlighted the case with two or more engines situated behind a single intake: an intake failure could lead to a double or triple engine failure. The advantage of the ducted fan over the turbojet was reduced airport noise, but it carried considerable economic penalties, as its larger cross-section produced excessive drag. At that time it was considered that the noise from a turbojet optimised for supersonic cruise could be reduced to an acceptable level using noise suppressors as used on subsonic jets. The powerplant configuration selected for Concorde, and its development to a certificated design, can be seen in light of the above symposium topics (which highlighted airfield noise, boundary layer management and interactions between adjacent engines) and the requirement that the powerplant, at Mach 2, tolerate combinations of pushovers, sideslips, pull-ups and throttle slamming without surging. Extensive development testing with design changes and changes to intake and engine control laws would address most of the issues except airfield noise and the interaction between adjacent powerplants at speeds above Mach 1.6, which meant Concorde "had to be certified aerodynamically as a twin-engined aircraft above Mach 1.6". Rolls-Royce had a design proposal, the RB.169, for the aircraft at the time of Concorde's initial design, but "to develop a brand-new engine for Concorde would have been prohibitively expensive", so an existing engine, already flying in the supersonic BAC TSR-2 strike bomber prototype, was chosen. It was the BSEL Olympus Mk 320 turbojet, a development of the Bristol engine first used for the subsonic Avro Vulcan bomber. Great confidence was placed in being able to reduce the noise of a turbojet, and massive strides by SNECMA in silencer design were reported during the programme. However, by 1974 the spade silencers which projected into the exhaust were reported to be ineffective, but "entry-into-service aircraft are likely to meet their noise guarantees". The Olympus Mk.622 with reduced jet velocity was proposed to reduce the noise, but it was not developed. Situated behind the leading edge of the wing, the engine intake had the wing boundary layer ahead of it. Two-thirds of this was diverted, and the remaining third, which entered the intake, did not adversely affect intake efficiency, except during pushovers, when the boundary layer thickened ahead of the intake and caused surging. Extensive wind tunnel testing helped define leading edge modifications ahead of the intakes, which solved the problem. Each engine had its own intake, and the engine nacelles were paired with a splitter plate between them to minimise adverse behaviour of one powerplant influencing the other. Only above was an engine surge likely to affect the adjacent engine. Concorde needed to fly long distances to be economically viable; this required high efficiency from the powerplant. Turbofan engines were rejected due to their larger cross-section producing excessive drag. 
Olympus turbojet technology was available to be developed to meet the design requirements of the aircraft, although turbofans would be studied for any future SST. The aircraft used reheat (afterburners) only at take-off and to pass through the upper transonic regime to supersonic speeds, between Mach 0.95 and 1.7. Reheat was switched off at all other times. Due to jet engines being highly inefficient at low speeds, Concorde burned of fuel (almost 2% of the maximum fuel load) taxiing to the runway. The fuel used was Jet A-1. Due to the high thrust produced even with the engines at idle, only the two outer engines were run after landing for easier taxiing and less brake pad wear – at low weights after landing, the aircraft would not remain stationary with all four engines idling, requiring the brakes to be continuously applied to prevent the aircraft from rolling. The air intake design for Concorde's engines was especially critical. The intakes had to slow down supersonic inlet air to subsonic speeds with high pressure recovery to ensure efficient operation at cruising speed while providing low distortion levels (to prevent engine surge) and maintaining high efficiency for all likely ambient temperatures to be met in cruise. They had to provide adequate subsonic performance for diversion cruise and low engine-face distortion at take-off. They also had to provide an alternative path for excess intake air during engine throttling or shutdowns. The variable intake features required to meet all these requirements consisted of front and rear ramps, a dump door, an auxiliary inlet and a ramp bleed to the exhaust nozzle. As well as supplying air to the engine, the intake also supplied air through the ramp bleed to the propelling nozzle. The nozzle ejector (or aerodynamic) design, with variable exit area and secondary flow from the intake, contributed to good expansion efficiency from take-off to cruise. Concorde's Air Intake Control Units (AICUs) made use of a digital processor to provide the necessary accuracy for intake control. This was the first time a digital processor had been given full-authority control of an essential system in a passenger aircraft. It was developed by the Electronics and Space Systems (ESS) division of the British Aircraft Corporation after the analogue AICUs fitted to the prototype aircraft, developed by Ultra Electronics, were found to be insufficiently accurate for the tasks in hand. Engine failure causes problems on conventional subsonic aircraft; not only does the aircraft lose thrust on that side but the engine creates drag, causing the aircraft to yaw and bank in the direction of the failed engine. If this had happened to Concorde at supersonic speeds, it theoretically could have caused a catastrophic failure of the airframe. Although computer simulations predicted considerable problems, in practice Concorde could shut down both engines on the same side of the aircraft at Mach 2 without the predicted difficulties. During an engine failure the required air intake is virtually zero. So, on Concorde, engine failure was countered by the opening of the auxiliary spill door and the full extension of the ramps, which deflected the air downwards past the engine, gaining lift and minimising drag. Concorde pilots were routinely trained to handle double engine failure. Concorde's thrust-by-wire engine control system was developed by Ultra Electronics. Heating problems Air compression on the outer surfaces caused the cabin to heat up during flight. 
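A rough sense of why compression heats the skin can be had from the standard adiabatic relation for the stagnation (total) temperature of air brought to rest against the airframe; the ambient temperature used below is an assumed stratospheric value, not a figure taken from this article:

\[
T_0 = T_\infty\left(1 + \frac{\gamma - 1}{2}M^2\right) \approx 217\,\mathrm{K}\left(1 + 0.2 \times 2.02^2\right) \approx 394\,\mathrm{K} \approx 121\,^{\circ}\mathrm{C},
\]

taking \(\gamma \approx 1.4\) for air and assuming \(T_\infty \approx 217\,\mathrm{K}\) at cruise altitude; the actual recovery temperature on the skin is somewhat lower than this stagnation value.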
Every surface, such as windows and panels, was warm to the touch by the end of the flight. Besides the engines, the hottest part of the structure of any supersonic aircraft is the nose, due to aerodynamic heating. The engineers used Hiduminium R.R. 58, an aluminium alloy, throughout the aircraft because of its familiarity, cost and ease of construction. The highest temperature that aluminium could sustain over the life of the aircraft was , which limited the top speed to Mach 2.02. Concorde went through two cycles of heating and cooling during a flight, first cooling down as it gained altitude, then heating up after going supersonic. The reverse happened when descending and slowing down. This had to be factored into the metallurgical and fatigue modelling. A test rig was built that repeatedly heated up a full-size section of the wing and then cooled it, and periodically samples of metal were taken for testing. The Concorde airframe was designed for a life of 45,000 flying hours. Owing to air compression in front of the plane as it travelled at supersonic speed, the fuselage heated up and expanded by as much as . The most obvious manifestation of this was a gap that opened up on the flight deck between the flight engineer's console and the bulkhead. On some aircraft's final supersonic flights before retirement, the flight engineers placed their caps in this expanded gap; the caps were wedged in place when the airframe cooled and shrank again. To keep the cabin cool, Concorde used the fuel as a heat sink for the heat from the air conditioning. The same method also cooled the hydraulics. During supersonic flight the surfaces forward of the cockpit became heated, and a visor was used to deflect much of this heat from directly reaching the cockpit. Concorde had livery restrictions; the majority of the surface had to be covered with a highly reflective white paint to avoid overheating the aluminium structure due to heating effects from supersonic flight at Mach 2. The white finish reduced the skin temperature by . In 1996, Air France briefly painted F-BTSD in a predominantly blue livery, with the exception of the wings, in a promotional deal with Pepsi. In this paint scheme, Air France was advised to remain at for no more than 20 minutes at a time, but there was no restriction at speeds under Mach 1.7. F-BTSD was used because it was not scheduled for any long flights that required extended Mach 2 operations. Structural issues Due to its high speeds, large forces were applied to the aircraft during banks and turns, causing twisting and distortion of the aircraft's structure. In addition there were concerns over maintaining precise control at supersonic speeds. Both of these issues were resolved by active ratio changes between the inboard and outboard elevons, varying at differing speeds including supersonic. Only the innermost elevons, which are attached to the stiffest area of the wings, were active at high speed. Additionally, the narrow fuselage meant that the aircraft flexed. This was visible from the rear passengers' viewpoints. When any aircraft passes the critical Mach number of that particular airframe, the centre of pressure shifts rearwards. This causes a pitch-down moment on the aircraft if the centre of gravity remains where it was. The engineers designed the wings in a specific manner to reduce this shift, but there was still a shift of about . This could have been countered by the use of trim controls, but at such high speeds this would have dramatically increased drag. 
Instead, the distribution of fuel along the aircraft was shifted during acceleration and deceleration to move the centre of gravity, effectively acting as an auxiliary trim control. Range To fly non-stop across the Atlantic Ocean, Concorde required the greatest supersonic range of any aircraft. This was achieved by a combination of engines which were highly efficient at supersonic speeds, a slender fuselage with a high fineness ratio, and a complex wing shape for a high lift-to-drag ratio. This also required carrying only a modest payload and a high fuel capacity, and the aircraft was trimmed with precision to avoid unnecessary drag. Nevertheless, soon after Concorde began flying, a Concorde "B" model was designed with slightly larger fuel capacity and slightly larger wings with leading edge slats to improve aerodynamic performance at all speeds, with the objective of expanding the range to reach markets in new regions. It featured more powerful engines with sound deadening and without the fuel-hungry and noisy afterburner. It was speculated that it was reasonably possible to create an engine with up to a 25% gain in efficiency over the Rolls-Royce/Snecma Olympus 593. This would have given additional range and a greater payload, making new commercial routes possible. This was cancelled due in part to poor sales of Concorde, but also to the rising cost of aviation fuel in the 1970s. Radiation concerns Concorde's high cruising altitude meant people onboard received almost twice the flux of extraterrestrial ionising radiation compared with those travelling on a conventional long-haul flight. Upon Concorde's introduction, it was speculated that this exposure during supersonic travel would increase the likelihood of skin cancer. Due to the proportionally reduced flight time, the overall equivalent dose would normally be less than that of a conventional flight over the same distance. Unusual solar activity might lead to an increase in incident radiation. To prevent incidents of excessive radiation exposure, the flight deck had a radiometer and an instrument to measure the rate of increase or decrease of radiation. If the radiation level became too high, Concorde would descend below . Cabin pressurisation Airliner cabins were usually maintained at a pressure equivalent to elevation. Concorde's pressurisation was set to an altitude at the lower end of this range, . Concorde's maximum cruising altitude was ; subsonic airliners typically cruise below . A sudden reduction in cabin pressure is hazardous to all passengers and crew. Above , a sudden cabin depressurisation would leave a "time of useful consciousness" of up to 10–15 seconds for a conditioned athlete. At Concorde's altitude, the air density is very low; a breach of cabin integrity would result in a loss of pressure severe enough that the plastic emergency oxygen masks installed on other passenger jets would not be effective, and passengers would soon suffer from hypoxia despite quickly donning them. Concorde was equipped with smaller windows to reduce the rate of loss in the event of a breach, a reserve air supply system to augment cabin air pressure, and a rapid descent procedure to bring the aircraft to a safe altitude. The FAA enforces minimum emergency descent rates for aircraft; noting Concorde's higher operating altitude, it concluded that the best response to pressure loss would be a rapid descent. In the event of depressurisation, continuous positive airway pressure would have delivered pressurised oxygen directly to the pilots through masks. 
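The dose comparison made under Radiation concerns above can be made concrete using the transatlantic flight times quoted in the Flight characteristics section that follows; the factor of two in dose rate is the figure given above, and the result is only an approximation:

\[
\frac{D_{\text{Concorde}}}{D_{\text{subsonic}}} \approx \frac{2\dot{D} \times 3.5\ \mathrm{h}}{\dot{D} \times 8\ \mathrm{h}} \approx 0.9,
\]

so the total equivalent dose per crossing was somewhat lower despite the roughly doubled dose rate.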
Flight characteristics While subsonic commercial jets took eight hours to fly from Paris to New York (seven hours from New York to Paris), the average supersonic flight time on the transatlantic routes was just under 3.5 hours. Concorde had a maximum cruising altitude of and an average cruise speed of , more than twice the speed of conventional aircraft. With no other civil traffic operating at its cruising altitude of about , Concorde had exclusive use of dedicated oceanic airways, or "tracks", separate from the North Atlantic Tracks, the routes used by other aircraft to cross the Atlantic. Due to the significantly less variable nature of high altitude winds compared to those at standard cruising altitudes, these dedicated SST tracks had fixed co-ordinates, unlike the standard routes at lower altitudes, whose co-ordinates are replotted twice daily based on forecast weather patterns (jetstreams). Concorde would also be cleared in a block, allowing for a slow climb from during the oceanic crossing as the fuel load gradually decreased. In regular service, Concorde employed an efficient cruise-climb flight profile following take-off. The delta-shaped wings required Concorde to adopt a higher angle of attack at low speeds than conventional aircraft, but this allowed the formation of large low-pressure vortices over the entire upper wing surface, maintaining lift. The normal landing speed was . Because of this high angle, during a landing approach Concorde was on the "back side" of the drag force curve, where raising the nose would increase the rate of descent; the aircraft was thus largely flown on the throttle and was fitted with an autothrottle to reduce the pilot's workload. Brakes and undercarriage Because of the way Concorde's delta wing generated lift, the undercarriage had to be unusually strong and tall to allow for the angle of attack at low speed. At rotation, Concorde would rise to a high angle of attack, about 18 degrees. Prior to rotation the wing generated almost no lift, unlike typical aircraft wings. Combined with the high airspeed at rotation ( indicated airspeed), this increased the stresses on the main undercarriage in a way that was initially unexpected during the development and required a major redesign. Due to the high angle needed at rotation, a small set of wheels was added aft to prevent tailstrikes. The main undercarriage units swing towards each other to be stowed, but due to their great height they also need to contract in length telescopically before swinging, so as to clear each other when stowed. The four main wheel tyres on each bogie unit are inflated to . The twin-wheel nose undercarriage retracts forwards and its tyres are inflated to a pressure of , and the wheel assembly carries a spray deflector to prevent standing water being thrown up into the engine intakes. The tyres are rated to a maximum speed on the runway of . The starboard nose wheel carries a single disc brake to halt wheel rotation during retraction of the undercarriage. The port nose wheel carries speed generators for the anti-skid braking system, which prevents brake activation until the nose and main wheels rotate at the same rate. Additionally, due to the high average take-off speed of , Concorde needed upgraded brakes. Like most airliners, Concorde has anti-skid braking – a system which prevents the tyres from losing traction when the brakes are applied, for greater control during roll-out. The brakes, developed by Dunlop, were the first carbon-based brakes used on an airliner. 
The use of carbon over equivalent steel brakes provided a weight-saving of . Each wheel has multiple discs which are cooled by electric fans. Wheel sensors include brake overload, brake temperature, and tyre deflation. After a typical landing at Heathrow, brake temperatures were around . Landing Concorde required a minimum of runway length, this in fact being considerably less than the shortest runway Concorde ever actually landed on, that of Cardiff Airport. Droop nose Concorde's drooping nose, developed by Marshall's of Cambridge, enabled the aircraft to switch between being streamlined to reduce drag and achieve optimal aerodynamic efficiency during flight, and not obstructing the pilot's view during taxi, take-off, and landing operations. Due to the high angle of attack, the long pointed nose obstructed the view and necessitated the ability to droop. The droop nose was accompanied by a moving visor that retracted into the nose before the nose was lowered. When the nose was raised to horizontal, the visor would rise in front of the cockpit windscreen for aerodynamic streamlining. A controller in the cockpit allowed the visor to be retracted and the nose to be lowered to 5° below the standard horizontal position for taxiing and take-off. Following take-off and after clearing the airport, the nose and visor were raised. Prior to landing, the visor was again retracted and the nose lowered to 12.5° below horizontal for maximal visibility. Upon landing the nose was raised to the 5° position to avoid the possibility of damage due to collision with ground vehicles, and then raised fully before engine shutdown to prevent internal condensation pooling within the radome and seeping down into the aircraft's pitot/ADC system probes. The US Federal Aviation Administration had objected to the restrictive visibility of the visor used on the first two prototype Concordes, which had been designed before a suitable high-temperature window glass had become available; the visor thus required alteration before the FAA would permit Concorde to serve US airports. This led to the redesigned visor used on the production and the four pre-production aircraft (101, 102, 201, and 202). The nose window and visor glass, needed to endure temperatures in excess of during supersonic flight, were developed by Triplex. Operational history 1973 Solar Eclipse Mission Concorde 001 was modified with rooftop portholes
above symposium topics (which highlighted airfield noise, boundary layer management and interactions between adjacent engines) and the requirement that the powerplant, at Mach 2, tolerate combinations of pushovers, sideslips, pull-ups and throttle slamming without surging. Extensive development testing with design changes and changes to intake and engine control laws would address most of the issues except airfield noise and the interaction between adjacent powerplants at speeds above Mach 1.6 which meant Concorde "had to be certified aerodynamically as a twin-engined aircraft above Mach 1.6". Rolls-Royce had a design proposal, the RB.169, for the aircraft at the time of Concorde's initial design but "to develop a brand-new engine for Concorde would have been prohibitively expensive" so an existing engine, already flying in the supersonic BAC TSR-2 strike bomber prototype, was chosen. It was the BSEL Olympus Mk 320 turbojet, a development of the Bristol engine first used for the subsonic Avro Vulcan bomber. Great confidence was placed in being able to reduce the noise of a turbojet and massive strides by SNECMA in silencer design were reported during the programme. However, by 1974 the spade silencers which projected into the exhaust were reported to be ineffective but "entry-into-service aircraft are likely to meet their noise guarantees". The Olympus Mk.622 with reduced jet velocity was proposed to reduce the noise but it was not developed. Situated behind the leading edge of the wing, the engine intake had wing boundary layer ahead of it. Two-thirds was diverted and the remaining third which entered the intake did not adversely affect the intake efficiency except during pushovers when the boundary layer thickened ahead of the intake and caused surging. Extensive wind tunnel testing helped define leading edge modifications ahead of the intakes which solved the problem. Each engine had its own intake and the engine nacelles were paired with a splitter plate between them to minimise adverse behaviour of one powerplant influencing the other. Only above was an engine surge likely to affect the adjacent engine. Concorde needed to fly long distances to be economically viable; this required high efficiency from the powerplant. Turbofan engines were rejected due to their larger cross-section producing excessive drag. Olympus turbojet technology was available to be developed to meet the design requirements of the aircraft, although turbofans would be studied for any future SST. The aircraft used reheat (afterburners) only at take-off and to pass through the upper transonic regime to supersonic speeds, between Mach 0.95 and 1.7. Reheat was switched off at all other times. Due to jet engines being highly inefficient at low speeds, Concorde burned of fuel (almost 2% of the maximum fuel load) taxiing to the runway. Fuel used is Jet A-1. Due to the high thrust produced even with the engines at idle, only the two outer engines were run after landing for easier taxiing and less brake pad wear – at low weights after landing, the aircraft would not remain stationary with all four engines idling requiring the brakes to be continuously applied to prevent the aircraft from rolling. The air intake design for Concorde's engines was especially critical. 
The intakes had to slow down supersonic inlet air to subsonic speeds with high pressure recovery to ensure efficient operation at cruising speed while providing low distortion levels (to prevent engine surge) and maintaining high efficiency for all likely ambient temperatures to be met in cruise. They had to provide adequate subsonic performance for diversion cruise and low engine-face distortion at take-off. They also had to provide an alternative path for excess intake air during engine throttling or shutdowns. The variable intake features required to meet all these requirements consisted of front and rear ramps, a dump door, an auxiliary inlet and a ramp bleed to the exhaust nozzle. As well as supplying air to the engine, the intake also supplied air through the ramp bleed to the propelling nozzle. The nozzle ejector (or aerodynamic) design, with variable exit area and secondary flow from the intake, contributed to good expansion efficiency from take-off to cruise. Concorde's Air Intake Control Units (AICUs) made use of a digital processor to provide the necessary accuracy for intake control. It was the world's first use of a digital processor to be given full authority control of an essential system in a passenger aircraft. It was developed by the Electronics and Space Systems (ESS) division of the British Aircraft Corporation after it became clear that the analogue AICUs fitted to the prototype aircraft and developed by Ultra Electronics were found to be insufficiently accurate for the tasks in hand. Engine failure causes problems on conventional subsonic aircraft; not only does the aircraft lose thrust on that side but the engine creates drag, causing the aircraft to yaw and bank in the direction of the failed engine. If this had happened to Concorde at supersonic speeds, it theoretically could have caused a catastrophic failure of the airframe. Although computer simulations predicted considerable problems, in practice Concorde could shut down both engines on the same side of the aircraft at Mach 2 without the predicted difficulties. During an engine failure the required air intake is virtually zero. So, on Concorde, engine failure was countered by the opening of the auxiliary spill door and the full extension of the ramps, which deflected the air downwards past the engine, gaining lift and minimising drag. Concorde pilots were routinely trained to handle double engine failure. Concorde's thrust-by-wire engine control system was developed by Ultra Electronics. Heating problems Air compression on the outer surfaces caused the cabin to heat up during flight. Every surface, such as windows and panels, was warm to the touch by the end of the flight. Besides engines, the hottest part of the structure of any supersonic aircraft is the nose, due to aerodynamic heating. The engineers used Hiduminium R.R. 58, an aluminium alloy, throughout the aircraft because of its familiarity, cost and ease of construction. The highest temperature that aluminium could sustain over the life of the aircraft was , which limited the top speed to Mach 2.02. Concorde went through two cycles of heating and cooling during a flight, first cooling down as it gained altitude, then heating up after going supersonic. The reverse happened when descending and slowing down. This had to be factored into the metallurgical and fatigue modelling. A test rig was built that repeatedly heated up a full-size section of the wing, and then cooled it, and periodically samples of metal were taken for testing. 
The Concorde airframe was designed for a life of 45,000 flying hours. Owing to air compression in front of the plane as it travelled at supersonic speed, the fuselage heated up and expanded by as much as . The most obvious manifestation of this was a gap that opened up on the flight deck between the flight engineer's console and the bulkhead. On some aircraft that conducted a retiring supersonic flight, the flight engineers placed their caps in this expanded gap, wedging the cap when the airframe shrank again. To keep the cabin cool, Concorde used the fuel as a heat sink for the heat from the air conditioning. The same method also cooled the hydraulics. During supersonic flight the surfaces forward from the cockpit became heated, and a visor was used to deflect much of this heat from directly reaching the cockpit. Concorde had livery restrictions; the majority of the surface had to be covered with a highly reflective white paint to avoid overheating the aluminium structure due to heating effects from supersonic flight at Mach 2. The white finish reduced the skin temperature by . In 1996, Air France briefly painted F-BTSD in a predominantly blue livery, with the exception of the wings, in a promotional deal with Pepsi. In this paint scheme, Air France was advised to remain at for no more than 20 minutes at a time, but there was no restriction at speeds under Mach 1.7. F-BTSD was used because it was not scheduled for any long flights that required extended Mach 2 operations. Structural issues Due to its high speeds, large forces were applied to the aircraft during banks and turns, and caused twisting and distortion of the aircraft's structure. In addition there were concerns over maintaining precise control at supersonic speeds. Both of these issues were resolved by active ratio changes between the inboard and outboard elevons, varying at differing speeds including supersonic. Only the innermost elevons, which are attached to the stiffest area of the wings, were active at high speed. Additionally, the narrow fuselage meant that the aircraft flexed. This was visible from the rear passengers' viewpoints. When any aircraft passes the critical mach of that particular airframe, the centre of pressure shifts rearwards. This causes a pitch down moment on the aircraft if the centre of gravity remains where it was. The engineers designed the wings in a specific manner to reduce this shift, but there was still a shift of about . This could have been countered by the use of trim controls, but at such high speeds this would have dramatically increased drag. Instead, the distribution of fuel along the aircraft was shifted during acceleration and deceleration to move the centre of gravity, effectively acting as an auxiliary trim control. Range To fly non-stop across the Atlantic Ocean, Concorde required the greatest supersonic range of any aircraft. This was achieved by a combination of engines which were highly efficient at supersonic speeds, a slender fuselage with high fineness ratio, and a complex wing shape for a high lift-to-drag ratio. This also required carrying only a modest payload and a high fuel capacity, and the aircraft was trimmed with precision to avoid unnecessary drag. Nevertheless, soon after Concorde began flying, a Concorde "B" model was designed with slightly larger fuel capacity and slightly larger wings with leading edge slats to improve aerodynamic performance at all speeds, with the objective of expanding the range to reach markets in new regions. 
It featured more powerful engines with sound deadening and without the fuel-hungry and noisy afterburner. It was speculated that it was reasonably possible to create an engine with up to 25% gain in efficiency over the Rolls-Royce/Snecma Olympus 593. This would have given additional range and a greater payload, making new commercial routes possible. This was cancelled due in part to poor sales of Concorde, but also to the rising cost of aviation fuel in the 1970s. Radiation concerns Concorde's high cruising altitude meant people onboard received almost twice the flux of extraterrestrial ionising radiation as those travelling on a conventional long-haul flight. Upon Concorde's introduction, it was speculated that this exposure during supersonic travels would increase the likelihood of skin cancer. Due to the proportionally reduced flight time, the overall equivalent dose would normally be less than a conventional flight over the same distance. Unusual solar activity might lead to an increase in incident radiation. To prevent incidents of excessive radiation exposure, the flight deck had a radiometer and an instrument to measure the rate of increase or decrease of radiation. If the radiation level became too high, Concorde would descend below . Cabin pressurisation Airliner cabins were usually maintained at a pressure equivalent to elevation. Concorde's pressurisation was set to an altitude at the lower end of this range, . Concorde's maximum cruising altitude was ; subsonic airliners typically cruise below . A sudden reduction in cabin pressure is hazardous to all passengers and crew. Above , a sudden cabin depressurisation would leave a "time of useful consciousness" up to 10–15 seconds for a conditioned athlete. At Concorde's altitude, the air density is very low; a breach of cabin integrity would result in a loss of pressure severe enough that the plastic emergency oxygen masks installed on other passenger jets would not be effective and passengers would soon suffer from hypoxia despite quickly donning them. Concorde was equipped with smaller windows to reduce the rate of loss in the event of a breach, a reserve air supply system to augment cabin air pressure, and a rapid descent procedure to bring the aircraft to a safe altitude. The FAA enforces minimum emergency descent rates for aircraft and noting Concorde's higher operating altitude, concluded that the best response to pressure loss would be a rapid descent. Continuous positive airway pressure would have delivered pressurised oxygen directly to the pilots through masks. Flight characteristics While subsonic commercial jets took eight hours to fly from Paris to New York (seven hours from New York to Paris), the average supersonic flight time on the transatlantic routes was just under 3.5 hours. Concorde had a maximum cruising altitude of and an average cruise speed of , more than twice the speed of conventional aircraft. With no other civil traffic operating at its cruising altitude of about , Concorde had exclusive use of dedicated oceanic airways, or "tracks", separate from the North Atlantic Tracks, the routes used by other aircraft to cross the Atlantic. Due to the significantly less variable nature of high altitude winds compared to those at standard cruising altitudes, these dedicated SST tracks had fixed co-ordinates, unlike the standard routes at lower altitudes, whose co-ordinates are replotted twice daily based on forecast weather patterns (jetstreams). 
Concorde would also be cleared in a block, allowing for a slow climb from during the oceanic crossing as the fuel load gradually decreased. In regular service, Concorde employed an efficient cruise-climb flight profile following take-off. The delta-shaped wings required Concorde to adopt a higher angle of attack at low speeds than conventional aircraft, but this allowed the formation of large low-pressure vortices over the entire upper wing surface, maintaining lift. The normal landing speed was . Because of this high angle, during a landing approach Concorde was on the "back side" of the drag force curve, where raising the nose would increase the rate of descent; the aircraft was thus largely flown on the throttle and was fitted with an autothrottle to reduce the pilot's workload. Brakes and undercarriage Because of the way Concorde's delta wing generated lift, the undercarriage had to be unusually strong and tall to allow for the angle of attack at low speed. At rotation, Concorde would rise to a high angle of attack, about 18 degrees. Prior to rotation the wing generated almost no lift, unlike typical aircraft wings. Combined with the high airspeed at rotation ( indicated airspeed), this increased the stresses on the main undercarriage in a way that was initially unexpected during development and required a major redesign. Due to the high angle needed at rotation, a small set of wheels was added aft to prevent tailstrikes. The main undercarriage units swung towards each other to be stowed, but due to their great height they also needed to contract in length telescopically before swinging, in order to clear each other when stowed. The four main wheel tyres on each bogie unit are inflated to . The twin-wheel nose undercarriage retracts forwards and its tyres are inflated to a pressure of , and the wheel assembly carries a spray deflector to prevent standing water being thrown up into the engine intakes. The tyres are rated to a maximum speed on the runway of . The starboard nose wheel carries a single disc brake to halt wheel rotation during retraction of the undercarriage. The port nose wheel carries speed generators for the anti-skid braking system, which prevents brake activation until nose and main wheels rotate at the same rate. Additionally, due to the high average take-off speed of , Concorde needed upgraded brakes. Like most airliners, Concorde had anti-skid braking – a system which prevents the tyres from losing traction when the brakes are applied, giving greater control during roll-out. The brakes, developed by Dunlop, were the first carbon-based brakes used on an airliner. The use of carbon over equivalent steel brakes provided a weight saving of . Each wheel has multiple discs which are cooled by electric fans. Wheel sensors covered brake overload, brake temperature, and tyre deflation. After a typical landing at Heathrow, brake temperatures were around . Landing Concorde required a minimum of runway length; this was in fact considerably less than the shortest runway on which Concorde ever actually landed, that of Cardiff Airport. Droop nose Concorde's drooping nose, developed by Marshall's of Cambridge, enabled the aircraft to switch from being streamlined to reduce drag and achieve optimal aerodynamic efficiency during flight, to not obstructing the pilot's view during taxi, take-off, and landing operations. Due to the high angle of attack, the long pointed nose obstructed the view and necessitated the ability to droop. 
The droop nose was accompanied by a moving visor that retracted into the nose before the nose was lowered. When the nose was raised to horizontal, the visor would rise in front of the cockpit windscreen for aerodynamic streamlining. A controller in the cockpit allowed the visor to be retracted and the nose to be lowered to 5° below the standard horizontal position for taxiing and take-off. Following take-off and after clearing the airport, the nose and visor were raised. Prior to landing, the visor was again retracted and the nose lowered to 12.5° below horizontal for maximal visibility. Upon landing, the nose was raised to the 5° position to avoid the possibility of damage from collision with ground vehicles, and then raised fully before engine shutdown to prevent internal condensation pooling within the radome and seeping down into the aircraft's pitot/ADC system probes. The US Federal Aviation Administration had objected to the restricted visibility through the visor used on the first two prototype Concordes, which had been designed before a suitable high-temperature window glass became available, and required its alteration before the FAA would permit Concorde to serve US airports. This led to the redesigned visor used on the production aircraft and the four pre-production aircraft (101, 102, 201, and 202). The nose window and visor glass, needed to endure temperatures in excess of during supersonic flight, were developed by Triplex. Operational history 1973 Solar Eclipse Mission Concorde 001 was modified with rooftop portholes for use on the 1973 solar eclipse mission and equipped with observation instruments. It performed the longest observation of a solar eclipse to date, about 74 minutes. Scheduled flights Scheduled flights began on 21 January 1976 on the London–Bahrain and Paris–Rio de Janeiro (via Dakar) routes, with BA flights using the Speedbird Concorde call sign to notify air traffic control of the aircraft's unique abilities and restrictions, while the French used their normal call signs. The Paris–Caracas route (via the Azores) began on 10 April. The US Congress had just banned Concorde landings in the US, mainly due to citizen protest over sonic booms, preventing launch on the coveted North Atlantic routes. The US Secretary of Transportation, William Coleman, gave permission for Concorde service to Washington Dulles International Airport, and Air France and British Airways simultaneously began a thrice-weekly service to Dulles on 24 May 1976. Due to low demand, Air France cancelled its Washington service in October 1982, while British Airways cancelled its own in November 1994. When the US ban on JFK Concorde operations was lifted in February 1977, New York banned Concorde locally. The ban came to an end on 17 October 1977 when the Supreme Court of the United States declined to overturn a lower court's ruling rejecting efforts by the Port Authority of New York and New Jersey and a grass-roots campaign led by Carol Berman to continue the ban. In spite of complaints about noise, the noise report noted that Air Force One, at the time a Boeing VC-137, was louder than Concorde at subsonic speeds and during take-off and landing. Scheduled service from Paris and London to New York's John F. Kennedy Airport began on 22 November 1977. In December 1977, British Airways and Singapore Airlines shared a Concorde for flights between London and Singapore International Airport at Paya Lebar via Bahrain. 
The aircraft, BA's Concorde G-BOAD, was painted in Singapore Airlines livery on the port side and British Airways livery on the starboard side. The service was discontinued after three return flights because of noise complaints from the Malaysian government; it could only be reinstated on a new route bypassing Malaysian airspace in 1979. A dispute with India prevented Concorde from reaching supersonic speeds in Indian airspace, so the route was eventually declared not viable and discontinued in 1980. During the Mexican oil boom, Air France flew Concorde twice weekly to Mexico City's Benito Juárez International Airport via Washington, DC, or New York City, from September 1978 to November 1982. The worldwide economic crisis during that period resulted in this route's cancellation; the last flights were almost empty. The routing between Washington or New York and Mexico City included a deceleration, from Mach 2.02 to Mach 0.95, to cross Florida subsonically and avoid creating a sonic boom over the state; Concorde then re-accelerated back to high speed while crossing the Gulf of Mexico. On 1 April 1989, on an around-the-world luxury tour charter, British Airways implemented changes to this routing that allowed G-BOAF to maintain Mach 2.02 by passing around Florida to the east and south. Periodically Concorde visited the region on similar chartered flights to Mexico City and Acapulco. From December 1978 to May 1980, Braniff International Airways leased 11 Concordes, five from Air France and six from British Airways. These were used on subsonic flights between Dallas–Fort Worth and Washington Dulles International Airport, flown by Braniff flight crews. Air France and British Airways crews then took over for the continuing supersonic flights to London and Paris. The aircraft were registered in both the United States and their home countries; the European registration was covered while being operated by Braniff, retaining full AF/BA liveries. The flights were not profitable and typically less than 50% booked, forcing Braniff to end its tenure as the only US Concorde operator in May 1980. In its early years, the British Airways Concorde service had a greater number of "no shows" (passengers who booked a flight and then failed to appear at the gate for boarding) than any other aircraft in the fleet. British Caledonian interest Following the launch of British Airways Concorde services, Britain's other major airline, British Caledonian (BCal), set up a task force headed by Gordon Davidson, BA's former Concorde director, to investigate the possibility of their own Concorde operations. This was seen as particularly viable for the airline's long-haul network as there were two unsold aircraft then available for purchase. One important reason for BCal's interest in Concorde was that the British Government's 1976 aviation policy review had opened the possibility of BA setting up supersonic services in competition with BCal's established sphere of influence. To counteract this potential threat, BCal considered their own independent Concorde plans, as well as a partnership with BA. BCal were considered most likely to have set up a Concorde service on the Gatwick–Lagos route, a major source of revenue and profits within BCal's scheduled route network; BCal's Concorde task force did assess the viability of a daily supersonic service complementing the existing subsonic widebody service on this route. BCal entered into a bid to acquire at least one Concorde. 
However, BCal eventually arranged for two aircraft to be leased from BA and Aérospatiale respectively, to be maintained by either BA or Air France. BCal's envisaged two-Concorde fleet would have required a high level of aircraft usage to be cost-effective; therefore, BCal had decided to operate the second aircraft on a supersonic service between Gatwick and Atlanta, with a stopover at either Gander or Halifax. Consideration was given to services to Houston and various points on its South American network at a later stage. Both supersonic services were to be launched at some point during 1980; however, steeply rising oil prices caused by the 1979 energy crisis led to BCal shelving their supersonic ambitions. British Airways buys its Concordes outright By around 1981 in the UK, the future for Concorde looked bleak. The British government had lost money operating Concorde every year, and moves were afoot to cancel the service entirely. A cost projection came back with greatly reduced metallurgical testing costs because the test rig for the wings had built up enough data to last for 30 years and could be shut down. Despite this, the government was not keen to continue. In 1983, BA's managing director, Sir John King, convinced the government to sell the aircraft outright to the then state-owned British Airways for £16.5 million plus the first year's profits. In 2003, Lord Heseltine, who was the Minister responsible at the time, revealed to Alan Robb on BBC Radio 5 Live, that the aircraft had been sold for "next to nothing". Asked by Robb if it was the worst deal ever negotiated by a government minister, he replied "That is probably right. But if you have your hands tied behind your back and no cards and a very skillful negotiator on the other side of the table... I defy you to do any [better]." British Airways was subsequently privatised in 1987. Operating economics In 1983, Pan American accused the British Government of subsidising British Airways Concorde air fares, on which a return London–New York was £2,399 (£ in prices), compared to £1,986 (£) with a subsonic first class return, and London–Washington return was £2,426 (£) instead of £2,258 (£) subsonic. Research revealed that passengers thought that the fare was higher than it actually was, so the airline raised ticket prices to match these perceptions. It is reported that British Airways then ran Concorde at a profit. Its estimated operating costs were $3,800 per block hour in 1972 (), compared to actual 1971 operating costs of $1,835 for a 707 and $3,500 for a 747 (equivalent to $ and $, respectively); for a London–New York sector, a 707 cost $13,750 or 3.04¢ per seat/nmi (in 1971 dollars), a 747 $26,200 or 2.4¢ per seat/nmi and Concorde $14,250 or 4.5¢ per seat/nmi. Concorde's unit cost was then $33.8 million ($ in dollars). Other services Between March 1984 and January 1991, British Airways flew a thrice-weekly Concorde service between London and Miami, stopping at Washington Dulles International Airport. Until 2003, Air France and British Airways continued to operate the New York services daily. From 1987 to 2003 British Airways flew a Saturday morning Concorde service to Grantley Adams International Airport, Barbados, during the summer and winter holiday season. Prior to the Air France Paris crash, several UK and French tour operators operated charter flights to European destinations on a regular basis; the charter business was viewed as lucrative by British Airways and Air France. 
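The per-seat figures quoted in the operating-economics comparison above are internally consistent; a minimal Python sketch back-computes the implied seating capacities, assuming a London–New York sector of roughly 3,000 nautical miles (the exact distance used in the original comparison is not stated, so this figure is an assumption).

```python
# Back-compute implied seat counts from the 1971 sector costs and the quoted
# cents-per-seat-nautical-mile figures. SECTOR_NMI is an assumed round number.

SECTOR_NMI = 3_000  # assumed London-New York sector length in nautical miles

aircraft = {
    # type: (sector cost in 1971 USD, quoted cents per seat/nmi)
    "707":      (13_750, 3.04),
    "747":      (26_200, 2.40),
    "Concorde": (14_250, 4.50),
}

for name, (sector_cost, cents_per_seat_nmi) in aircraft.items():
    dollars_per_seat_nmi = cents_per_seat_nmi / 100
    implied_seats = sector_cost / (SECTOR_NMI * dollars_per_seat_nmi)
    print(f"{name}: implied capacity of about {implied_seats:.0f} seats")

# Output: roughly 151, 364 and 106 seats respectively, broadly in line with
# typical 707, 747 and Concorde cabin capacities of the period.
```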
In 1997, British Airways held a promotional contest to mark the 10th anniversary of the airline's move into the private sector. The promotion was a lottery for 190 tickets on a flight to New York, valued at £5,400 each but offered at £10. Contestants had to call a special hotline, competing with up to 20 million other people. Retirement On 10 April 2003, Air France and British Airways simultaneously announced they would retire Concorde later that year. They cited low passenger numbers following the 25 July 2000 crash, the slump in air travel following the September 11 attacks, and rising maintenance costs: Airbus, the company that acquired Aérospatiale in 2000, had made a decision in 2003 to no longer supply replacement parts for the aircraft. Although Concorde was technologically advanced when introduced in the 1970s, 30 years later its analogue cockpit was outdated. There had been little commercial pressure to upgrade Concorde due to a lack of competing aircraft, unlike other airliners of the same era such as the Boeing 747. By its retirement, it was the last aircraft in the British Airways fleet that had a flight engineer; other aircraft, such as the modernised 747-400, had eliminated the role. On 11 April 2003, Virgin Atlantic founder Sir Richard Branson announced that the company was interested in purchasing British Airways' Concorde fleet "for the same price that they were given them for – one pound". British Airways dismissed the idea, prompting Virgin to increase their offer to £1 million each. Branson claimed that when BA was privatised, a clause in the agreement required them to allow another British airline to operate Concorde if BA ceased to do so, but the Government denied the existence of such a clause. In October 2003, Branson wrote in The Economist that his final offer was "over £5 million" and that he had intended to operate the fleet "for many years to come". The chances for keeping Concorde in service were
Due to the difficulties of transporting cannon in mountainous terrain, their use was less common compared to their use in Europe. Southeast Asia The Javanese Majapahit Empire was arguably able to encompass much of modern-day Indonesia due to its unique mastery of bronze-smithing and use of a central arsenal fed by a large number of cottage industries within the immediate region. Cannons were introduced to Majapahit when Kublai Khan's Chinese army under the leadership of Ike Mese sought to invade Java in 1293. The History of Yuan mentions that the Mongols used a weapon called p'ao against Daha forces. This weapon is interpreted differently by researchers: it may have been a trebuchet that threw thunderclap bombs, or it may have been firearms, cannons, or rockets. It is possible that the gunpowder weapons carried by the Mongol-Chinese troops amounted to more than one type. Thomas Stamford Raffles wrote in The History of Java that in 1247 saka (1325 AD), cannon had been widely used in Java, especially by the Majapahit. It is recorded that the small kingdoms in Java that sought the protection of Majapahit had to hand over their cannons to the Majapahit. Majapahit under Mahapatih (prime minister) Gajah Mada (in office 1329–1364) utilized gunpowder technology obtained from the Yuan dynasty for use in its naval fleet. One of the earliest references to cannon and artillerymen in Java is from the year 1346. The Mongol-Chinese gunpowder technology of the Yuan dynasty resulted in the Eastern-style cetbang, which is similar to the Chinese cannon. Swivel guns, however, only developed in the archipelago because of the close maritime relations of the Nusantara archipelago with the territory of West India after 1460 AD, which brought new types of gunpowder weapons to the archipelago, likely through Arab intermediaries. These weapons seem to have been cannons and guns of Ottoman tradition, for example the prangi, which is a breech-loading swivel gun. A new type of cetbang, called the Western-style cetbang, was derived from the Turkish prangi. Just like the prangi, this cetbang is a breech-loading swivel gun made of bronze or iron, firing single rounds or scattershot (a large number of small bullets). Cannons derived from the Western-style cetbang can be found in Nusantara; among them were the lantaka and lela. Most lantakas were made of bronze and the earliest ones were breech-loaded. There is a trend toward muzzle-loading weapons during colonial times. The pole gun (bedil tombak) was recorded as being used by Java in 1413. Portuguese and Spanish invaders were unpleasantly surprised and even outgunned on occasion. Circa 1540, the Javanese, always alert for new weapons, found the newly arrived Portuguese weaponry superior to that of the locally made variants. Majapahit-era cetbang cannon were further improved and used in the Demak Sultanate period during the Demak invasion of Portuguese Malacca. During this period, the iron for manufacturing Javanese cannon was imported from Khorasan in northern Persia. The material was known by the Javanese as wesi kurasani (Khorasan iron). When the Portuguese came to the archipelago, they referred to it as Berço, which was also used to refer to any breech-loading swivel gun, while the Spaniards called it Verso. Duarte Barbosa, writing ca. 1514, said that the inhabitants of Java were great masters in casting artillery and very good artillerymen. They made many one-pounder cannons (cetbang or rentaka), long muskets, spingarde (arquebus), schioppi (hand cannon), Greek fire, guns (cannon), and other fire-works. 
Every place was considered excellent in casting artillery, and in the knowledge of using it. In 1513, the Javanese fleet led by Patih Yunus sailed to attack Portuguese Malacca "with much artillery made in Java, for the Javanese are skilled in founding and casting, and in all works in iron, over and above what they have in India". By the early 16th century, the Javanese were already producing large guns locally; some of these have survived to the present day and are dubbed "sacred cannon" or "holy cannon". These cannons varied between 180- and 260-pounders, weighing anywhere between 3 and 8 tons, with lengths of between . Cannons were used by the Ayutthaya Kingdom in 1352 during its invasion of the Khmer Empire. Within a decade large quantities of gunpowder could be found in the Khmer Empire. By the end of the century firearms were also used by the Trần dynasty. Saltpeter harvesting was recorded by Dutch and German travelers as being common in even the smallest villages; it was collected from the decomposition process of large dung hills specifically piled for the purpose. The Dutch punishment for possession of non-permitted gunpowder appears to have been amputation. Ownership and manufacture of gunpowder were later prohibited by the colonial Dutch occupiers. According to Colonel McKenzie, quoted in Sir Thomas Stamford Raffles' The History of Java (1817), the purest sulfur was supplied from a crater in a mountain near the straits of Bali. Africa In Africa, the Adal Sultanate and the Abyssinian Empire both deployed cannons during the Adal-Abyssinian War. With cannon imported from Arabia and the wider Islamic world, the Adalites, led by Ahmed ibn Ibrahim al-Ghazi, were the first African power to introduce cannon warfare to the African continent. Later, as the Portuguese Empire entered the war, it supplied the Abyssinians with cannons and trained them in their use, while the Ottoman Empire sent soldiers and cannon to back Adal. The conflict proved, through their use on both sides, the value of firearms such as the matchlock musket, cannon, and the arquebus over traditional weapons. Offensive and defensive use While previous smaller guns could burn down structures with fire, larger cannons were so effective that engineers were forced to develop stronger castle walls to prevent their keeps from falling. Cannons, however, were not used only to batter down walls; fortifications also began using cannons as defensive instruments, as in India, where the fort of Raicher had gun ports built into its walls to accommodate the use of defensive cannons. In his Art of War, Niccolò Machiavelli opined that field artillery forced an army to take up a defensive posture, and that this opposed a more ideal offensive stance. Machiavelli's concerns can be seen in the criticisms of Portuguese mortars used in India during the sixteenth century, as lack of mobility was one of the key problems with the design. In Russia the early cannons were again placed in forts as a defensive tool. Cannon were also difficult to move around in certain types of terrain, with mountains providing a great obstacle; for these reasons, offensives conducted with cannons would be difficult to pull off in places such as Iran. Early modern period By the 16th century, cannons were made in a great variety of lengths and bore diameters, but the general rule was that the longer the barrel, the longer the range. Some cannons made during this time had barrels exceeding in length, and could weigh up to . 
Consequently, large amounts of gunpowder were needed to allow them to fire stone balls several hundred yards. By mid-century, European monarchs began to classify cannons to reduce the confusion. Henry II of France opted for six sizes of cannon, but others settled for more; the Spanish used twelve sizes, and the English sixteen. They are, from largest to smallest: the cannon royal, cannon, cannon serpentine, bastard cannon, demicannon, pedrero, culverin, basilisk, demiculverin, bastard culverin, saker, minion, falcon, falconet, serpentine, and rabinet. Better powder had been developed by this time as well. The finely ground powder used by the first bombards was replaced by a "corned" variety of coarse grains. This coarse powder had pockets of air between grains, allowing fire to travel through and ignite the entire charge quickly and uniformly. The end of the Middle Ages saw the construction of larger, more powerful cannon, as well as their spread throughout the world. As they were not effective at breaching the newer fortifications resulting from the development of cannon, siege engines—such as siege towers and trebuchets—became less widely used. However, wooden "battery-towers" took on a similar role as siege towers in the gunpowder age—such as that used at the Siege of Kazan in 1552, which could hold ten large-calibre cannon, in addition to 50 lighter pieces. Another notable effect of cannon on warfare during this period was the change in conventional fortifications. Niccolò Machiavelli wrote, "There is no wall, whatever its thickness that artillery will not destroy in only a few days." Although castles were not immediately made obsolete by cannon, their use and importance on the battlefield rapidly declined. Instead of majestic towers and merlons, the walls of new fortresses were thick, angled, and sloped, while towers became low and stout; increasing use was also made of earth and brick in breastworks and redoubts. These new defences became known as bastion forts, after their characteristic shape, which attempted to force any advance towards them directly into the firing line of the guns. A few of these featured cannon batteries, such as the House of Tudor's Device Forts, in England. Bastion forts soon replaced castles in Europe, and, eventually, those in the Americas, as well. By the end of the 15th century, several technological advancements made cannons more mobile. Wheeled gun carriages and trunnions became common, and the invention of the limber further facilitated transportation. As a result, field artillery became more viable, and began to see more widespread use, often alongside the larger cannons intended for sieges. Better gunpowder, cast-iron projectiles (replacing stone), and the standardisation of calibres meant that even relatively light cannons could be deadly. In The Art of War, Niccolò Machiavelli observed that "It is true that the arquebuses and the small artillery do much more harm than the heavy artillery." This was the case at the Battle of Flodden, in 1513: the English field guns outfired the Scottish siege artillery, firing two or three times as many rounds. 
Due to their relatively slow speed, lack of organisation, and undeveloped tactics, the combination of pike and shot still dominated the battlefields of Europe. Innovations continued, notably the German invention of the mortar, a thick-walled, short-barrelled gun that blasted shot upward at a steep angle. Mortars were useful for sieges, as they could hit targets behind walls or other defences. This type of cannon found more use with the Dutch, who learnt to shoot bombs filled with powder from it. Setting the bomb fuse was a problem. "Single firing" was first used to ignite the fuse, where the bomb was placed with the fuse down against the cannon's propellant. This often resulted in the fuse being blown into the bomb, causing it to blow up as it left the mortar. Because of this, "double firing" was tried, where the gunner lit the fuse and then the touch hole. This, however, required considerable skill and timing, and was especially dangerous if the gun misfired, leaving a lighted bomb in the barrel. Not until 1650 was it accidentally discovered that double-lighting was superfluous, as the heat of firing would light the fuse. Gustavus Adolphus of Sweden emphasised the use of light cannon and mobility in his army, and created new formations and tactics that revolutionised artillery. He discontinued using all 12 pounder—or heavier—cannon as field artillery, preferring, instead, to use cannons that could be handled by only a few men. One obsolete type of gun, the "leatheren", was replaced by 4 pounder and 9 pounder demi-culverins. These could be operated by three men, and pulled by only two horses. Gustavus Adolphus's army was also the first to use a cartridge that contained both powder and shot, which sped up reloading and increased the rate of fire. Finally, against infantry he pioneered the use of canister shot—essentially a tin can filled with musket balls. Until then there was no more than one cannon for every thousand infantrymen on the battlefield, but Gustavus Adolphus increased the number of cannons sixfold. Each regiment was assigned two pieces, though he often arranged them into batteries instead of distributing them piecemeal. He used these batteries to break his opponent's infantry line, while his cavalry would outflank their heavy guns. At the Battle of Breitenfeld, in 1631, Adolphus proved the effectiveness of the changes made to his army by defeating Johann Tserclaes, Count of Tilly. Although severely outnumbered, the Swedes were able to fire between three and five times as many volleys of artillery, and their infantry's linear formations helped ensure they did not lose any ground. Battered by cannon fire, and low on morale, Tilly's men broke ranks and fled. In England cannons were being used to besiege various fortified buildings during the English Civil War. Nathaniel Nye is recorded as testing a Birmingham cannon in 1643 and experimenting with a saker in 1645. From 1645 he was the master gunner to the Parliamentarian garrison at Evesham and in 1646 he successfully directed the artillery at the Siege of Worcester, detailing his experiences in his 1647 book The Art of Gunnery. 
Believing that war was as much a science as an art, his explanations focused on triangulation, arithmetic, theoretical mathematics, and cartography as well as practical considerations such as the ideal specification for gunpowder or slow matches. His book acknowledged mathematicians such as Robert Recorde and Marcus Jordanus as well as earlier military writers on artillery such as Niccolò Fontana Tartaglia and Thomas (or Francis) Malthus (author of A Treatise on Artificial Fire-Works). Around this time also came the idea of aiming the cannon to hit a target. Gunners controlled the range of their cannons by measuring the angle of elevation, using a "gunner's quadrant." Cannons did not have sights; therefore, even with measuring tools, aiming was still largely guesswork. In the latter half of the 17th century, the French engineer Sébastien Le Prestre de Vauban introduced a more systematic and scientific approach to attacking gunpowder fortresses, in a time when many field commanders "were notorious dunces in siegecraft." Careful sapping forward, supported by enfilading ricochets, was a key feature of this system, and it even allowed Vauban to calculate the length of time a siege would take. He was also a prolific builder of bastion forts, and did much to popularize the idea of "depth in defence" in the face of cannon. These principles were followed into the mid-19th century, when changes in armaments necessitated greater depth of defence than Vauban had provided for. It was only in the years prior to World War I that new works began to break radically away from his designs. 18th and 19th centuries The lower tier of 17th-century English ships of the line was usually equipped with demi-cannons, guns that fired a solid shot, and could weigh up to . Demi-cannons were capable of firing these heavy metal balls with such force that they could penetrate more than a metre of solid oak, from a distance of , and could dismast even the largest ships at close range. Full cannon fired a shot, but were discontinued by the 18th century, as they were too unwieldy. By the end of the 18th century, principles long adopted in Europe specified the characteristics of the Royal Navy's cannon, as well as the acceptable defects, and their severity. The United States Navy tested guns by measuring them, firing them two or three times—termed "proof by powder"—and using pressurized water to detect leaks. The carronade was adopted by the Royal Navy in 1779; the lower muzzle velocity of the round shot when fired from this cannon was intended to create more wooden splinters when hitting the structure of an enemy vessel, as they were believed to be more deadly than the ball by itself. The carronade was much shorter, and weighed between a third and a quarter as much as the equivalent long gun; for example, a 32-pounder carronade weighed less than a ton, compared with a 32-pounder long gun, which weighed over 3 tons. The guns were, therefore, easier to handle, and also required less than half as much gunpowder, allowing fewer men to crew them. Carronades were manufactured in the usual naval gun calibres, but were not counted in a ship of the line's rated number of guns. As a result, the classification of Royal Navy vessels in this period can be misleading, as they often carried more cannons than were listed. Cannons were crucial in Napoleon's rise to power, and continued to play an important role in his army in later years. During the French Revolution, the unpopularity of the Directory led to riots and rebellions. When over 25,000 royalists led by General Danican assaulted Paris, Paul Barras was appointed to defend the capital; outnumbered five to one and disorganised, the Republicans were desperate. 
When Napoleon arrived, he reorganised the defences but realised that without cannons the city could not be held. He ordered Joachim Murat to bring the guns from the Sablons artillery park; the Major and his cavalry fought their way to the recently captured cannons, and brought them back to Napoleon. When Danican's poorly trained men attacked, on 13 Vendémiaire (5 October 1795; the former being the date in the Republican calendar used in France at the time), Napoleon ordered his cannon to fire grapeshot into the mob, an act that became known as the "whiff of grapeshot". The slaughter effectively ended the threat to the new government, while, at the same time, making Bonaparte a famous—and popular—public figure. Among the first generals to recognise that artillery was not being used to its full potential, Napoleon often massed his cannon into batteries and introduced several changes into the French artillery, improving it significantly and making it among the finest in Europe. Such tactics were successfully used by the French, for example, at the Battle of Friedland, when sixty-six guns fired a total of 3,000 roundshot and 500 rounds of grapeshot, inflicting severe casualties on the Russian forces, whose losses numbered over 20,000 killed and wounded, in total. At the Battle of Waterloo—Napoleon's final battle—the French army had many more artillery pieces than either the British or Prussians. As the battlefield was muddy, recoil caused cannons to bury themselves into the ground after firing, resulting in slow rates of fire, as more effort was required to move them back into an adequate firing position; also, roundshot did not ricochet with as much force from the wet earth. Despite the drawbacks, sustained artillery fire proved deadly during the engagement, especially during the French cavalry attack. The British infantry, having formed infantry squares, took heavy losses from the French guns, while their own cannons fired at the cuirassiers and lancers when they fell back to regroup. Eventually, the French ceased their assault, after taking heavy losses from the British cannon and musket fire. In the 1810s and 1820s, greater emphasis was placed on the accuracy of long-range gunfire, and less on the weight of a broadside. Around 1822, George Marshall wrote Marshall's Practical Marine Gunnery. The book was used by cannon operators in the United States Navy throughout the 19th century. It listed all the types of cannon and instructions for their use. The carronade, although initially very successful and widely adopted, disappeared from the Royal Navy in the 1850s after the development of wrought-iron-jacketed steel cannon by William Armstrong and Joseph Whitworth. Nevertheless, carronades were used in the American Civil War. Western cannons during the 19th century became larger, more destructive, more accurate, and could fire at longer range. One example is the American wrought-iron, muzzle-loading rifle, or Griffen gun (usually called the 3-inch Ordnance Rifle), used during the American Civil War, which had an effective range of over . Another is the smoothbore 12-pounder Napoleon, which originated in France in 1853 and was widely used by both sides in the American Civil War. This cannon was renowned for its sturdiness, reliability, firepower, flexibility, relatively light weight, and range of . The practice of rifling—casting spiralling lines inside the cannon's barrel—was applied to artillery more frequently by 1855, as it gave cannon projectiles gyroscopic stability, which improved their accuracy. 
One of the earliest rifled cannons was the breech-loading Armstrong Gun—also invented by William Armstrong—which boasted significantly better range, accuracy, and power than earlier weapons. The projectile fired from the Armstrong gun could reportedly pierce through a ship's side and explode inside the enemy vessel, causing increased damage and casualties. The British military adopted the Armstrong gun, and was impressed; the Duke of Cambridge even declared that it "could do everything but speak." Despite being significantly more advanced than its predecessors, the Armstrong gun was rejected soon after its integration, in favour of the muzzle-loading pieces that had been in use before. While both types of gun were effective against wooden ships, neither had the capability to pierce the armour of ironclads; due to reports of slight problems with the breeches of the Armstrong gun, and their higher cost, the older muzzle-loaders were selected to remain in service instead. Realising that iron was more difficult to pierce with breech-loaded cannons, Armstrong designed rifled muzzle-loading guns, which proved successful; The Times reported: "even the fondest believers in the invulnerability of our present ironclads were obliged to confess that against such artillery, at such ranges, their plates and sides were almost as penetrable as wooden ships." The superior cannon of the Western world brought Western powers tremendous advantages in warfare. For example, in the First Opium War in China, during the 19th century, British battleships bombarded the coastal areas and fortifications from afar, safe from the reach of the Chinese cannons. Similarly, the shortest war in recorded history, the Anglo-Zanzibar War of 1896, was brought to a swift conclusion by shelling from British cruisers. The cynical attitude towards recruited infantry in the face of ever more powerful field artillery is the source of the term cannon fodder, first used by François-René de Chateaubriand, in 1814; however, the concept of regarding soldiers as nothing more than "food for powder" was mentioned by William Shakespeare as early as 1598, in Henry IV, Part 1. 20th and 21st centuries Cannons in the 20th and 21st centuries are usually divided into sub-categories and given separate names. Some of the most widely used types of modern cannon are howitzers, mortars, guns, and autocannon, although a few very large-calibre cannon, custom-designed, have also been constructed. Nuclear artillery was experimented with, but was abandoned as impractical. Modern artillery is used in a variety of roles, depending on its type. According to NATO, the general role of artillery is to provide fire support, which is defined as "the application of fire, coordinated with the manoeuvre of forces to destroy, neutralize, or suppress the enemy." When referring to cannons, the term gun is often used incorrectly. In military usage, a gun is a cannon with a high muzzle velocity and a flat trajectory, useful for hitting the sides of targets such as walls, as opposed to howitzers or mortars, which have lower muzzle velocities, and fire indirectly, lobbing shells up and over obstacles to hit the target from above. By the early 20th century, infantry weapons had become more powerful, forcing most artillery away from the front lines. Despite the change to indirect fire, cannons proved highly effective during World War I, directly or indirectly causing over 75% of casualties. 
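The distinction drawn above between flat-trajectory guns and indirect-fire howitzers and mortars can be illustrated with idealised, drag-free ballistics. The muzzle velocities and elevation angles in the sketch below are purely illustrative assumptions, not data for any particular weapon, and real trajectories are shortened considerably by air resistance.

```python
# Idealised (vacuum) trajectories: a high-velocity, flat "gun-like" shot versus a
# low-velocity, steeply lobbed "mortar-like" shot. Values are illustrative only.
import math

G = 9.81  # gravitational acceleration, m/s^2

def vacuum_trajectory(muzzle_velocity, elevation_deg):
    """Return (range_m, apex_height_m) for a drag-free projectile."""
    theta = math.radians(elevation_deg)
    rng = muzzle_velocity ** 2 * math.sin(2 * theta) / G
    apex = (muzzle_velocity * math.sin(theta)) ** 2 / (2 * G)
    return rng, apex

for label, velocity, angle in [("gun-like (flat fire)", 800.0, 5.0),
                               ("mortar-like (high angle)", 250.0, 60.0)]:
    rng, apex = vacuum_trajectory(velocity, angle)
    print(f"{label}: range ~{rng / 1000:.1f} km, apex ~{apex:.0f} m")

# The flat, high-velocity shot barely rises above the line of sight, which suits it
# to striking the sides of targets; the steep, slower shot climbs far higher and
# descends steeply, allowing it to reach targets behind walls or other cover.
```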
The onset of trench warfare after the first few months of World War I greatly increased the demand for howitzers, as they were better suited to hitting targets in trenches. Furthermore, their shells carried more explosives than those of guns, and caused considerably less barrel wear. The German army had the advantage here as they began the war with many more howitzers than the French. World War I also saw the use of the Paris Gun, the longest-ranged gun ever fired. This calibre gun was used by the Germans against Paris and could hit targets more than away. The Second World War sparked new developments in cannon technology. Among them were sabot rounds, hollow-charge projectiles, and proximity fuses, all of which increased the effectiveness of cannon against specific targets. The proximity fuse emerged on the battlefields of Europe in late December 1944. Used to great effect in anti-aircraft projectiles, proximity fuses were fielded in both the European and Pacific Theatres of Operations; they were particularly useful against V-1 flying bombs and kamikaze planes. Although the fuses were widely used in naval warfare and in anti-aircraft guns, both the British and Americans feared that unexploded proximity fuses would be reverse engineered, and so limited their use in continental land battles. During the Battle of the Bulge, however, the fuses became known as the American artillery's "Christmas present" for the German army because of their effectiveness against German personnel in the open, when they frequently dispersed attacks. Anti-tank guns were also tremendously improved during the war: in 1939, the British used primarily 2 pounder and 6 pounder guns. By the end of the war, 17 pounders had proven much more effective against German tanks, and 32 pounders had entered development. Meanwhile, German tanks were continuously upgraded with better main guns, in addition to other improvements. For example, the Panzer III was originally designed with a 37 mm gun, but was mass-produced with a 50 mm cannon. To counter the threat of the Russian T-34s, another, more powerful 50 mm gun was introduced, only to give way to a larger 75 mm cannon, which was in a fixed mount as the StuG III, the most-produced German World War II armoured fighting vehicle of any type. Despite the improved guns, production of the Panzer III was ended in 1943, as the tank still could not match the T-34, and was replaced by the Panzer IV and Panther tanks. In 1944, the 8.8 cm KwK 43 and its many variations entered service with the Wehrmacht, and was used as both a tank main gun and as the PaK 43 anti-tank gun. One of the most powerful guns to see service in World War II, it was capable of destroying any Allied tank at very long ranges. Despite being designed to fire at trajectories with a steep angle of descent, howitzers can be fired directly, as was done by the 11th Marine Regiment at the Battle of Chosin Reservoir, during the Korean War. Two field batteries fired directly upon a battalion of Chinese infantry; the Marines were forced to brace themselves against their howitzers, as they had no time to dig them in. The Chinese infantry took heavy casualties, and were forced to retreat. The tendency to create larger calibre cannons during the World Wars has since reversed. The United States Army, for example, sought a lighter, more versatile howitzer to replace their ageing pieces. As it could be towed, the M198 was selected to be the successor to the World War II–era cannons used at the time, and entered service in 1979. 
Still in use today, the M198 is, in turn, being slowly replaced by the M777 Ultralightweight howitzer, which weighs nearly half as much and can be more easily moved. Although land-based artillery such as the M198 are powerful, long-ranged, and accurate, naval guns have not been neglected, despite being much smaller than in the past, and, in some cases, having been replaced by cruise missiles. However, the 's planned armament includes
term midfa, dated to textual sources from 1342 to 1352, did not refer to true hand-guns or bombards, and that contemporary accounts of a metal-barrel cannon in the Islamic world did not occur until 1365. Similarly, Andrade dates the textual appearance of cannons in middle eastern sources to the 1360s. Gabor Ágoston and David Ayalon note that the Mamluks had certainly used siege cannons by 1342 or the 1360s, respectively, but earlier uses of cannons in the Islamic World are vague, with a possible appearance in the Emirate of Granada by the 1320s and 1330s, though evidence is inconclusive. Ibn Khaldun reported the use of cannon as siege machines by the Marinid sultan Abu Yaqub Yusuf at the siege of Sijilmasa in 1274. The passage by Ibn Khaldun on this siege occurs as follows: "[The Sultan] installed siege engines … and gunpowder engines …, which project small balls of iron. These balls are ejected from a chamber … placed in front of a kindling fire of gunpowder; this happens by a strange property which attributes all actions to the power of the Creator." The source is not contemporary and was written a century later, around 1382. Its interpretation has been rejected as anachronistic by some historians, who urge caution regarding claims of Islamic firearms use in the 1204–1324 period; Ágoston and Peter Purton note that in this period late medieval Arabic texts used the same word for gunpowder, naft, that they used for an earlier incendiary, naphtha. Needham believes Ibn Khaldun was speaking of fire lances rather than hand cannon. The Ottoman Empire made good use of cannon as siege artillery. Sixty-eight super-sized bombards were used by Mehmed the Conqueror to capture Constantinople in 1453. Jim Bradbury argues that Urban, a Hungarian cannon engineer, introduced this cannon from Central Europe to the Ottoman realm; according to Paul Hammer, however, it could have been introduced from other Islamic countries which had earlier used cannons. These cannon could fire heavy stone balls a mile, and the sound of their blast could reportedly be heard from a distance of . Shkodëran historian Marin Barleti discusses Turkish bombards at length in his book De obsidione Scodrensi (1504), describing the 1478–79 siege of Shkodra in which eleven bombards and two mortars were employed. The Ottomans also used cannon to control passage of ships through the Bosphorus strait. Ottoman cannons also proved effective at stopping crusaders at Varna in 1444 and Kosovo in 1448, despite the presence of European cannon in the former case. The similar Dardanelles Guns (named for the location) were created by Munir Ali in 1464 and were still in use during the Anglo-Turkish War (1807–09). These were cast in bronze in two parts, the chase (the barrel) and the breech, which combined weighed 18.4 tonnes. The two parts were screwed together using levers to facilitate moving it. Fathullah Shirazi, a Persian inhabitant of India who worked for Akbar in the Mughal Empire, developed a volley gun in the 16th century. Iran While there is evidence of cannons in Iran as early as 1405, they were not widespread. This changed following the increased use of firearms by Shah Ismail I, and the Iranian army used 500 cannons by the 1620s, probably captured from the Ottomans or acquired by allies in Europe. 
By 1443 Iranians were also making some of their own cannon, as Mir Khawand wrote of a 1200 kg metal piece being made by an Iranian rikhtegar which was most likely a cannon.
They make many one-pounder cannon (cetbang or rentaka), long muskets, spingarde (arquebus), schioppi (hand cannon), Greek fire, guns (cannon), and other fire-works. Every place are considered excellent in casting artillery, and in the knowledge of using it. In 1513, the Javanese fleet led by Patih Yunus sailed to attack Portuguese Malacca "with much artillery made in Java, for the Javanese are skilled in founding and casting, and in all works in iron, over and above what they have in India". By early 16th century, the Javanese already locally-producing large guns, some of them still survived until the present day and dubbed as "sacred cannon" or "holy cannon". These cannons varied between 180- and 260-pounders, weighing anywhere between 3 and 8 tons, length of them between . Cannons were used by the Ayutthaya Kingdom in 1352 during its invasion of the Khmer Empire. Within a decade large quantities of gunpowder could be found in the Khmer Empire. By the end of the century firearms were also used by the Trần dynasty. Saltpeter harvesting was recorded by Dutch and German travelers as being common in even the smallest villages and was collected from the decomposition process of large dung hills specifically piled for the purpose. The Dutch punishment for possession of non-permitted gunpowder appears to have been amputation. Ownership and manufacture of gunpowder was later prohibited by the colonial Dutch occupiers. According to colonel McKenzie quoted in Sir Thomas Stamford Raffles', The History of Java (1817), the purest sulfur was supplied from a crater from a mountain near the straits of Bali. Africa In Africa, the Adal Sultanate and the Abyssinian Empire both deployed cannons during the Adal-Abyssinian War. Imported from Arabia, and the wider Islamic world, the Adalites led by Ahmed ibn Ibrahim al-Ghazi were the first African power to introduce cannon warfare to the African continent. Later on as the Portuguese Empire entered the war it would supply and train the Abyssinians with cannons, while the Ottoman Empire sent soldiers and cannon to back Adal. The conflict proved, through their use on both sides, the value of firearms such as the matchlock musket, cannon, and the arquebus over traditional weapons. Offensive and defensive use While previous smaller guns could burn down structures with fire, larger cannons were so effective that engineers were forced to develop stronger castle walls to prevent their keeps from falling. This isn't to say that cannons were only used to batter down walls as fortifications began using cannons as defensive instruments such as an example in India where the fort of Raicher had gun ports built into its walls to accommodate the use of defensive cannons. In Art of War Niccolò Machiavelli opined that field artillery forced an army to take up a defensive posture and this opposed a more ideal offensive stance. Machiavelli's concerns can be seen in the criticisms of Portuguese mortars being used in India during the sixteenth century as lack of mobility was one of the key problems with the design. In Russia the early cannons were again placed in forts as a defensive tool. Cannon were also difficult to move around in certain types of terrain with mountains providing a great obstacle for them, for these reasons offensives conducted with cannons would be difficult to pull off in places such as Iran. 
Early modern period By the 16th century, cannons were made in a great variety of lengths and bore diameters, but the general rule was that the longer the barrel, the longer the range. Some cannons made during this time had barrels exceeding in length, and could weigh up to . Consequently, large amounts of gunpowder were needed to allow them to fire stone balls several hundred yards. By mid-century, European monarchs began to classify cannons to reduce the confusion. Henry II of France opted for six sizes of cannon, but others settled for more; the Spanish used twelve sizes, and the English sixteen. They are, from largest to smallest: the cannon royal, cannon, cannon serpentine, bastard cannon, demicannon, pedrero, culverin, basilisk, demiculverin, bastard culverin, saker, minion, falcon, falconet, serpentine, and rabinet. Better powder had been developed by this time as well. Instead of the finely ground powder used by the first bombards, powder was replaced by a "corned" variety of coarse grains. This coarse powder had pockets of air between grains, allowing fire to travel through and ignite the entire charge quickly and uniformly. The end of the Middle Ages saw the construction of larger, more powerful cannon, as well as their spread throughout the world. As they were not effective at breaching the newer fortifications resulting from the development of cannon, siege engines—such as siege towers and trebuchets—became less widely used. However, wooden "battery-towers" took on a similar role as siege towers in the gunpowder age—such as that used at Siege of Kazan in 1552, which could hold ten large-calibre cannon, in addition to 50 lighter pieces. Another notable effect of cannon on warfare during this period was the change in conventional fortifications. Niccolò Machiavelli wrote, "There is no wall, whatever its thickness that artillery will not destroy in only a few days." Although castles were not immediately made obsolete by cannon, their use and importance on the battlefield rapidly declined. Instead of majestic towers and merlons, the walls of new fortresses were thick, angled, and sloped, while towers became low and stout; increasing use was also made of earth and brick in breastworks and redoubts. These new defences became known as bastion forts, after their characteristic shape which attempted to force any advance towards it directly into the firing line of the guns. A few of these featured cannon batteries, such as the House of Tudor's Device Forts, in England. Bastion forts soon replaced castles in Europe, and, eventually, those in the Americas, as well. By the end of the 15th century, several technological advancements made cannons more mobile. Wheeled gun carriages and trunnions became common, and the invention of the limber further facilitated transportation. As a result, field artillery became more viable, and began to see more widespread use, often alongside the larger cannons intended for sieges. Better gunpowder, cast-iron projectiles (replacing stone), and the standardisation of calibres meant that even relatively light cannons could be deadly. In The Art of War, Niccolò Machiavelli observed that "It is true that the arquebuses and the small artillery do much more harm than the heavy artillery." This was the case at the Battle of Flodden, in 1513: the English field guns outfired the Scottish siege artillery, firing two or three times as many rounds. 
Despite the increased maneuverability, however, cannon were still the slowest component of the army: a heavy English cannon required 23 horses to transport, while a culverin needed nine. Even with this many animals pulling, they still moved at a walking pace. Due to their relatively slow speed, lack of organisation, and undeveloped tactics, the combination of pike and shot still dominated the battlefields of Europe. Innovations continued, notably the German invention of the mortar, a thick-walled, short-barrelled gun that blasted shot upward at a steep angle. Mortars were useful for sieges, as they could hit targets behind walls or other defences. This cannon found more use with the Dutch, who learnt to shoot bombs filled with powder from it. Setting the bomb fuse was a problem. "Single firing" was first used to ignite the fuse, where the bomb was placed with the fuse down against the cannon's propellant. This often resulted in the fuse being blown into the bomb, causing it to blow up as it left the mortar. Because of this, "double firing" was tried, where the gunner lit the fuse and then the touch hole. This, however, required considerable skill and timing, and was especially dangerous if the gun misfired, leaving a lighted bomb in the barrel. Not until 1650 was it accidentally discovered that double-lighting was superfluous, as the heat of firing would light the fuse. Gustavus Adolphus of Sweden emphasised the use of light cannon and mobility in his army, and created new formations and tactics that revolutionised artillery. He discontinued using all 12 pounder—or heavier—cannon as field artillery, preferring, instead, to use cannons that could be handled by only a few men. One obsolete type of gun, the "leatheren", was replaced by 4 pounder and 9 pounder demi-culverins. These could be operated by three men, and pulled by only two horses. Gustavus Adolphus's army was also the first to use a cartridge that contained both powder and shot, which sped up reloading and increased the rate of fire. Finally, against infantry he pioneered the use of canister shot—essentially a tin can filled with musket balls. Until then there was no more than one cannon for every thousand infantrymen on the battlefield, but Gustavus Adolphus increased the number of cannons sixfold. Each regiment was assigned two pieces, though he often arranged them into batteries instead of distributing them piecemeal. He used these batteries to break his opponent's infantry line, while his cavalry would outflank their heavy guns. At the Battle of Breitenfeld, in 1631, Adolphus proved the effectiveness of the changes made to his army by defeating Johann Tserclaes, Count of Tilly. Although severely outnumbered, the Swedes were able to fire between three and five times as many volleys of artillery, and their infantry's linear formations helped ensure they did not lose any ground. Battered by cannon fire, and low on morale, Tilly's men broke ranks and fled. In England cannons were being used to besiege various fortified buildings during the English Civil War. Nathaniel Nye is recorded as testing a Birmingham cannon in 1643 and experimenting with a saker in 1645. From 1645 he was the master gunner to the Parliamentarian garrison at Evesham, and in 1646 he successfully directed the artillery at the Siege of Worcester, detailing his experiences in his 1647 book The Art of Gunnery. 
Believing that war was as much a science as an art, his explanations focused on triangulation, arithmetic, theoretical mathematics, and cartography, as well as practical considerations such as the ideal specification for gunpowder or slow matches. His book acknowledged mathematicians such as Robert Recorde and Marcus Jordanus, as well as earlier military writers on artillery such as Niccolò Fontana Tartaglia and Thomas (or Francis) Malthus (author of A Treatise on Artificial Fire-Works). Around this time also came the idea of aiming the cannon to hit a target. Gunners controlled the range of their cannons by measuring the angle of elevation, using a "gunner's quadrant." Cannons did not have sights; therefore, even with measuring tools, aiming was still largely guesswork. In the latter half of the 17th century, the French engineer Sébastien Le Prestre de Vauban introduced a more systematic and scientific approach to attacking gunpowder fortresses, in a time when many field commanders "were notorious dunces in siegecraft." Careful sapping forward, supported by enfilading ricochets, was a key feature of this system, and it even allowed Vauban to calculate the length of time a siege would take. He was also a prolific builder of bastion forts, and did much to popularize the idea of "depth in defence" in the face of cannon. These principles were followed into the mid-19th century, when changes in armaments necessitated greater depth of defence than Vauban had provided for. It was only in the years prior to World War I that new works began to break radically away from his designs. 18th and 19th centuries The lower tier of 17th-century English ships of the line was usually equipped with demi-cannons, guns that fired a solid shot, and could weigh up to . Demi-cannons were capable of firing these heavy metal balls with such force that they could penetrate more than a metre of solid oak, from a distance of , and could dismast even the largest ships at close range. Full cannon fired a shot, but were discontinued by the 18th century, as they were too unwieldy. By the end of the 18th century, principles long adopted in Europe specified the characteristics of the Royal Navy's cannon, as well as the acceptable defects, and their severity. The United States Navy tested guns by measuring them, firing them two or three times—termed "proof by powder"—and using pressurized water to detect leaks. The carronade was adopted by the Royal Navy in 1779; the lower muzzle velocity of the round shot when fired from this cannon was intended to create more wooden splinters when hitting the structure of an enemy vessel, as they were believed to be more deadly than the ball by itself. The carronade was much shorter, and weighed between a third and a quarter as much as the equivalent long gun; for example, a 32-pounder carronade weighed less than a ton, compared with a 32-pounder long gun, which weighed over 3 tons. The guns were, therefore, easier to handle, and also required less than half as much gunpowder, allowing fewer men to crew them. Carronades were manufactured in the usual naval gun calibres, but were not counted in a ship of the line's rated number of guns. As a result, the classification of Royal Navy vessels in this period can be misleading, as they often carried more cannons than were listed. Cannons were crucial in Napoleon's rise to power, and continued to play an important role in his army in later years. During the French Revolution, the unpopularity of the Directory led to riots and rebellions. 
When over 25,000 royalists led by General Danican assaulted Paris, Paul Barras was appointed to defend the capital; outnumbered five to one and disorganised, the Republicans were desperate. When Napoleon arrived, he reorganised the defences but realised that without cannons the city could not be held. He ordered Joachim Murat to bring the guns from the Sablons artillery park; the major and his cavalry fought their way to the recently captured cannons, and brought them back to Napoleon. When Danican's poorly trained men attacked, on 13 Vendémiaire in the calendar used in France at the time (5 October 1795), Napoleon ordered his cannon to fire grapeshot into the mob, an act that became known as the "whiff of grapeshot". The slaughter effectively ended the threat to the new government, while, at the same time, making Bonaparte a famous—and popular—public figure. Among the first generals to recognise that artillery was not being used to its full potential, Napoleon often massed his cannon into batteries and introduced several changes into the French artillery, improving it significantly and making it among the finest in Europe. Such tactics were successfully used by the French, for example, at the Battle of Friedland, when sixty-six guns fired a total of 3,000 roundshot and 500 rounds of grapeshot, inflicting severe casualties on the Russian forces, whose losses numbered over 20,000 killed and wounded in total. At the Battle of Waterloo—Napoleon's final battle—the French army had many more artillery pieces than either the British or Prussians. As the battlefield was muddy, recoil caused cannons to bury themselves into the ground after firing, resulting in slow rates of fire, as more effort was required to move them back into an adequate firing position; also, roundshot did not ricochet with as much force from the wet earth. Despite the drawbacks, sustained artillery fire proved deadly during the engagement, especially during the French cavalry attack. The British infantry, having formed infantry squares, took heavy losses from the French guns, while their own cannons fired at the cuirassiers and lancers when they fell back to regroup. Eventually, the French ceased their assault, after taking heavy losses from the British cannon and musket fire. In the 1810s and 1820s, greater emphasis was placed on the accuracy of long-range gunfire, and less on the weight of a broadside. Around 1822, George Marshall wrote Marshall's Practical Marine Gunnery. The book was used by cannon operators in the United States Navy throughout the 19th century. It listed all the types of cannons and instructions. The carronade, although initially very successful and widely adopted, disappeared from the Royal Navy in the 1850s after the development of wrought-iron-jacketed steel cannon by William Armstrong and Joseph Whitworth. Nevertheless, carronades were used in the American Civil War. Western cannons during the 19th century became larger, more destructive, more accurate, and could fire at longer range. One example is the American wrought-iron, muzzle-loading rifle, or Griffen gun (usually called the 3-inch Ordnance Rifle), used during the American Civil War, which had an effective range of over . Another is the smoothbore 12-pounder Napoleon, which originated in France in 1853 and was widely used by both sides in the American Civil War. This cannon was renowned for its sturdiness, reliability, firepower, flexibility, relatively light weight, and range of . 
The practice of rifling—casting spiralling lines inside the cannon's barrel—was applied to artillery more frequently by 1855, as it gave cannon projectiles gyroscopic stability, which improved their accuracy. One of the earliest rifled cannons was the breech-loading Armstrong Gun—also invented by William Armstrong—which boasted significantly improved range, accuracy, and power compared with earlier weapons. The projectile fired from the Armstrong gun could reportedly pierce through a ship's side and explode inside the enemy vessel, causing increased damage and casualties. The British military adopted the Armstrong gun, and was impressed; the Duke of Cambridge even declared that it "could do everything but speak." Despite being significantly more advanced than its predecessors, the Armstrong gun was rejected soon after its integration, in favour of the muzzle-loading pieces that had been in use before. While both types of gun were effective against wooden ships, neither had the capability to pierce the armour of ironclads; due to reports of slight problems with the breeches of the Armstrong gun, and their higher cost, the older muzzle-loaders were selected to remain in service instead. Realising that iron was more difficult to pierce with breech-loaded cannons, Armstrong designed rifled muzzle-loading guns, which proved successful; The Times reported: "even the fondest believers in the invulnerability of our present ironclads were obliged to confess that against such artillery, at such ranges, their plates and sides were almost as penetrable as wooden ships." The superior cannon of the Western world brought them tremendous advantages in warfare. For example, in the First Opium War in China, during the 19th century, British battleships bombarded the coastal areas and fortifications from afar, safe from the reach of the Chinese cannons. Similarly, the shortest war in recorded history, the Anglo-Zanzibar War of 1896, was brought to a swift conclusion by shelling from British cruisers. The cynical attitude towards recruited infantry in the face of ever more powerful field artillery is the source of the term cannon fodder, first used by François-René de Chateaubriand, in 1814; however, the concept of regarding soldiers as nothing more than "food for powder" was mentioned by William Shakespeare as early as 1598, in Henry IV, Part 1. 20th and 21st centuries Cannons in the 20th and 21st centuries are usually divided into sub-categories and given separate names. Some of the most widely used types of modern cannon are howitzers, mortars, guns, and autocannon, although a few very large-calibre cannon, custom-designed, have also been constructed. Nuclear artillery was experimented with, but was abandoned as impractical. Modern artillery is used in a variety of roles, depending on its type. According to NATO, the general role of artillery is to provide fire support, which is defined as "the application of fire, coordinated with the manoeuvre of forces to destroy, neutralize, or suppress the enemy." When referring to cannons, the term gun is often used incorrectly. In military usage, a gun is a cannon with a high muzzle velocity and a flat trajectory, useful for hitting the sides of targets such as walls, as opposed to howitzers or mortars, which have lower muzzle velocities, and fire indirectly, lobbing shells up and over obstacles to hit the target from above. By the early 20th century, infantry weapons had become more powerful, forcing most artillery away from the front lines. 
Despite the change to indirect fire, cannons proved highly effective during World War I, directly or indirectly causing over 75% of casualties. The onset of trench warfare after the first few months of World War I greatly increased the demand for howitzers, as they were better suited to hitting targets in trenches. Furthermore, their shells carried more explosives than those of guns, and caused considerably less barrel wear. The German army had the advantage here, as they began the war with many more howitzers than the French. World War I also saw the use of the Paris Gun, the longest-ranged gun ever fired. The gun was used by the Germans against Paris and could hit targets more than away. The Second World War sparked new developments in cannon technology. Among them were sabot rounds, hollow-charge projectiles, and proximity fuses, all of which increased the effectiveness of cannon against specific targets. The proximity fuse emerged on the battlefields of Europe in late December 1944. Used to great effect in anti-aircraft projectiles, proximity fuses were fielded in both the European and Pacific Theatres of Operations; they were particularly useful against V-1 flying bombs and kamikaze planes. Although widely used in naval warfare and in anti-aircraft guns, both the British and Americans feared that unexploded proximity fuses would be reverse engineered, leading them to limit their use in continental battles. During the Battle of the Bulge, however, the fuses became known as the American artillery's "Christmas present" for the German army because of their effectiveness against German personnel in the open, where they frequently dispersed attacks. Anti-tank guns were also tremendously improved during the war: in 1939, the British used primarily 2 pounder and 6 pounder guns. By the end of the war, 17 pounders had proven much more effective against German tanks, and 32 pounders had entered development. Meanwhile, German tanks were continuously upgraded with better main guns, in addition to other improvements. For example, the Panzer III was originally designed with a 37 mm gun, but was mass-produced with a 50 mm cannon. To counter the threat of the Russian T-34s, another, more powerful 50 mm gun was introduced, only to give way to a larger 75 mm cannon carried in a fixed mount as the StuG III, the most-produced German World War II armoured fighting vehicle of any type. Despite the improved guns, production of the Panzer III was ended in
developed by the company since 1966 in what had been a parallel and independent discovery. As the name suggests and unlike Engelbart's mouse, the Telefunken model already had a ball (diameter 40 mm, weight 40 g) and two mechanical 4-bit rotational position transducers with Gray code-like states, allowing easy movement in any direction. The bits remained stable for at least two successive states to relax debouncing requirements. This arrangement was chosen so that the data could also be transmitted to the TR 86 front-end process computer and over longer-distance telex lines at c. 50 baud. Weighing 465 g, the device, with a total height of about 7 cm, came in a c. 12 cm diameter hemispherical injection-molded thermoplastic casing featuring one central push button. As noted above, the device was based on an earlier trackball-like device that was embedded into radar flight control desks. This trackball had originally been developed by a team led by Mallebrein at Telefunken for the German Federal Air Traffic Control. It was part of the corresponding work station system SAP 300 and the terminal SIG 3001, which had been designed and developed since 1963. Development for the TR 440 main frame began in 1965. This led to the development of the TR 86 process computer system with its SIG 100-86 terminal. Inspired by a discussion with a university customer, Mallebrein came up with the idea of "reversing" the existing trackball into a moveable mouse-like device in 1966, so that customers did not have to be bothered with mounting holes for the earlier trackball device. The device was finished in early 1968, and together with light pens and trackballs, it was commercially offered as an optional input device for their system starting later that year. Not all customers opted to buy the device, which added additional cost per piece to the already up to 20-million DM deal for the main frame, of which only a total of 46 systems were sold or leased. They were installed at more than 20 German universities including RWTH Aachen, Technical University Berlin, University of Stuttgart and Konstanz. Several mice installed at the Leibniz Supercomputing Centre in Munich in 1972 are well preserved in a museum; two others survived in a museum at Stuttgart University, two in Hamburg, the one from Aachen is at the Computer History Museum in the US, and yet another sample was recently donated to the Heinz Nixdorf MuseumsForum (HNF) in Paderborn. Telefunken attempted to patent the device, but the German patent office, without considering the novelty of the construction's application, rejected it on the grounds that the required level of inventiveness was too low. For the air traffic control system, the Mallebrein team had already developed a precursor to touch screens in the form of an ultrasonic-curtain-based pointing device in front of the display. In 1970, they developed a "touch input facility" based on a conductively coated glass screen. The Xerox Alto was one of the first computers designed for individual use in 1973 and is regarded as the first modern computer to utilize a mouse. Inspired by PARC's Alto, the Lilith, a computer which had been developed by a team around Niklaus Wirth at ETH Zürich between 1978 and 1980, provided a mouse as well. The third marketed version of an integrated mouse shipped as a part of a computer and intended for personal computer navigation came with the Xerox 8010 Star in 1981. By 1982, the Xerox 8010 was probably the best-known computer with a mouse. 
The Sun-1 also came with a mouse, and the forthcoming Apple Lisa was rumored to use one, but the peripheral remained obscure; Jack Hawley of The Mouse House reported that one buyer for a large organization believed at first that his company sold lab mice. Hawley, who manufactured mice for Xerox, stated that "Practically, I have the market all to myself right now"; a Hawley mouse cost $415. In 1982, Logitech introduced its first hardware mouse, the P4 Mouse, at the Comdex trade show in Las Vegas. That same year Microsoft made the decision to make the MS-DOS program Microsoft Word mouse-compatible, and developed the first PC-compatible mouse. Microsoft's mouse shipped in 1983, thus beginning the Microsoft Hardware division of the company. However, the mouse remained relatively obscure until the appearance of the Macintosh 128K (which included an updated version of the single-button Lisa Mouse) in 1984, and of the Amiga 1000 and the Atari ST in 1985. Operation A mouse typically controls the motion of a pointer in two dimensions in a graphical user interface (GUI). The mouse turns movements of the hand backward and forward, left and right into equivalent electronic signals that in turn are used to move the pointer. The relative movements of the mouse on the surface are applied to the position of the pointer on the screen, which signals the point where actions of the user take place, so hand movements are replicated by the pointer. Clicking or pointing (stopping movement while the cursor is within the bounds of an area) can select files, programs or actions from a list of names, or (in graphical interfaces) through small images called "icons" and other elements. For example, a text file might be represented by a picture of a paper notebook, and clicking while the cursor points at this icon might cause a text editing program to open the file in a window. Different ways of operating the mouse cause specific things to happen in the GUI: Point: stop the motion of the pointer while it is inside the boundaries of what the user wants to interact with. This act of pointing is what the "pointer" and "pointing device" are named after. In web design lingo, pointing is referred to as "hovering." This usage spread to web programming and Android programming, and is now found in many contexts. Click: pressing and releasing a button. (left) Single-click: clicking the main button. (left) Double-click: clicking the button two times in quick succession counts as a different gesture than two separate single clicks. (left) Triple-click: clicking the button three times in quick succession counts as a different gesture than three separate single clicks. Triple clicks are far less common in traditional navigation. Right-click: clicking the secondary button. In modern applications, this frequently opens a context menu. Middle-click: clicking the tertiary button. Drag: pressing and holding a button, and moving the mouse before releasing the button. This is frequently used to move or copy files or other objects via drag and drop; other uses include selecting text and drawing in graphics applications. Mouse button chording or chord clicking: Clicking with more than one button simultaneously. Clicking while simultaneously typing a letter on the keyboard. Clicking and rolling the mouse wheel simultaneously. Clicking while holding down a modifier key. 
Moving the pointer a long distance: When a practical limit of mouse movement is reached, one lifts up the mouse, brings it to the opposite edge of the working area while it is held above the surface, and then lowers it back onto the working surface. This is often not necessary, because acceleration software detects fast movement, and moves the pointer significantly faster in proportion than for slow mouse motion. Multi-touch: this method is similar to a multi-touch touchpad on a laptop with support for tap input for multiple fingers, the most famous example being the Apple Magic Mouse. Gestures Users can also employ mice gesturally, meaning that a stylized motion of the mouse cursor itself, called a "gesture", can issue a command or map to a specific action. For example, in a drawing program, moving the mouse in a rapid "x" motion over a shape might delete the shape. Gestural interfaces occur more rarely than plain pointing-and-clicking, and people often find them more difficult to use, because they require finer motor control from the user. However, a few gestural conventions have become widespread, including the drag and drop gesture, in which: The user presses the mouse button while the mouse cursor points at an interface object The user moves the cursor to a different location while holding the button down The user releases the mouse button For example, a user might drag-and-drop a picture representing a file onto a picture of a trash can, thus instructing the system to delete the file. Standard semantic gestures include: Crossing-based goal Drag and drop Menu traversal Pointing Mouseover (pointing or hovering) Selection Specific uses Other uses of the mouse's input occur commonly in special application domains. In interactive three-dimensional graphics, the mouse's motion often translates directly into changes in the virtual objects' or camera's orientation. For example, in the first-person shooter genre of games (see below), players usually employ the mouse to control the direction in which the virtual player's "head" faces: moving the mouse up will cause the player to look up, revealing the view above the player's head. A related function makes an image of an object rotate so that all sides can be examined. 3D design and animation software often modally chord many different combinations to allow objects and cameras to be rotated and moved through space with the few axes of movement mice can detect. When mice have more than one button, the software may assign different functions to each button. Often, the primary (leftmost in a right-handed configuration) button on the mouse will select items, and the secondary (rightmost in a right-handed configuration) button will bring up a menu of alternative actions applicable to that item. For example, on platforms with more than one button, the Mozilla web browser will follow a link in response to a primary button click, will bring up a contextual menu of alternative actions for that link in response to a secondary-button click, and will often open the link in a new tab or window in response to a click with the tertiary (middle) mouse button. Types Mechanical mice The German company Telefunken published details of their early ball mouse on 2 October 1968. Telefunken's mouse was sold as optional equipment for their computer systems. Bill English, builder of Engelbart's original mouse, created a ball mouse in 1972 while working for Xerox PARC. The ball mouse replaced the external wheels with a single ball that could rotate in any direction. 
It came as part of the hardware package of the Xerox Alto computer. Perpendicular chopper wheels housed inside the mouse's body chopped beams of light on the way to light sensors, thus detecting in their turn the motion of the ball. This variant of the mouse resembled an inverted trackball and became the predominant form used with personal computers throughout the 1980s and 1990s. The Xerox PARC group also settled on the modern technique of using both hands to type on a full-size keyboard and grabbing the mouse when required. The ball mouse has two freely rotating rollers. These are located 90 degrees apart. One roller detects the forward-backward motion of the mouse and the other the left-right motion. Opposite the two rollers is a third one, at 45 degrees, that is spring-loaded to push the ball against the other two rollers. Each roller is on the same shaft as an encoder wheel that has slotted edges; the slots interrupt infrared light beams to generate electrical pulses that represent wheel movement. Each wheel's disc has a pair of light beams, located so that a given beam becomes interrupted or again starts to pass light freely when the other beam of the pair is about halfway between changes. Simple logic circuits interpret the relative timing to indicate which direction the wheel is rotating. This incremental rotary encoder scheme is sometimes called quadrature encoding of the wheel rotation, as the two optical sensors produce signals that are in approximately quadrature phase. The mouse sends these signals to the computer system via the mouse cable, directly as logic signals in very old mice such as the Xerox mice, or via a data-formatting IC in modern mice. The driver software in the system converts the signals into motion of the mouse cursor along X and Y axes on the computer screen. The ball is mostly steel, with a precision spherical rubber surface. The weight of the ball, given an appropriate working surface under the mouse, provides a reliable grip, so the mouse's movement is transmitted accurately. Ball mice and wheel mice were manufactured for Xerox by Jack Hawley, doing business as The Mouse House in Berkeley, California, starting in 1975. Based on another invention by Jack Hawley, proprietor of the Mouse House, Honeywell produced another type of mechanical mouse. Instead of a ball, it had two wheels rotating at off axes. Key Tronic later produced a similar product. Modern computer mice took form at the École Polytechnique Fédérale de Lausanne (EPFL) under the inspiration of Professor Jean-Daniel Nicoud and at the hands of engineer and watchmaker André Guignard. This new design incorporated a single hard rubber mouseball and three buttons, and remained a common design until the mainstream adoption of the scroll-wheel mouse during the 1990s. In 1985, René Sommer added a microprocessor to Nicoud's and Guignard's design. Through this innovation, Sommer is credited with inventing a significant component of the mouse, which made it more "intelligent"; though optical mice from Mouse Systems had incorporated microprocessors by 1984. Another type of mechanical mouse, the "analog mouse" (now generally regarded as obsolete), uses potentiometers rather than encoder wheels, and is typically designed to be plug compatible with an analog joystick. The "Color Mouse", originally marketed by RadioShack for their Color Computer (but also usable on MS-DOS machines equipped with analog joystick ports, provided the software accepted joystick input) was the best-known example. 
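The incremental (quadrature) encoding described above can be sketched in software. The following Python snippet is a minimal illustration rather than the logic of any particular mouse: the state ordering chosen for "forward" rotation and the function names are assumptions made for this example, and real ball mice perform the equivalent work in simple logic circuits or in the data-formatting IC.

# Minimal sketch of quadrature decoding for one encoder wheel (an assumption-laden
# illustration, not real mouse firmware). Two sensors, A and B, are roughly 90
# degrees out of phase; the order in which their beams change reveals direction.
# The state sequence 00 -> 01 -> 11 -> 10 -> 00 is treated as "forward" here.

# Map (previous A/B state, current A/B state) -> movement in counts.
_TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode_quadrature(samples):
    """Turn a sequence of (A, B) sensor readings into a signed count.

    samples is an iterable of (a, b) tuples with a, b in {0, 1}.
    Invalid transitions (both bits changing at once) are ignored, which is
    also how simple hardware decoders typically behave.
    """
    count = 0
    prev = None
    for a, b in samples:
        state = (a << 1) | b
        if prev is not None and prev != state:
            count += _TRANSITIONS.get((prev, state), 0)
        prev = state
    return count

# Example: one full forward cycle followed by one backward step gives a net count of 3.
print(decode_quadrature([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0), (1, 0)]))

Feeding such a decoder the samples produced by one wheel yields a signed count that driver software can then scale into cursor movement along the corresponding axis.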
Optical and laser mice Early optical mice relied entirely on one or more light-emitting diodes (LEDs) and an imaging array of photodiodes to detect movement relative to the underlying surface, eschewing the internal moving parts a mechanical mouse uses in addition to its optics. A laser mouse is an optical mouse that uses coherent (laser) light. The earliest optical mice detected movement on pre-printed mousepad surfaces, whereas the modern LED optical mouse works on most opaque diffuse surfaces; it is usually unable to detect movement on specular surfaces like polished stone. Laser diodes provide good resolution and precision, improving performance on opaque specular surfaces. Later, more surface-independent optical mice use an optoelectronic sensor (essentially, a tiny low-resolution video camera) to take successive images of the surface on which the mouse operates. Battery powered, wireless optical
mice flash the LED intermittently to save power, and only glow steadily when movement is detected. Inertial and gyroscopic mice Often called "air mice" since they do not require a surface to operate, inertial mice use a tuning fork or other accelerometer (US Patent 4787051) to detect rotary movement for every axis supported. The most common models (manufactured by Logitech and Gyration) work using 2 degrees of rotational freedom and are insensitive to spatial translation. The user requires only small wrist rotations to move the cursor, reducing user fatigue or "gorilla arm". Usually cordless, they often have a switch to deactivate the movement circuitry between use, allowing the user freedom of movement without affecting the cursor position. A patent for an inertial mouse claims that such mice consume less power than optically based mice, and offer increased sensitivity, reduced weight and increased ease-of-use. In combination with a wireless keyboard an inertial mouse can offer alternative ergonomic arrangements which do not require a flat work surface, potentially alleviating some types of repetitive motion injuries related to workstation posture. 3D mice Also known as bats, flying mice, or wands, these devices generally function through ultrasound and provide at least three degrees of freedom. Probably the best known example would be 3Dconnexion ("Logitech's SpaceMouse") from the early 1990s. In the late 1990s Kantek introduced the 3D RingMouse. This wireless mouse was worn on a ring around a finger, which enabled the thumb to access three buttons. The mouse was tracked in three dimensions by a base station. Despite a certain appeal, it was finally discontinued because it did not provide sufficient resolution. One example of a 2000s consumer 3D pointing device is the Wii Remote. 
While primarily a motion-sensing device (that is, it can determine its orientation and direction of movement), the Wii Remote can also detect its spatial position by comparing the distance and position of the lights from the IR emitter using its integrated IR camera (since the nunchuk accessory lacks a camera, it can only tell its current heading and orientation). The obvious drawback to this approach is that it can only produce spatial coordinates while its camera can see the sensor bar. More accurate consumer devices have since been released, including the PlayStation Move, the Razer Hydra, and the controllers that are part of the HTC Vive virtual reality system. All of these devices can accurately detect position and orientation in 3D space regardless of angle relative to the sensor station. A mouse-related controller called the SpaceBall has a ball placed above the work surface that can easily be gripped. With spring-loaded centering, it sends both translational as well as angular displacements on all six axes, in both directions for each. In November 2010 a German company called Axsotic introduced a new concept of 3D mouse called the 3D Spheric Mouse. This new concept of a true six-degree-of-freedom input device uses a ball to rotate in 3 axes and an elastic-polymer-anchored, tetrahedron-inspired suspension for translating the ball without any limitations. A contactless sensor design uses a magnetic sensor array for sensing three axes of translation and two optical mouse sensors for three axes of rotation. The special tetrahedron suspension allows a user to rotate the ball with the fingers while inputting translations with hand-and-wrist motion. Tactile mice In 2000, Logitech introduced a "tactile mouse" known as the "iFeel Mouse", developed by Immersion Corporation, that contained a small actuator to enable the mouse to generate simulated physical sensations. Such a mouse can augment user-interfaces with haptic feedback, such as giving feedback when crossing a window boundary. Surfing the internet with a touch-enabled mouse was first developed in 1996 and first implemented commercially in the Wingman Force Feedback Mouse. It requires the user to be able to feel depth or hardness; this ability was realized with the first electrorheological tactile mice but never marketed. Pucks Tablet digitizers are sometimes used with accessories called pucks, devices which rely on absolute positioning, but can be configured for sufficiently mouse-like relative tracking that they are sometimes marketed as mice. Ergonomic mice As the name suggests, this type of mouse is intended to provide optimum comfort and avoid injuries such as carpal tunnel syndrome, arthritis, and other repetitive strain injuries. It is designed to fit natural hand position and movements, to reduce discomfort. When holding a typical mouse, the ulna and radius bones on the arm are crossed. Some designs attempt to place the palm more vertically, so the bones take a more natural parallel position. Some limit wrist movement, encouraging arm movement instead, which may be less precise but more optimal from a health point of view. A mouse may be angled from the thumb downward to the opposite side – this is known to reduce wrist pronation. However, such optimizations make the mouse right- or left-hand specific, making it more problematic to change to the other hand when one is tired. Time has criticized manufacturers for offering few or no left-handed ergonomic mice: "Oftentimes I felt like I was dealing with someone who'd never actually met a left-handed person before." 
Another solution is a pointing bar device. The so-called roller bar mouse is positioned snugly in front of the keyboard, thus allowing bi-manual accessibility. Gaming mice These mice are specifically designed for use in computer games. They typically employ a wider array of controls and buttons and have designs that differ radically from traditional mice. They may also have decorative monochrome or programmable RGB LED lighting. The additional buttons can often be used for changing the sensitivity of the mouse or they can be assigned (programmed) to macros (i.e., for opening a program or for use instead of a key combination). It is also common for game mice, especially those designed for use in real-time strategy games such as StarCraft, or in multiplayer online battle arena games such as Dota 2, to have a relatively high sensitivity, measured in dots per inch (DPI), which can be as high as 25,600. Some advanced mice from gaming manufacturers also allow users to adjust the weight of the mouse by adding or subtracting weights to allow for easier control. Ergonomic quality is also an important factor in gaming mice, as extended gameplay times may render further use of the mouse uncomfortable. Some mice have been designed to have adjustable features such as removable and/or elongated palm rests, horizontally adjustable thumb rests and pinky rests. Some mice may include several different rests with their products to ensure comfort for a wider range of target consumers. Gaming mice are held by gamers in three styles of grip: Palm Grip: the hand rests on the mouse, with extended fingers. Claw Grip: palm rests on the mouse, bent fingers. Finger-Tip Grip: bent fingers, palm does not touch the mouse. Connectivity and communication protocols To transmit their input, typical cabled mice use a thin electrical cord terminating in a standard connector, such as RS-232C, PS/2, ADB, or USB. Cordless mice instead transmit data via infrared radiation (see IrDA) or radio (including Bluetooth), although many such cordless interfaces are themselves connected through the aforementioned wired serial buses. While the electrical interface and the format of the data transmitted by commonly available mice are currently standardized on USB, in the past they varied between different manufacturers. A bus mouse used a dedicated interface card for connection to an IBM PC or compatible computer. Mouse use in DOS applications became more common after the introduction of the Microsoft Mouse, largely because Microsoft provided an open standard for communication between applications and mouse driver software. Thus, any application written to use the Microsoft standard could use a mouse with a driver that implements the same API, even if the mouse hardware itself was incompatible with Microsoft's. This driver provides the state of the buttons and the distance the mouse has moved in units that its documentation calls "mickeys". Early mice In the 1970s, the Xerox Alto mouse, and in the 1980s the Xerox optical mouse, used a quadrature-encoded X and Y interface. This two-bit encoding per dimension had the property that only one bit of the two would change at a time, like a Gray code or Johnson counter, so that the transitions would not be misinterpreted when asynchronously sampled. The earliest mass-market mice, such as the original Macintosh, Amiga, and Atari ST mice, used a D-subminiature 9-pin connector to send the quadrature-encoded X and Y axis signals directly, plus one pin per mouse button. 
The mouse was a simple optomechanical device, and the decoding circuitry was all in the main computer. The DE-9 connectors were designed to be electrically compatible with the joysticks popular on numerous 8-bit systems, such as the Commodore 64 and the Atari 2600. Although the ports could be used for both purposes, the signals must be interpreted differently. As a result, plugging a mouse into a joystick port causes the "joystick" to continuously move in some direction, even if the mouse stays still, whereas plugging a joystick into a mouse port causes the "mouse" to only be able to move a single pixel in each direction. Serial interface and protocol Because the IBM PC did not have a quadrature decoder built in, early PC mice used the RS-232C serial port to communicate encoded mouse movements, as well as provide power to the mouse's circuits. The Mouse Systems Corporation version used a five-byte protocol and supported three buttons. The Microsoft version used a three-byte protocol and supported two buttons. Due to the incompatibility between the two protocols, some manufacturers sold serial mice with a mode switch: "PC" for MSC mode, "MS" for Microsoft mode. Apple Desktop Bus In 1986 Apple first implemented the Apple Desktop Bus, allowing the daisy chaining of up to 16 devices, including mice and other devices, on the same bus with no configuration whatsoever. Featuring only a single data pin, the bus used a purely polled approach to device communications and survived as the standard on mainstream models (including a number of non-Apple workstations) until 1998, when Apple's iMac line of computers joined the industry-wide switch to using USB. Beginning with the Bronze Keyboard PowerBook G3 in May 1999, Apple dropped the external ADB port in favor of USB, but retained an internal ADB connection in the PowerBook G4 for communication with its built-in keyboard and trackpad until early 2005. PS/2 interface and protocol With the arrival of the IBM PS/2 personal-computer series in 1987, IBM introduced the eponymous PS/2 port for mice and keyboards, which other manufacturers rapidly adopted. The most visible change was the use of a round 6-pin mini-DIN, in lieu of the former 5-pin MIDI style full sized DIN 41524 connector. In default mode (called stream mode) a PS/2 mouse communicates motion, and the state of each button, by means of 3-byte packets. For any motion, button press or button release event, a PS/2 mouse sends, over a bi-directional serial port, a sequence of three bytes. In these packets, XS and YS represent the sign bits of the movement vectors, XV and YV indicate an overflow in the respective vector component, and LB, MB and RB indicate the status of the left, middle and right mouse buttons (1 = pressed). PS/2 mice also understand several commands for reset and self-test, switching between different operating modes, and changing the resolution of the reported motion vectors. A Microsoft IntelliMouse relies on an extension of the PS/2 protocol: the ImPS/2 or IMPS/2 protocol (the abbreviation combines the concepts of "IntelliMouse" and "PS/2"). It initially operates in standard PS/2 format, for backward compatibility. After the host sends a special command sequence, it switches to an extended format in which a fourth byte carries information about wheel movements. The IntelliMouse Explorer works analogously, with the difference that its 4-byte packets also allow for two additional buttons (for a total of five). 
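As a rough illustration of the stream-mode packet just described, the Python sketch below decodes one 3-byte PS/2 packet. The exact bit positions used (buttons in the low bits of the first byte, sign bits XS and YS in bits 4 and 5, overflow bits XV and YV in bits 6 and 7, with the movement bytes forming 9-bit two's-complement values together with the sign bits) follow the commonly documented PS/2 mouse layout and should be read as an assumption of this sketch rather than as part of the description above.

# Hedged sketch: decode a 3-byte PS/2 mouse packet received in stream mode.
# Bit positions are the commonly documented layout (an assumption here):
#   byte 0: bit0=LB, bit1=RB, bit2=MB, bit3=1 (always set), bit4=XS, bit5=YS,
#           bit6=XV, bit7=YV
#   byte 1: X movement (low 8 bits; XS supplies the ninth, sign, bit)
#   byte 2: Y movement (low 8 bits; YS supplies the ninth, sign, bit)

def decode_ps2_packet(packet):
    """Return (dx, dy, buttons, overflow) for a 3-byte PS/2 mouse packet."""
    b0, bx, by = packet
    buttons = {
        "left":   bool(b0 & 0x01),   # LB
        "right":  bool(b0 & 0x02),   # RB
        "middle": bool(b0 & 0x04),   # MB
    }
    overflow = {"x": bool(b0 & 0x40), "y": bool(b0 & 0x80)}  # XV, YV
    # XS (bit 4) and YS (bit 5) extend the movement bytes to signed 9-bit values.
    dx = bx - 256 if b0 & 0x10 else bx
    dy = by - 256 if b0 & 0x20 else by
    return dx, dy, buttons, overflow

# Example: left button held, 5 counts right and 3 counts up
# (PS/2 reports upward motion as positive Y).
print(decode_ps2_packet([0b00001001, 5, 3]))

In the IntelliMouse extensions mentioned above, a fourth byte carrying wheel (and, for the Explorer, extra-button) data would be appended to such a packet once the mouse has been switched into the extended mode.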
Mouse vendors also use other extended formats, often without providing public documentation. The Typhoon mouse uses 6-byte packets which can appear as a sequence of two standard 3-byte packets, such that an ordinary PS/2 driver can handle them. For 3-D (or 6-degree-of-freedom) input, vendors have made many extensions both to the hardware and to software. In the late 1990s, Logitech created ultrasound-based tracking, which gave 3D input to within a few millimeters of accuracy; it worked well as an input device but failed as a profitable product. In 2008, Motion4U introduced its "OptiBurst" system using IR tracking for use as a Maya (graphics software) plugin. USB The industry-standard USB (Universal Serial Bus) protocol and its connector have become widely used for mice; it is among the most popular types. Cordless or wireless Cordless or wireless mice transmit data via radio. Some mice connect to the computer through Bluetooth or Wi-Fi, while others use a receiver that plugs into the computer, for example through a USB port. Many mice that use a USB receiver have a storage compartment for it inside the mouse. Some "nano receivers" are designed to be small enough to remain plugged into a laptop during transport, while still being large enough to easily remove. Operating system support MS-DOS and Windows 1.0 support connecting a mouse such as a Microsoft Mouse via multiple interfaces: BallPoint, Bus (InPort), Serial port or PS/2. Windows 98 added built-in support for USB Human Interface Device class (USB HID), with native vertical scrolling support. Windows 2000 and Windows Me expanded this built-in support to 5-button mice. Windows XP Service Pack 2 introduced a Bluetooth stack, allowing Bluetooth mice to be used without any USB receivers. Windows Vista added native support for horizontal scrolling and standardized wheel movement granularity for finer scrolling. Windows 8 introduced BLE (Bluetooth Low Energy) mouse/HID support. Multiple-mouse systems Some systems allow two or more mice to be used at once as input devices. Late-1980s era home computers such as the Amiga used this to allow computer games with two players interacting on the same computer (Lemmings and The Settlers for example). The same idea is sometimes used in collaborative software, e.g. to simulate a whiteboard that multiple users can draw on without passing a single mouse around. Microsoft Windows, since Windows 98, has supported multiple simultaneous pointing devices. Because Windows only provides a single screen cursor, using more than one device at the same time requires cooperation of users or applications designed for multiple input devices. Multiple mice are often used in multi-user gaming in addition to specially designed devices that provide several input interfaces. Windows also has full support for multiple input/mouse configurations for multi-user environments. Starting with Windows XP, Microsoft introduced an SDK for developing applications that allow multiple input devices to be used at the same time with independent cursors and independent input points. However, it no longer appears to be available. Windows Vista and Microsoft Surface (now known as Microsoft PixelSense) introduced a new set of input APIs that were adopted into Windows 7, allowing for 50 points/cursors, all controlled by independent users. The new input points provide traditional mouse input; however, they were designed with other input technologies like touch and image in mind. 
They inherently offer 3D coordinates along with pressure, size, tilt, angle, mask, and even an image bitmap to see and recognize the input point/object on the screen. As of 2009, Linux distributions and other operating systems that use X.Org, such as OpenSolaris and FreeBSD, support 255 cursors/input points through Multi-Pointer X. However, currently no window managers support Multi-Pointer X, leaving it relegated to custom software usage. There have also been propositions of having a single operator use two mice simultaneously as a more sophisticated means of controlling various graphics and multimedia applications. Buttons Mouse buttons are microswitches which can be pressed to select or interact with an element of a graphical user interface, producing a distinctive clicking sound. Since around the late 1990s, the three-button scrollmouse has become the de facto standard. Users most commonly employ the second button to invoke a contextual menu in the computer's software user interface, which contains options specifically tailored to the interface element over which the mouse cursor currently sits. By default, the primary mouse button is located on the left-hand side of the mouse, for the benefit of right-handed users; left-handed users can usually reverse this configuration via software. Scrolling Nearly all mice now have an integrated input primarily intended for scrolling on top, usually a single-axis digital wheel or rocker switch which can also be depressed to act as a third button. Though less common, many mice instead have two-axis inputs such as a tiltable wheel, trackball, or touchpad. Those with a trackball may be designed to stay stationary, using the trackball instead of moving the mouse. Speed Mickeys per second is a unit of measurement for the speed and movement direction of a computer mouse, where direction is often expressed as "horizontal" versus "vertical" mickey count. However, speed can also refer to the ratio between how many pixels the cursor moves on the screen and how far the mouse moves on the mouse pad, which may be expressed as pixels per mickey, pixels per inch, or pixels per centimeter. The computer industry often measures mouse sensitivity in terms of counts per inch (CPI), commonly expressed as dots per inch (DPI), the number of steps the mouse will report when it moves one inch. In early mice, this specification was called pulses per inch (ppi). The mickey originally referred to one of these counts, or one resolvable step of motion. If the default mouse-tracking condition involves moving the cursor by one screen-pixel or dot on-screen per reported step, then the CPI does equate to DPI: dots of cursor motion per inch of mouse motion. The CPI or DPI as reported by manufacturers depends on how they make the mouse; the higher the CPI, the faster the cursor moves with mouse movement. However, software can adjust the mouse sensitivity, making the cursor move faster or slower than its CPI. Software can also change the speed of the cursor dynamically, taking into account the mouse's absolute speed and the movement from the last stop-point. In most software, an example being the Windows platforms, this setting is named "speed", referring to "cursor precision". However, some operating systems name this setting "acceleration", the typical Apple OS designation. This term is incorrect. Mouse acceleration in most mouse software refers to the change in speed of the cursor over time while the mouse movement is constant. 
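The relationship between CPI, mickeys and on-screen pixels described above amounts to simple arithmetic, sketched below in Python. The function names and the sensitivity value are illustrative assumptions for this example, not part of any driver API.

# Illustrative arithmetic for CPI, mickeys and pixels (a sketch, not a driver).

def counts_reported(distance_inches, cpi):
    """Counts ("mickeys") a mouse reports for a given physical movement."""
    return round(distance_inches * cpi)

def pixels_moved(counts, pixels_per_mickey=1.0):
    """Cursor travel in pixels; at 1 pixel per mickey, CPI equates to DPI."""
    return counts * pixels_per_mickey

# A 1600 CPI mouse moved half an inch reports 800 counts; at the default
# 1 pixel per mickey the cursor travels 800 pixels, while a software
# sensitivity of 0.5 pixels per mickey halves that to 400 pixels.
counts = counts_reported(0.5, 1600)
print(counts, pixels_moved(counts), pixels_moved(counts, 0.5))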
For simple software, when the mouse starts to move, the software will count the number of "counts" or "mickeys" received from the mouse and will move the cursor across the screen by that number of pixels (or multiplied by a rate factor, typically less than 1). The cursor will move slowly on the screen, with good precision. When the movement of the mouse passes the value set for some threshold, the software will start to move the cursor faster, with a greater rate factor. Usually, the user can set the value of the second rate factor by changing the "acceleration" setting. Operating systems sometimes apply acceleration, referred to as "ballistics", to the motion reported by the mouse. For example, versions of Windows prior to Windows XP doubled reported values above a configurable threshold, and then optionally doubled them again above a second configurable threshold. These doublings applied separately in the X and Y directions, resulting in very nonlinear response. Mousepads Engelbart's original mouse did not require a mousepad; the mouse had two large wheels which could roll on virtually any surface. However, most subsequent mechanical mice starting with the steel roller ball mouse have required a mousepad for optimal performance. The mousepad, the most common mouse accessory, appears most commonly in conjunction with mechanical mice, because to roll smoothly the ball requires more friction than common desk surfaces usually provide. So-called "hard mousepads" for gamers or optical/laser mice also exist. Most optical and laser mice do not require a pad, the notable exception being early optical mice which relied on a grid on the pad to detect movement (e.g. Mouse Systems). Whether to use a hard or soft mousepad with an optical mouse is largely a matter of personal preference. One exception occurs when the desk surface creates problems for the optical or
flee the city. To control the population harsh measures were proposed: bringing London under almost military control, and physically cordoning off the city with 120,000 troops to force people back to work. A different government department proposed setting up camps for refugees for a few days before sending them back to London. A special government department, the Civil Defence Service, was established by the Home Office in 1935. Its remit included the pre-existing ARP as well as wardens, firemen (initially the Auxiliary Fire Service (AFS) and latterly the National Fire Service (NFS)), fire watchers, rescue, first aid post, stretcher party and industry. Over 1.9 million people served within the CD; nearly 2,400 lost their lives to enemy action. The organization of civil defense was the responsibility of the local authority. Volunteers were ascribed to different units depending on experience or training. Each local civil defense service was divided into several sections. Wardens were responsible for local reconnaissance and reporting, and leadership, organization, guidance and control of the general public. Wardens would also advise survivors of the locations of rest and food centers, and other welfare facilities. Rescue Parties were required to assess and then access bombed-out buildings and retrieve injured or dead people. In addition they would turn off gas, electricity and water supplies, and repair or pull down unsteady buildings. Medical services, including First Aid Parties, provided on the spot medical assistance. The expected stream of information that would be generated during an attack was handled by 'Report and Control' teams. A local headquarters would have an ARP controller who would direct rescue, first aid and decontamination teams to the scenes of reported bombing. If local services were deemed insufficient to deal with the incident then the controller could request assistance from surrounding boroughs. Fire Guards were responsible for a designated area/building and required to monitor the fall of incendiary bombs and pass on news of any fires that had broken out to the NFS. They could deal with an individual magnesium electron incendiary bomb by dousing it with buckets of sand or water or by smothering. Additionally, 'Gas Decontamination Teams' kitted out with gas-tight and waterproof protective clothing were to deal with any gas attacks. They were trained to decontaminate buildings, roads, rail and other material that had been contaminated by liquid or jelly gases. Little progress was made over the issue of air-raid shelters, because of the apparently irreconcilable conflict between the need to send the public underground for shelter and the need to keep them above ground for protection against gas attacks. In February 1936 the Home Secretary appointed a technical Committee on Structural Precautions against Air Attack. During the Munich crisis, local authorities dug trenches to provide shelter. After the crisis, the British Government decided to make these a permanent feature, with a standard design of precast concrete trench lining. They also decided to issue the Anderson shelter free to poorer households and to provide steel props to create shelters in suitable basements. During the Second World War, the ARP was responsible for the issuing of gas masks, pre-fabricated air-raid shelters (such as Anderson shelters, as well as Morrison shelters), the upkeep of local public shelters, and the maintenance of the blackout. 
The ARP also helped rescue people after air raids and other attacks, and some women became ARP Ambulance Attendants whose job was to help administer first aid to casualties, search for survivors, and in many grim instances, help recover bodies, sometimes those of their own colleagues. As the war progressed, the military effectiveness of Germany's aerial bombardment proved very limited. Thanks to the Luftwaffe's shifting aims, the strength of British air defenses, the use of early warning radar and the life-saving actions of local civil defense units, the aerial "Blitz" during the Battle of Britain failed to break the morale of the British people, destroy the Royal Air Force or significantly hinder British industrial production. Despite a significant investment in civil and military defense, British civilian losses during the Blitz were higher than in most strategic bombing campaigns throughout the war. For example, there were 14,000-20,000 UK civilian fatalities during the Battle of Britain, a relatively high number considering that the Luftwaffe dropped only an estimated 30,000 tons of ordnance during the battle. Granted, the resulting rate of 0.47-0.67 civilian fatalities per ton of bombs dropped was lower than the earlier prediction of 121 casualties per ton. By comparison, Allied strategic bombing of Germany during the war proved slightly less lethal than what was observed in the UK, with an estimated 400,000-600,000 German civilian fatalities for approximately 1.35 million tons of bombs dropped on Germany, a resulting rate of approximately 0.30-0.44 civilian fatalities per ton of bombs dropped. United States In the United States, the Office of Civil Defense was established in May 1941 to coordinate civilian defense efforts. It coordinated with the Department of the Army and established groups similar to the British ARP. One of these groups that still exists today is the Civil Air Patrol, which was originally created as a civilian auxiliary to the Army. The CAP was created on December 1, 1941, with the main civil defense mission of search and rescue. The CAP also sank two Axis submarines and provided aerial reconnaissance for Allied and neutral merchant ships. In 1946, the Civil Air Patrol was barred from combat by Public Law 79-476. The CAP then received its current mission: search and rescue for downed aircraft. When the Air Force was created, in 1947, the Civil Air Patrol became the auxiliary of the Air Force. The Coast Guard Auxiliary performs a similar role in support of the U.S. Coast Guard. Like the Civil Air Patrol, the Coast Guard Auxiliary was established in the run-up to World War II. Auxiliarists were sometimes armed during the war, and extensively participated in port security operations. After the war, the Auxiliary shifted its focus to promoting boating safety and assisting the Coast Guard in performing search and rescue and marine safety and environmental protection. In the United States a federal civil defense program existed under Public Law 920 of the 81st Congress, as amended, from 1951 to 1994. That statutory scheme was made so-called all-hazards by Public Law 103-160 in 1993 and largely repealed by Public Law 103-337 in 1994. Parts now appear in Title VI of the Robert T. Stafford Disaster Relief and Emergency Assistance Act, Public Law 100-107 [1988 as amended]. The term "emergency preparedness" was largely codified by that repeal and amendment. See 42 USC Sections 5101 and following.
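The fatality-per-ton rates quoted above follow directly from the cited totals; a short Python check of the arithmetic, using only the figures given in the text:

# UK: 14,000-20,000 fatalities for ~30,000 tons; Germany: 400,000-600,000 for ~1.35 million tons.
uk_low, uk_high, uk_tons = 14_000, 20_000, 30_000
de_low, de_high, de_tons = 400_000, 600_000, 1_350_000
print(round(uk_low / uk_tons, 2), round(uk_high / uk_tons, 2))  # 0.47 0.67 fatalities per ton
print(round(de_low / de_tons, 2), round(de_high / de_tons, 2))  # 0.3 0.44 fatalities per ton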
In most of the states of the North Atlantic Treaty Organization, such as the United States, the United Kingdom and West Germany, as well as the Soviet Bloc, and especially in the neutral countries, such as Switzerland and in Sweden during the 1950s and 1960s, many civil defense practices took place to prepare for the aftermath of a nuclear war, which seemed quite likely at that time. In the United Kingdom, the Civil Defence Service was disbanded in 1945, followed by the ARP in 1946. With the onset of the growing tensions between East and West, the service was revived in 1949 as the Civil Defence Corps. As a civilian volunteer organization, it was tasked to take control in the aftermath of a major national emergency, principally envisaged as being a Cold War nuclear attack. Although under the authority of the Home Office, with a centralized administrative establishment, the corps was administered locally by Corps Authorities. In general every county was a Corps Authority, as were most county boroughs in England and Wales and large burghs in Scotland. Each division was divided into several sections, including the Headquarters, Intelligence and Operations, Scientific and Reconnaissance, Warden & Rescue, Ambulance and First Aid and Welfare. In 1954 Coventry City Council caused international controversy when it announced plans to disband its Civil Defence committee because the councillors had decided that hydrogen bombs meant that there could be no recovery from a nuclear attack. The British government opposed such a move and held a provocative Civil Defence exercise on the streets of Coventry which Labour council members protested against. The government also decided to implement its own committee at the city's cost until the council reinstituted its committee. In the United States, the sheer power of nuclear weapons and the perceived likelihood of such an attack precipitated a greater response than had yet been required of civil defense. Civil defense, previously considered an important and commonsense step, became divisive and controversial in the charged atmosphere of the Cold War. In 1950, the National Security Resources Board created a 162-page document outlining a model civil defense structure for the U.S. Called the "Blue Book" by civil defense professionals in reference to its solid blue cover, it was the template for legislation and organization for the next 40 years. Perhaps the most memorable aspect of the Cold War civil defense effort was the educational effort made or promoted by the government. In Duck and Cover, Bert the Turtle advocated that children "duck and cover" when they "see the flash." Booklets such as Survival Under Atomic Attack, Fallout Protection and Nuclear War Survival Skills were also commonplace. The transcribed radio program Stars for Defense combined hit music with civil defense advice. Government institutes created public service announcements including children's songs and distributed them to radio stations to educate the public in case of nuclear attack. The US President Kennedy (1961–63) launched an ambitious effort to install fallout shelters throughout the United States. These shelters would not protect against the blast and heat effects of nuclear weapons, but would provide some protection against the radiation effects that would last for weeks and even affect areas distant from a nuclear explosion. In order for most of these preparations to be effective, there had to be some degree of warning. 
In 1951, CONELRAD (Control of Electromagnetic Radiation) was established. Under the system, a few primary stations would be alerted of an emergency and would broadcast an alert. All broadcast stations throughout the country would be constantly listening to an upstream station and repeat the message, thus passing it from station to station. In a once classified US war game analysis, looking at varying levels of war escalation, warning and pre-emptive attacks in the late 1950s and early 1960s, it was estimated that approximately 27 million US citizens would have been saved with civil defense education. At the time, however, the cost of a full-scale civil defense program was regarded as less effective in cost-benefit analysis than a ballistic missile defense (Nike Zeus) system, and as the Soviet adversary was increasing its nuclear stockpile, the efficacy of both would follow a diminishing returns trend. Contrary to the largely noncommittal approach taken in NATO, with its stops and starts in civil defense depending on the whims of each newly elected government, the military strategy in the comparatively more ideologically consistent USSR held that, amongst other things, a winnable nuclear war was possible. To this effect the Soviets planned to minimize, as far as possible, the effects of nuclear weapon strikes on their territory, and therefore spent considerably more thought on civil defense preparations than the U.S. did, with defense plans that have been assessed to be far more effective than those in the U.S. Soviet Civil Defense Troops played the main role in the massive disaster relief operation following the 1986 Chernobyl nuclear accident. Defense Troop reservists were officially mobilized (as in a case of war) from throughout the USSR to join the Chernobyl task force and formed on the basis of the Kyiv Civil Defense Brigade. The task force performed some high-risk tasks including, with the failure of their robotic machinery, the manual removal of highly-radioactive debris. Many of their personnel were later decorated with medals for their work in containing the release of radiation into the environment, with a number of the 56 deaths from the accident being civil defense troops. Decline In Western countries, strong civil defense policies were never properly implemented, because they were fundamentally at odds with the doctrine of "mutual assured destruction" (MAD) by making provisions for survivors. It was also considered that a full-fledged total defense would not have been worth the very large expense. For whatever reason, the public saw efforts at civil defense as fundamentally ineffective against the powerful destructive forces of nuclear
weapons, and therefore a waste of time and money, although detailed scientific research programs did underlie the much-mocked government civil defense pamphlets of the 1950s and 1960s. The Civil Defence Corps was stood down in Great Britain in 1968 due to the financial crisis of the mid 1960s. Its neighbors, however, remained committed to Civil Defence, namely the Isle of Man Civil Defence Corps and Civil Defence Ireland (Republic of Ireland). In the United States, the various civil defense agencies were replaced with the Federal Emergency Management Agency (FEMA) in 1979. In 2002 this became part of the Department of Homeland Security. The focus was shifted from nuclear war to an "all-hazards" approach of Comprehensive Emergency Management. Natural disasters and the emergence of new threats such as terrorism have caused attention to be focused away from traditional civil defense and into new forms of civil protection such as emergency management and homeland security. Today Many countries still maintain a national Civil Defence Corps, usually having a wide brief for assisting in large scale civil emergencies such as flood, earthquake, invasion, or civil disorder. After the September 11 attacks in 2001, in the United States the concept of civil defense has been revisited under the umbrella term of homeland security and all-hazards emergency management. In Europe, the triangle CD logo continues to be widely used. The old U.S.
civil defense logo was used in the FEMA logo until 2006 and is hinted at in the United States Civil Air Patrol logo. Created in 1939 by Charles Coiner of the N. W. Ayer Advertising Agency, it was used throughout World War II and the Cold War era. In 2006, the National Emergency Management Association—a U.S. organization made up of state emergency managers—"officially" retired the Civil Defense triangle logo, replacing it with a stylised EM (standing for Emergency management). The name and logo, however, continue to be used by Hawaii State Civil Defense and Guam Homeland Security/Office of Civil Defense. The term "civil protection" is currently widely used within the European Union to refer to government-approved systems and resources tasked with protecting the non-combat population, primarily in the event of natural and technological disasters. For example, the EU's humanitarian aid policy director on the Ebola Crisis, Florika Fink-Hooijer, said that civil protection requires "not just more resources, but first and foremost better governance of the resources that are available including better synergies between humanitarian aid and civil protection". In recent years there has been emphasis on preparedness for technological disasters resulting from terrorist attack. Within EU countries the term "crisis-management" emphasizes the political and security dimension rather than measures to satisfy the immediate needs of the population. In Australia, civil defense is the responsibility of the volunteer-based State Emergency Service. In most former Soviet countries civil defense is the responsibility of governmental ministries, such as Russia's Ministry of
The hydrophobic and shape complementarity between the peptide substrate P1 side chain and the enzyme S1 binding cavity accounts for the substrate specificity of this enzyme. Chymotrypsin also hydrolyzes other amide bonds in peptides at slower rates, particularly those containing leucine and methionine at the P1 position. Structurally, it is the archetype for its superfamily, the PA clan of proteases. Activation Chymotrypsin is synthesized in the pancreas. Its precursor is chymotrypsinogen. Trypsin activates chymotrypsinogen by cleaving the peptide bond at position Arg15 – Ile16, producing π-chymotrypsin. In turn, the amino group (-NH3+) of the Ile16 residue interacts with the side chain of Asp194, producing the "oxyanion hole" and the hydrophobic "S1 pocket". Moreover, chymotrypsin induces its own activation by cleaving at positions 14–15, 146–147, and 148–149, producing α-chymotrypsin (which is more active and stable than π-chymotrypsin). The resulting molecule is a three-polypeptide molecule interconnected via disulfide bonds. Mechanism of action and kinetics In vivo, chymotrypsin is a proteolytic enzyme (serine protease) acting in the digestive systems of many organisms. It facilitates the cleavage of peptide bonds by a hydrolysis reaction which, despite being thermodynamically favorable, occurs extremely slowly in the absence of a catalyst. The main substrates of chymotrypsin are peptide bonds in which the amino acid N-terminal to the bond is a tryptophan, tyrosine, phenylalanine, or leucine. Like many proteases, chymotrypsin also hydrolyses amide bonds in vitro, a virtue that enabled the use of substrate analogs such as N-acetyl-L-phenylalanine p-nitrophenyl amide for enzyme assays. Chymotrypsin cleaves peptide bonds by attacking the unreactive carbonyl group with a powerful nucleophile, the serine 195 residue located in the active site of the enzyme, which briefly becomes covalently bonded to the substrate, forming an enzyme-substrate intermediate. Along with histidine 57 and aspartic acid 102, this serine residue constitutes the catalytic triad of the active site. These findings rely on inhibition assays and the study of the kinetics of cleavage of the aforementioned substrate, exploiting the fact that the released product p-nitrophenolate has a yellow colour, enabling measurement of its concentration by measuring light absorbance at 410 nm. Chymotrypsin catalysis of the hydrolysis of a protein substrate is performed in two steps. First, the nucleophilicity of Ser-195 is enhanced by general-base catalysis in which the proton of the serine hydroxyl group is transferred to the imidazole moiety of His-57 during its attack on the electron-deficient carbonyl carbon of the protein-substrate main chain (k1 step). This occurs via the concerted action of the three amino-acid residues in the catalytic triad. The buildup of negative charge on the resultant tetrahedral intermediate is stabilized in the oxyanion hole of the enzyme's active site, by formation of two hydrogen bonds to adjacent main-chain amide hydrogens. The His-57 imidazolium moiety formed in the k1 step is a general acid catalyst for the k-1 reaction. However, evidence for similar general-acid catalysis of the k2 reaction (Tet2) has been controverted; apparently water provides a proton to the amine leaving group. Breakdown of Tet1 (via k3) generates an acyl enzyme, which is hydrolyzed with His-57 acting as a general base (kH2O) in formation of a tetrahedral intermediate that breaks down to regenerate the serine hydroxyl moiety, as well as the protein
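As a worked illustration of the assay described above, the concentration of the yellow p-nitrophenolate released on substrate cleavage can be estimated from its absorbance at 410 nm using the Beer-Lambert law, A = ε·l·c. The Python sketch below assumes an illustrative molar extinction coefficient and a 1 cm path length; these values are assumptions for the example, not figures from the text.

def product_concentration(absorbance_410, epsilon=18000, path_cm=1.0):
    # Beer-Lambert law rearranged for concentration: c = A / (epsilon * l),
    # giving mol/L when epsilon is in M^-1 cm^-1 and the path length in cm.
    return absorbance_410 / (epsilon * path_cm)

print(product_concentration(0.36))  # about 2e-5 M under these assumed values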
members. In some communities, the liaison is a volunteer and CERT member. As people are trained and agree to join the community emergency response effort, a CERT is formed. Initial efforts may result in a team with only a few members from across the community. As the number of members grow, a single community-wide team may subdivide. Multiple CERTs are organized into a hierarchy of teams consistent with ICS principles. This follows the Incident Command System (ICS) principle of Span of control until the ideal distribution is achieved: one or more teams are formed at each neighborhood within a community. A Teen Community Emergency Response Team (TEEN CERT), or Student Emergency Response Team (SERT), can be formed from any group of teens. A Teen CERT can be formed as a school club, service organization, Venturing Crew, Explorer Post, or the training can be added to a school's graduation curriculum. Some CERTs form a club or service corporation, and recruit volunteers to perform training on behalf of the sponsoring agency. This reduces the financial and human resource burden on the sponsoring agency. When not responding to disasters or large emergencies, CERTs may raise funds for emergency response equipment in their community; provide first-aid, crowd control or other services at community events; hold planning, training, or recruitment meetings; and conduct or participate in disaster response exercises. Some sponsoring agencies use state and federal grants to purchase response tools and equipment for their members and team(s) (subject to Stafford Act limitations). Most CERTs also acquire their own supplies, tools, and equipment. As community members, CERTs are aware of the specific needs of their community and equip the teams accordingly. Response The basic idea is to use CERT to perform the large number of tasks needed in emergencies. This frees highly trained professional responders for more technical tasks. Much of CERT training concerns the Incident Command System and organization, so CERT members fit easily into larger command structures. A team may self-activate (self-deploy) when their own neighborhood is affected by disaster. An effort is made to report their response status to the sponsoring agency. A self-activated team will size-up the loss in their neighborhood and begin performing the skills they have learned to minimize further loss of life, property, and environment. They will continue to respond safely until redirected or relieved by the sponsoring agency or professional responders on-scene. Teams in neighborhoods not affected by disaster may be deployed or activated by the sponsoring agency. The sponsoring agency may communicate with neighborhood CERT leaders through an organic communication team. In some areas the communications may be by amateur radio, FRS, GMRS or MURS radio, dedicated telephone or fire-alarm networks. In other areas, relays of bicycle-equipped runners can effectively carry messages between the teams and the local emergency operations center. The sponsoring agency may activate and dispatch teams in order to gather or respond to intelligence about an incident. Teams may be dispatched to affected neighborhoods, or organized to support operations. CERT members may augment support staff at an Incident Command Post or Emergency Operations Center. 
Additional teams may also be created to guard a morgue, locate supplies and food, convey messages to and from other CERTs and local authorities, and perform other duties on an as-needed basis as identified by the team leader. In the short term, CERTs perform data gathering, especially to locate mass casualties requiring professional response, or situations requiring professional rescues, simple fire-fighting tasks (for example, small fires, turning off gas), light search and rescue, damage evaluation of structures, triage and first aid. In the longer term, CERTs may assist in the evacuation of residents, or assist with setting up a neighborhood shelter. While responding, CERT members are temporary volunteer government workers. In some areas (such as California, Hawaii and Kansas), registered, activated CERT members are eligible for workers' compensation for on-the-job injuries during declared disasters. Member roles The Federal Emergency Management Agency (FEMA) recommends that the standard, minimum ten-person team be composed as follows: CERT Leader/Incident Commander. Generally, the first CERT team member arriving on the scene is the designated Incident Commander (IC) until the arrival of someone more qualified. This person makes the initial assessment of the scene and determines the appropriate course of action for team members; assumes the role of Safety Officer until it is assigned to another team member; assigns team member roles if not already assigned; designates triage area, treatment area, morgue, and vehicle traffic routes; coordinates and directs team operations; determines logistical needs (water, food, medical supplies, transportation, equipment, and so on) and determines ways to meet those needs through team members or citizen volunteers on the scene; collects and writes reports on the operation and victims; and communicates and coordinates with the incident commander, local authorities, and other CERT team leaders. The Incident Commander is identified by two pieces of crossed tape on the hard hat. Safety Officer/Dispatch. Checks team members prior to deployment to ensure they are safe and equipped for the operation; determines safe or unsafe working environments; ensures team accountability; supervises operations (when possible) where team members and victims are at direct physical risk, and alerts team members when unsafe conditions arise. Advises team members of any updates on the situation. Keeps tabs on the situation as it unfolds. Fire Suppression Team (2 people). Work under the supervision of a Team Leader to suppress small fires in designated work areas or as needed; when not accomplishing their primary mission, assist the search and rescue team or triage team; assist in evacuation and transport as needed; assist in the triage or treatment area as needed; other duties as assigned; communicate with Team Leader. Search and Rescue Team/Extraction (2). Work under the supervision of a Team Leader, searching for and providing rescue of victims as is prudent under the conditions, also bringing injured people to triage or the hospital for medical treatment; when not accomplishing their primary mission, assist the Fire Suppression Team, assist in the triage or treatment area as needed; other duties as assigned;
communicate with Team Leader. Medical Triage Team/Field Medic (2). Work under the supervision of a Team Leader, providing START triage for victims found at the scene; marking victims with category of injury per the standard operating procedures; when not accomplishing their primary mission, assist the Fire Suppression Team if needed, assist the Search and Rescue Team if needed, assist in the Medical Triage Area if needed, assist in the Treatment Area if needed, other duties as assigned; communicate with Incident Commander. Medical Treatment Team (2). Work under the supervision of the Team Leader, providing medical treatment to victims within the scope of their training. This task is normally accomplished in the Treatment Area, however, it may take place in the affected area as well. When not accomplishing their primary mission, assist the Fire Suppression Team as needed, assist the Medical Triage Team as needed; other duties as assigned; communicate with the Team Leader. Team Leader. Supervises designated tasks they are assigned to. Gives reports to Dispatch and Incident Commander. Because every CERT member in a community receives the same core instruction, any team member has the training necessary to assume any of these roles. This is important during a disaster response because not all members of a regular team may be available to respond. Hasty teams may be formed by whichever members are responding at the time. Additionally, members may need to adjust team roles due to stress, fatigue, injury, or other circumstances. Training While state and local jurisdictions will implement training in the manner that best suits the community, FEMA's National CERT Program has an established curriculum. Jurisdictions may augment the training, but are strongly encouraged to deliver the entire core content. The CERT core curriculum for the basic course is composed of the following nine units (time is instructional hours): Unit 1: Disaster Preparedness (2.5 hrs). Topics include (in part) identifying local disaster threats, disaster impact, mitigation and preparedness concepts, and an overview of Citizen Corps and CERT. Hands on skills include team-building exercises, and shutting off utilities. Unit 2: Fire Safety (2.5 hrs).
Students learn about fire chemistry, mitigation practices, hazardous materials identification, suppression options, and are introduced to the concept of size-up. Hands-on skills include using a fire extinguisher to suppress a live flame, and wearing basic protective gear. Firefighting standpipes as well as unconventional firefighting methods are also covered. Unit 3: Disaster Medical Operations part 1 (2.5 hrs). Students learn to identify and treat certain life-threatening conditions in a disaster setting, as well as START triage. Hands-on skills include performing head-tilt/chin-lift, practicing bleeding control techniques, and performing triage as an exercise. Unit 4: Disaster Medical Operations part 2 (2.5 hrs). Topics cover mass casualty operations, public health, assessing patients, and treating injuries. Students practice patient assessment, and various treatment techniques. Unit 5: Light Search and Rescue Operations (2.5 hrs). Size-up is expanded as students learn about assessing structural damage, marking structures that have been searched, search techniques,
comes from the Latin 'catapulta', which in turn comes from the Greek (katapeltēs), itself from κατά (kata), "downwards" and πάλλω (pallō), "to toss, to hurl". Catapults were invented by the ancient Greeks and in ancient India where they were used by the Magadhan Emperor Ajatshatru around the early to mid 5th century BC. Greek and Roman catapults The catapult and crossbow in Greece are closely intertwined. Primitive catapults were essentially "the product of relatively straightforward attempts to increase the range and penetrating power of missiles by strengthening the bow which propelled them". The historian Diodorus Siculus (fl. 1st century BC), described the invention of a mechanical arrow-firing catapult (katapeltikon) by a Greek task force in 399 BC. The weapon was soon after employed against Motya (397 BC), a key Carthaginian stronghold in Sicily. Diodorus is assumed to have drawn his description from the highly rated history of Philistus, a contemporary of the events then. The introduction of crossbows however, can be dated further back: according to the inventor Hero of Alexandria (fl. 1st century AD), who referred to the now lost works of the 3rd-century BC engineer Ctesibius, this weapon was inspired by an earlier foot-held crossbow, called the gastraphetes, which could store more energy than the Greek bows. A detailed description of the gastraphetes, or the "belly-bow", along with a watercolor drawing, is found in Heron's technical treatise Belopoeica. A third Greek author, Biton (fl. 2nd century BC), whose reliability has been positively reevaluated by recent scholarship, described two advanced forms of the gastraphetes, which he credits to Zopyros, an engineer from southern Italy. Zopyrus has been plausibly equated with a Pythagorean of that name who seems to have flourished in the late 5th century BC. He probably designed his bow-machines on the occasion of the sieges of Cumae and Milet between 421 BC and 401 BC. The bows of these machines already featured a winched pull back system and could apparently throw two missiles at once. Philo of Byzantium provides probably the most detailed account on the establishment of a theory of belopoietics (belos = "projectile"; poietike = "(art) of making") circa 200 BC. The central principle to this theory was that "all parts of a catapult, including the weight or length of the projectile, were proportional to the size of the torsion springs". This kind of innovation is indicative of the increasing rate at which geometry and physics were being assimilated into military enterprises. From the mid-4th century BC onwards, evidence of the Greek use of arrow-shooting machines becomes more dense and varied: arrow firing machines (katapaltai) are briefly mentioned by Aeneas Tacticus in his treatise on siegecraft written around 350 BC. An extant inscription from the Athenian arsenal, dated between 338 and 326 BC, lists a number of stored catapults with shooting bolts of varying size and springs of sinews. The later entry is particularly noteworthy as it constitutes the first clear evidence for the switch to torsion catapults, which are more powerful than the more-flexible crossbows and which came to dominate Greek and Roman artillery design thereafter. This move to torsion springs was likely spurred by the engineers of Philip II of Macedonia. Another Athenian inventory from 330 to 329 BC includes catapult bolts with heads and flights. As the use of catapults became more commonplace, so did the training required to operate them. 
Many Greek children were instructed in catapult usage, as evidenced by "a 3rd Century B.C. inscription from the island of Ceos in the Cyclades [regulating] catapult shooting competitions for the young". Arrow firing machines in action are reported from Philip II's siege of Perinth (Thrace) in 340 BC. At the same time, Greek fortifications began to feature high towers with shuttered windows in the top, which could have been used to house anti-personnel arrow shooters, as in Aigosthena. Projectiles included both arrows and (later) stones that were sometimes lit on fire. Onomarchus of Phocis first used catapults on the battlefield against Philip II of Macedon. Philip's son, Alexander the Great, was the next commander in recorded history to make such use of catapults on the battlefield as well as to use them during sieges. The Romans started to use catapults as arms for their wars against Syracuse, Macedon, Sparta and Aetolia (3rd and 2nd centuries BC). The Roman machine known as an arcuballista was similar to a large crossbow. Later the Romans used ballista catapults on their warships. Other ancient catapults In chronological order: 19th century BC, Egypt, walls of the fortress of Buhen appear to contain platforms for siege weapons. c.750 BC, Judah, King Uzziah is documented as having overseen the construction of machines to "shoot great stones". between 484 to 468 BC, India, Ajatshatru is recorded in Jaina texts as having used catapults in his campaign against the Licchavis. between 500 to 300 BC, China, recorded use of mangonels. They were probably used by the Mohists as early as 4th century BC, descriptions of which can be found in the Mojing (compiled in the
4th century BC). In Chapter 14 of the Mojing, the mangonel is described hurling hollowed out logs filled with burning charcoal at enemy troops. The mangonel was carried westward by the Avars and appeared next in the eastern Mediterranean by the late 6th century AD, where it replaced torsion powered siege engines such as the ballista and onager due to its simpler design and faster rate of fire. The Byzantines adopted the mangonel possibly as early as 587, the Persians in the early 7th century, and the Arabs in the second half of the 7th century. The Franks and Saxons adopted the weapon in the 8th century. Medieval catapults Castles and fortified walled cities were common during this period and catapults were used as siege weapons against them. As well as their use in attempts to breach walls, incendiary missiles, or diseased carcasses or garbage could be catapulted over the walls. Defensive techniques in the Middle Ages progressed to a point that rendered catapults largely ineffective. The Viking siege of Paris (885–6 A.D.) "saw the employment by both sides of virtually every instrument of siege craft known to the classical world, including a variety of catapults", to little effect, resulting in failure. The most widely used catapults throughout the Middle Ages were as follows: Ballista Ballistae were similar to giant crossbows and were designed to work through torsion. The projectiles were large arrows or darts made from wood with an iron tip. These arrows were then shot "along a flat trajectory" at a target. Ballistae were accurate, but lacked firepower compared with that of a mangonel or trebuchet. Because of their immobility, most ballistae were constructed on site following a siege assessment by the commanding military officer. Springald The springald's design resembles that of the ballista, being a crossbow powered by tension. The springald's frame was more compact, allowing for use inside tighter confines, such as the inside of a castle or tower, but compromising its power. Mangonel This machine was designed to throw heavy projectiles from a "bowl-shaped bucket at the end of its arm". Mangonels were mostly used for “firing various missiles at fortresses, castles, and cities,” with a range of up to 1300 feet.
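To put the quoted range of up to 1300 feet (roughly 400 m) in perspective, the drag-free projectile formula R = v²·sin(2θ)/g gives the minimum release speed needed for that distance. The 45-degree release angle and the neglect of air resistance are simplifying assumptions for the estimate, not claims about how real mangonels behaved.

import math

# Minimum release speed for a 400 m range at the optimal 45-degree angle,
# ignoring air resistance: R = v^2 * sin(2*theta) / g  =>  v = sqrt(R * g / sin(2*theta)).
g, range_m, theta = 9.81, 400.0, math.radians(45)
v = math.sqrt(range_m * g / math.sin(2 * theta))
print(round(v, 1), "m/s")  # about 62.6 m/s under these assumptions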
These missiles included anything from stones to excrement to rotting carcasses. Mangonels were relatively simple to construct, and eventually wheels were added to increase mobility. Onager Mangonels are also sometimes referred to as Onagers. Onager catapults initially launched projectiles from a sling, which was later changed to a "bowl-shaped bucket". The word Onager is derived from the Greek word onagros for "wild ass", referring to the "kicking motion and force" that were recreated in the Mangonel's design. Historical records regarding onagers are scarce. The most detailed account of Mangonel use is from “Eric Marsden's translation of a text written by Ammianus Marcellius in the 4th Century AD” describing its construction and combat usage. Trebuchet Trebuchets were probably the most powerful catapult employed in the Middle Ages. The most commonly used ammunition were stones, but "darts and sharp wooden poles" could be substituted if necessary. The most effective kind of ammunition though involved fire, such as "firebrands, and deadly Greek Fire". Trebuchets came in two different designs: Traction, which were powered by people, or Counterpoise, where the people were replaced with "a weight on the short end". The most famous historical account of trebuchet use dates back to the siege of Stirling Castle in 1304, when the army of Edward I constructed a giant trebuchet known as Warwolf, which then proceeded to "level a section of [castle] wall, successfully concluding the siege". Couillard A simplified trebuchet, where the trebuchet's single counterweight is split, swinging on either side of a central support post. Leonardo da Vinci's catapult Leonardo da Vinci sought to improve the efficiency and range of earlier designs. His design incorporated a large wooden leaf spring as an accumulator to power the catapult. Both ends of the bow are connected by a rope, similar to the design of a bow and arrow. The leaf spring was not used to pull the catapult armature directly, rather the rope was wound around a drum. The catapult armature was attached to this drum which would be turned until enough potential energy was stored in the deformation of the spring. The drum would then be disengaged from the winding mechanism, and the catapult arm would snap around. Though no records exist of this design being built during Leonardo's lifetime, contemporary enthusiasts have reconstructed it. Modern use Military The last large scale military use of catapults was during the trench warfare of World War I. During the early stages of the war, catapults were used to throw hand grenades across no man's land into enemy trenches. They were eventually replaced by small mortars. In the 1840s the invention of vulcanized rubber allowed the making of small hand-held catapults, either improvised from Y-shaped sticks or manufactured for sale; both were popular with children and teenagers. These devices were also known as slingshots in the USA. Special variants called aircraft catapults are used to launch planes from land
modern form, known as American cinquain inspired by Japanese haiku and tanka, is akin in spirit to that of the Imagists. In her 1915 collection titled Verse, published a year after her death, Adelaide Crapsey included 28 cinquains. Crapsey's American Cinquain form developed in two stages. The first, fundamental form is a stanza of five lines of accentual verse, in which the lines comprise, in order, 1, 2, 3, 4, and 1 stresses. Then Crapsey decided to make the criterion a stanza of five lines of accentual-syllabic verse, in which the lines comprise, in order, 1, 2, 3, 4, and 1 stresses and 2, 4, 6, 8, and 2 syllables. Iambic feet were meant to be the standard for the cinquain, which made the dual criteria match perfectly. Some
based them, Crapsey always titled her cinquains, effectively utilizing the title as a sixth line. Crapsey's cinquain depends on strict structure and intense physical imagery to communicate a mood or feeling. The form is illustrated by Crapsey's "November Night":

Listen...
With faint dry sound,
Like steps of passing ghosts,
The leaves, frost-crisp'd, break from the trees
And fall.

The Scottish poet William Soutar also wrote over one hundred American cinquains (he labelled them "epigrams") between 1933 and 1940. Cinquain variations The Crapsey cinquain has subsequently seen a number of variations by modern poets, including: Didactic cinquain The didactic cinquain is closely related to the Crapsey cinquain. It is an informal cinquain widely taught in elementary schools and has been featured in, and popularized by, children's media resources, including Junie B. Jones and PBS Kids. This form is also embraced by young adults and older poets for its expressive simplicity. The prescriptions of this type of cinquain refer to word count, not syllables and stresses. Ordinarily, the first line is a one-word title, the subject of the poem; the second line is a pair of adjectives describing that title; the third line is a three-word phrase that gives more
Krusenstern in the 1820s. In 1813 John Williams, a missionary on the Endeavour (not the same ship as Cook's) made the first recorded European sighting of Rarotonga. The first recorded landing on Rarotonga by Europeans was in 1814 by the Cumberland; trouble broke out between the sailors and the Islanders and many were killed on both sides. The islands saw no more Europeans until English missionaries arrived in 1821. Christianity quickly took hold in the culture and many islanders are Christians today. The islands were a popular stop in the 19th century for whaling ships from the United States, Britain and Australia. They visited, from at least 1826, to obtain water, food, and firewood. Their favourite islands were Rarotonga, Aitutaki, Mangaia and Penrhyn. The Cook Islands became aligned to the United Kingdom in 1890, largely because of the fear of British residents that France might occupy the islands as it already had Tahiti. On 6 September 1900, the islanders' leaders presented a petition asking that the islands (including Niue "if possible") should be annexed as British territory. On 8 and 9 October 1900, seven instruments of cession of Rarotonga and other islands were signed by their chiefs and people. A British Proclamation was issued, stating that the cessions were accepted and the islands declared parts of Her Britannic Majesty's dominions. However, it did not include Aitutaki. Even though the inhabitants regarded themselves as British subjects, the Crown's title was unclear until the island was formally annexed by that Proclamation. In 1901 the islands were included within the boundaries of the Colony of New Zealand by Order in Council under the Colonial Boundaries Act, 1895 of the United Kingdom. The boundary change became effective on 11 June 1901, and the Cook Islands have had a formal relationship with New Zealand since that time. The Cook Islands responded to the call for service when World War One began, immediately sending five contingents, close to 500 men, to the war. The island's young men volunteered at the outbreak of the war to reinforce the Maori Contingents. A Patriotic Fund was set up very quickly, raising funds to support the war effort. The Cook Islanders were trained at Narrow Neck Camp in Devonport, and the first recruits departed on 13 October 1915 on the SS Te Anau. The ship arrived in Egypt just as the New Zealand units were about to be transferred to the Western Front. In September, 1916, the Pioneer Battalion, a combination of Cook Islanders, Maori and Pakeha soldiers, saw heavy action in the Allied attack on Flers, the first battle of the Somme. Three Cook Islanders from this first contingent died from enemy action and at least ten died of disease as they struggled to adapt to the conditions in Europe. The 2nd and 3rd Cook Island Contingents were part of the Sinai-Palestine campaign, first in a logistical role for the Australian and New Zealand Mounted Rifles at their Moascar base and later in ammunition supply for the Royal Artillery. After the war, the men returned to the outbreak of the influenza epidemic in New Zealand, and this, along with European diseases meant that a large number did not survive and died in New Zealand or on their return home over the coming years. When the British Nationality and New Zealand Citizenship Act 1948 came into effect on 1 January 1949, Cook Islanders who were British subjects automatically gained New Zealand citizenship. 
The islands remained a New Zealand dependent territory until the New Zealand Government decided to grant them self-governing status. On 4 August 1965, a constitution was promulgated. The first Monday in August is celebrated each year as Constitution Day. Albert Henry of the Cook Islands Party was elected as the first Premier and was knighted by Queen Elizabeth II. Henry led the nation until 1978, when he was accused of vote-rigging and resigned. He was stripped of his knighthood in 1979. He was succeeded by Tom Davis of the Democratic Party, who held that position until March 1983. On 13 July 2017, the Cook Islands established Marae Moana, making it the world's largest protected area by size. In March 2019, it was reported that the Cook Islands had plans to change its name and remove the reference to Captain James Cook in favour of "a title that reflects its 'Polynesian nature'". It was later reported in May 2019 that the proposed name change had been poorly received by the Cook Islands diaspora. As a compromise, it was decided that the English name of the islands would not be altered, but that a new Cook Islands Māori name would be adopted to replace the current name, a transliteration from English. Discussions over the name continued in 2020. Geography The Cook Islands are in the South Pacific Ocean, north-east of New Zealand, between French Polynesia and American Samoa. There are 15 major islands spread over a vast area of ocean, divided into two distinct groups: the Southern Cook Islands and the Northern Cook Islands, which are coral atolls. The islands were formed by volcanic activity; the northern group is older and consists of six atolls, which are sunken volcanoes topped by coral growth. The climate is moderate to tropical. The Cook Islands consist of 15 islands and two reefs. From March to December, the Cook Islands are in the path of tropical cyclones, the most notable of which were Cyclones Martin and Percy. Two terrestrial ecoregions lie within the islands' territory: Central Polynesian tropical moist forests and Cook Islands tropical moist forests. The table is ordered from north to south. Population figures from the 2016 census. Politics and foreign relations The Cook Islands are a representative democracy with a parliamentary system in an associated state relationship with New Zealand. Executive power is exercised by the government, with the Chief Minister as head of government. Legislative power is vested in both the government and the Parliament of the Cook Islands. There is a multi-party system. The Judiciary is independent of the executive and the legislature. The head of state is the Queen of New Zealand, who is represented in the Cook Islands by the Queen's Representative. The islands are self-governing in "free association" with New Zealand. New Zealand retains primary responsibility for external affairs, acting in consultation with the Cook Islands government. Cook Islands nationals are citizens of New Zealand and can receive New Zealand government services, but the reverse is not true; New Zealand citizens are not Cook Islands nationals. Despite this, the Cook Islands had diplomatic relations in its own name with 52 other countries.
The Cook Islands is not a United Nations member state, but, along with Niue, has had their "full treaty-making capacity" recognised by the United Nations Secretariat, and is a full member of the WHO, UNESCO, the International Civil Aviation Organization and the UN Food and Agriculture Organization, all UN specialised agencies, and is an associate member of the Economic and Social Commission for Asia and the Pacific (UNESCAP) and a Member of the Assembly of States of the International Criminal Court. On 11 June 1980, the United States signed a treaty with the Cook Islands specifying the maritime border between the Cook Islands and American Samoa and also relinquishing any American claims to Penrhyn Island, Pukapuka, Manihiki, and Rakahanga. In 1990 the Cook Islands and France signed a treaty that delimited the boundary between the Cook Islands and French Polynesia. In late August 2012, United States Secretary of State Hillary Clinton visited the islands. In 2017, the Cook Islands signed the UN treaty on the Prohibition of Nuclear Weapons. Human rights Male homosexuality is illegal in the Cook Islands and is punishable by a maximum term of seven years imprisonment. Administrative subdivisions There are island councils on all of the inhabited outer islands (Outer Islands Local Government Act 1987 with amendments up to 2004, and Palmerston Island Local Government Act 1993) except Nassau, which is governed by Pukapuka (Suwarrow, with only one caretaker living on the island, also governed by Pukapuka, is not counted with the inhabited islands in this context). Each council is headed by a mayor. The three Vaka councils of Rarotonga established in 1997 (Rarotonga Local Government Act 1997), also headed by mayors, were abolished in February 2008, despite much controversy. On the lowest level, there are village committees. Nassau, which is governed by Pukapuka, has an island committee (Nassau Island Committee), which advises the Pukapuka Island Council on matters concerning its own island. Demographics Births and deaths Religion In the Cook Islands the Church is separate from the state, and most of the population is Christian. The religious distribution is as follows: The various Protestant groups account for 62.8% of the believers, the most followed denomination being the Cook Islands Christian Church with 49.1%. Other Protestant Christian groups include Seventh-day Adventist 7.9%, Assemblies of God 3.7% and Apostolic Church 2.1%. The main non-Protestant group is Roman Catholics with 17% of the population.
Recent history In 1962 New Zealand asked the Cook Islands legislature to vote on four options for the future: independence, self-government, integration into New Zealand, or integration into a larger Polynesian federation. The legislature decided upon self-government. Following elections in 1965, the Cook Islands transitioned to become a self-governing territory in free association with New Zealand. This arrangement left the Cook Islands politically independent, but officially remaining under New Zealand sovereignty. This political transition was approved by the United Nations. Despite this status change, the islands remained financially dependent on New Zealand, and New Zealand believed that a failure of the free association agreement would lead to integration rather than full independence. New Zealand is tasked with overseeing the country's foreign relations and defense. The Cook Islands, Niue, and New Zealand (with its territories: Tokelau and the Ross Dependency) make up the Realm of New Zealand. After achieving autonomy in 1965, the Cook Islands elected Albert Henry of the Cook Islands Party as their first Prime Minister. He led the country until 1978 when he was accused of vote-rigging. He was succeeded by Tom Davis of the Democratic Party. On 11 June 1980, the United States signed a treaty with the Cook Islands specifying the maritime border between the Cook Islands and American Samoa and also relinquishing the US claim to the islands of Penrhyn, Pukapuka, Manihiki, and Rakahanga. In 1990, the Cook Islands signed a treaty with France which delimited the maritime boundary between the Cook Islands and French Polynesia. Gallery Timeline 900 - first People arrive to the islands 1595 — Spaniard Álvaro de Mendaña de Neira is the first European to sight the islands. 1606 — Portuguese-Spaniard Pedro Fernández de Quirós makes the first recorded European landing in the islands when he sets foot on Rakahanga. 1773 — Captain James Cook explores the islands and names them the Hervey Islands. Fifty years later they are renamed in his honour by Russian Admiral Adam Johann von Krusenstern. 1821 — English and Tahitian missionaries land in Aitutaki, become the first non-Polynesian settlers. 1823 — English missionary John Williams lands in Rarotonga, converting Makea Pori Ariki to Christianity. 1858 — The Cook Islands become united as a state, the Kingdom of Rarotonga. 1862 — Peruvian slave traders take a terrible toll on the islands of Penrhyn, Rakahanga and Pukapuka in 1862 and 1863. 1888 — Cook Islands are proclaimed a British protectorate and a single federal parliament is established.
1900 — The Cook Islands are ceded to the United Kingdom as British territory, except for Aitutaki which was annexed by the United Kingdom at the same time. 1901 — The boundaries of the Colony of New Zealand are extended by the United Kingdom to include the Cook Islands. 1924 — The All Black Invincibles stop in Rarotonga on their way to the United Kingdom and play a friendly match against a scratch Rarotongan team. 1946 — Legislative Council is established. For the first time since 1912, the territory has direct representation. 1957 — Legislative Council is reorganized as the Legislative Assembly. 1965 — The Cook Islands become a self-governing territory in free association with New Zealand. Albert Henry, leader of the Cook Islands Party, is elected as the territory's first prime minister. 1974 — Albert Henry is knighted by Queen Elizabeth II 1979 — Sir Albert Henry is found guilty of electoral fraud and stripped of his premiership and his knighthood. Tom Davis becomes Premier. 1980 — Cook Islands – United States Maritime Boundary Treaty establishes the Cook Islands – American Samoa boundary 1981 — Constitution is amended. Legislative Assembly is renamed Parliament, which grows from 22 to 24 seats, and the parliamentary term is extended from four to five years. Tom Davis is knighted. 1984 — The country's first coalition government, between Sir Thomas and Geoffrey Henry, is signed in the lead up to hosting regional Mini Games in 1985. Shifting coalitions saw ten years of political instability. At one stage, all but two MPs were in government. 1985 — Rarotonga Treaty is opened for signing in the Cook Islands, creating a nuclear-free zone in the South Pacific. 1986 — In January 1986, following the rift between New Zealand and the US in respect of the ANZUS security arrangements Prime Minister Tom Davis declared the Cook Islands a neutral country, because he considered that New Zealand (which has control over the islands' defence and foreign policy) was no longer in a position to defend the islands. The proclamation of neutrality meant that the Cook Islands would not enter into a military relationship with any foreign power, and, in particular, would prohibit visits by US warships. Visits by US naval vessels were allowed to resume by Henry's Government. 1990 — Cook Islands – France Maritime Delimitation Agreement establishes the Cook Islands–French Polynesia boundary 1991 — The Cook Islands signed a treaty of friendship and co-operation with France, covering economic development, trade and surveillance of the islands' EEZ. The establishment of closer relations with France was widely regarded as an expression of the Cook Islands' Government's dissatisfaction with existing arrangements with New Zealand which was no longer in a position to defend the Cook Islands. 1995 — The French Government resumed its programme of nuclear-weapons testing at Mururoa Atoll in September 1995 upsetting the Cook Islands. New Prime Minister Geoffrey Henry was fiercely critical of the
to New Zealand. Southern Cook Islands Aitutaki Atiu Mangaia Manuae Mauke Mitiaro Palmerston Island Rarotonga (capital) Takutea Northern Cook Islands Manihiki Nassau Penrhyn atoll Pukapuka Rakahanga Suwarrow Statistics Area Total: Land: 236 km2 Water: 0 km2 Area - comparative 1.3 times the size of Washington, DC Coastline Maritime claims Territorial sea: Continental shelf: or to the edge of the continental margin
April to November and a more humid season from December to March Terrain Low coral atolls in north; volcanic, hilly islands in south Elevation extremes Lowest point: Pacific Ocean 0 m Highest point: Te Manga Natural resources coconuts Land use Arable land: 4.17% Permanent crops: 4.17% Other: 91.67% (2012 est.) Natural hazards Typhoons (November to March) Environment - international agreements Party to: Biodiversity, Climate Change-Kyoto Protocol, Desertification, Hazardous Wastes, Law
Factbook demographic statistics The following demographic statistics are from the CIA World Factbook, unless otherwise indicated. Population 9,290 Age structure (2017 est.) 0–14 years: 21.12% (male 1,154/female 1,025) 15–24 years: 16.63% (male 929/female 806) 25–54 years: 38.09% (male 1,876/female 1,867) 55–64 years: 11.99% (male 569/female 494) 65 years and over: 12.16% (male 551/female 567) Population growth rate -2.79% Birth rate 14 births/1,000 population Death rate 8.4 deaths/1,000 population Infant mortality rate Total: 13 deaths/1,000 live births Male: 15.8 deaths/1,000 live births Female: 10.1 deaths/1,000 live births Life expectancy at birth Total population: 76 years Male: 73.2 years Female: 79 years (2017 est.) Total fertility rate 2.19 children born/woman Nationality Cook Islander(s) (Noun) Cook Islander (Adjective) Ethnic groups Cook Island Maori (Polynesian) 81.3% part Cook Island Maori 6.7% Other 11.9%
A census is carried out every five years in the Cook Islands. The last census was carried out in 2016 and the next census will be carried out in December 2021.
sworn in as prime minister on 30 November 2010. Following uncertainty about the ability of the government to maintain its majority, the Queen's representative dissolved parliament midway through its term and a 'snap' election was held on 26 September 2006. Jim Marurai's Democratic Party retained the Treasury benches with an increased majority. The New Zealand High Commissioner is appointed by the New Zealand Government. Legislature The Parliament of the Cook Islands has 24 members, elected for a five-year term in single-seat constituencies. There is also a House of Ariki, composed of chiefs, which has a purely advisory role. The Koutu Nui is a similar organization consisting of sub-chiefs. It was established by an amendment in 1972 of the 1966 House of Ariki Act. The current president is Te Tika Mataiapo Dorice Reid. On June 13, 2008, a small majority of members of the House of Ariki attempted a coup, claiming to dissolve the elected government and to take control of the country's leadership. "Basically we are dissolving the leadership, the prime minister and the deputy prime minister and the ministers," chief Makea Vakatini Joseph Ariki explained. The Cook Islands Herald suggested that the ariki were attempting thereby to regain some of their traditional prestige or mana. Prime Minister Jim Marurai described the take-over move as "ill-founded and nonsensical". By June 23, the situation appeared to have normalised, with members of the House of Ariki accepting to return to their regular duties. Judiciary The judiciary is established by part IV of the Constitution, and consists of the High Court of the Cook Islands and the Cook Islands Court of Appeal. The Judicial Committee of the Privy Council serves as a final court of appeal. Judges are appointed by the Queen's Representative on the advice of the Executive Council as given by the Chief Justice and the Minister of Justice. Non-resident Judges are appointed for a three-year term; other Judges are appointed for life. Judges may be removed from office by the Queen's Representative on the recommendation of an investigative tribunal and only for inability to perform their office, or for misbehaviour. With regard to the legal profession, Iaveta Taunga o Te Tini Short was the first Cook Islander to establish a law practice in 1968. He would later become a Cabinet Minister (1978) and High Commissioner for the Cook Islands (1985). Political parties and elections Recent political history The 1999 election produced a hung Parliament. Cook Islands Party leader Geoffrey Henry remained prime minister, but was replaced after a month by Joe Williams following a coalition realignment. A further realignment three months later saw Williams replaced by Democratic Party leader Terepai Maoate. A third realignment saw Maoate replaced mid-term by his deputy Robert Woonton in 2002, who ruled with the backing of the CIP. The
The monarch is hereditary; her representative is appointed by the monarch on the recommendation of the Cook Islands Government. The cabinet is chosen by the prime minister and collectively responsible to Parliament. Ten years of rule by the Cook Islands Party (CIP) came to an end 18 November 1999 with the resignation of Prime Minister Joe Williams. Williams had led a minority government since October 1999 when the New Alliance Party (NAP) left the government coalition and joined the main opposition Democratic Party (DAP). On 18 November 1999, DAP leader Dr. Terepai Maoate was sworn in as prime minister. He was succeeded by his co-partisan Robert Woonton. When Dr Woonton lost his seat in the 2004 elections, Jim Marurai took over. In the 2010 elections, the CIP regained power and Henry Puna was sworn in as prime minister on 30 November 2010.
(2002) Electricity - production 28 GW·h (2003) Electricity - production by source Fossil fuel: 100% Hydro: 0% Nuclear: 0% Other: 0% (2001) Electricity - consumption 34.46 GW·h (2005 est) Electricity - exports 0 kW·h (2003) Electricity - imports 0 kW·h (2003) Oil consumption (2003) Agriculture - products Copra, citrus, pineapples, tomatoes, beans, pawpaws, bananas, yams, taro, coffee, pigs, poultry Exports $5.222 million (2005) Exports - commodities Copra, papayas, fresh and canned citrus fruit, coffee; fish; pearls and pearl shells; clothing Exports - partners Australia 34%, Japan 27%, New Zealand 25%, US 8% (2004) Imports $81.04 million (2005) Imports - commodities Foodstuffs, textiles, fuels, timber, capital goods Imports - partners New Zealand 61%, Fiji 19%, US 9%, Australia 6%, Japan 2% (2004) Debt - external $141 million (1996 est.) Economic aid - recipient $13.1 million (1995); note - New Zealand furnishes the greater part Currency 1 New Zealand dollar (NZ$) = 100 cents Exchange rates New Zealand dollars (NZ$) per US$1 - 1.4203 (2005), 1.9451 (January 2000), 1.8886 (1999), 1.8632 (1998), 1.5083 (1997), 1.4543 (1996), 1.5235 (1995) Fiscal year 1 April–31 March Telecommunications Telecom Cook Islands Ltd (TCI) is the sole provider of telecommunications in the Cook Islands. TCI is a private company owned by Spark New Zealand
Leaks. Trusts incorporated in the Cook Islands are used to provide anonymity and asset-protection. The Cook Islands also featured in the Panama Papers, Paradise Papers, and Pandora Papers financial leaks. Economist Vaine Nooana-Arioka has been executive director of the Bank of the Cook Islands since 2008. Economic statistics GDP Purchasing power parity - $183.2 million (2005 est.) GDP - real growth rate -0.05% (2005); -1.2% (2014); -1.7% (2013). Growth in the Cook Islands has slowed due to a lack of infrastructure projects and accommodation capacity constraints in the tourism sector. Cook Islands economic activity is expected to be flat in FY2016 and to grow by 0.2% in FY2017. Inflation 1.8% (FY2016); 2.0% (FY2017). Statistics Asian Development Bank GDP - per capita $9,100 (2005 estimate) GDP - composition by sector Agriculture: 78.9% Industry: 9.6% Services: 75.3% (2000) Population below poverty line 28.4% of the population lives below the national poverty line. Statistics 2016 Asian Development Bank Household income or consumption by percentage share Lowest 10%: NA% Highest 10%: NA% Inflation rate (consumer prices) 2.1% (2005 est.) Labor force 6,820 (2001) Labor force - by occupation Agriculture 29%, industry 15%, services 56% (1995) Unemployment rate 13.1% (2005) Budget Revenues: $70.95 million Expenditures: $69.05 million; including capital expenditures of $5.744 million (FY00/01 est.) Industries Fruit processing, tourism,
In March 2017, 4G+ was launched in Rarotonga with LTE700 (B28A) and LTE1800 (B3). Aitutaki was covered by a GSM/GPRS mobile data service on GSM 900 from 2006 to 2013; in 2014, 3G UMTS 900 with HSPA+ was introduced. In March 2017, 4G+ was also launched in Aitutaki with LTE700 (B28A). Mobile coverage in the rest of the Outer Islands (Pa Enua) was established in 2007 on GSM 900: in the Southern Group (Pa Enua Tonga), three villages on Mangaia (Oneroa, Ivirua, Tamarua) plus Atiu, Mauke, Mitiaro and Palmerston; and in the Northern Group (Pa Enua Tokerau), Nassau, Pukapuka, Rakahanga, two villages on Manihiki (Tukao, Tauhunu) and two villages on Penrhyn (Omoka, Tetautua). The Cook Islands uses the country calling code +682. Broadcasting There are six radio stations in the Cook Islands, with one reaching all islands. There were 14,000 radios. Cook Islands Television broadcasts from Rarotonga, providing a mix of local news and overseas-sourced programs. There were 4,000 television sets. Internet There were 6,000 Internet users in 2009 and 3,562 Internet hosts as of 2012. The country code top-level domain for the Cook Islands is .ck. In June 2010, Telecom Cook Islands partnered with O3b Networks, Ltd. to provide a faster Internet connection to the Cook Islands. On 25 June 2013 the O3b satellite constellation was launched from an Arianespace Soyuz ST-B rocket in French Guiana. The satellites operate in medium Earth orbit and use the Ka band. The constellation has a latency of about 100 milliseconds because it is much closer to Earth than
article lists transport in the Cook Islands. Road transport The Cook Islands uses left-handed traffic. The maximum speed limit is 50 km/h. On the main island of Rarotonga, there are no traffic lights and only two roundabouts. A bus operates clockwise and anti-clockwise services around the islands coastal ring-road. Road safety is poor. In 2011, the Cook Islands had the second-highest per-capita road deaths in the world. In 2018, crashes neared a record high, with speeding, alcohol and careless behaviour being the main causes. Motor-scooters are a common form of transport, but there was no requirement for helmets, making them a common cause of death and injuries. Legislation requiring helmets was passed in 2007, but scrapped in early 2008 before it came into force. In 2016 a law was passed requiring visitors and riders aged 16 to 25 to wear helmets, but it was widely flouted. In March 2020 the Cook Islands parliament again legislated for compulsory helmets to be worn from June 26, but implementation was delayed until July 31, and then until September 30. Highways Total: 295 km (2018) Paved: 207 km (2018) Unpaved: 88 km (2018) Rail transport The Cook Islands has no effective rail transport. Rarotonga had a 170m tourist railway, the Rarotonga Steam Railway, but it is no longer in working condition. Water transport The Cook Islands have a long history of sea transport. The islands were colonised from Tahiti, and in turn colonised New Zealand in ocean-going waka. In the late nineteenth century, following European contact, the islands had a significant fleet of schooners, which they
Roadmap, and issued a tender for a Pa Enua Shipping Charter. The Cook Islands operates an open ship registry and has been placed on the Paris Memorandum of Understanding on Port State Control Black List as a flag of convenience. Ships registered in the Cook Islands have been used to smuggle oil from Iran in defiance of international sanctions. In February 2021 two ships were removed from the shipping register for concealing their movements by turning their Automatic identification system off. Ports and harbours Container ports: Avatiu Other ports: Avarua (Rarotonga), Arutanga (Aitutaki) The smaller islands have passages through their reefs, but these are unsuitable for large vessels. Merchant marine total: 205 by type: bulk carrier 21, container ship 3, general cargo 85, oil tanker 33, other 63 (2019) country comparison to the world: 65 Air transport The Cook Islands is served by one domestic airline, Air Rarotonga. A further three foreign airlines provide international service. Airports There is one international airport, Rarotonga International Airport. Eight airports provide
user. For example, in a word-processing program, the user manipulates document files that the user personally names. Although the content of the document file is arranged in a format that the word-processing program understands, the user is able to choose the name and location of the file and provide the bulk of the information (such as words and text) that will be stored in the file. Many applications pack all their data files into a single file called an archive file, using internal markers to discern the different types of information contained within. The benefits of an archive file are to lower the number of files for easier transfer, to reduce storage usage, or just to organize outdated files. An archive file must often be unpacked before it can be used. Operations The most basic operations that programs can perform on a file are:
Create a new file
Change the access permissions and attributes of a file
Open a file, which makes the file contents available to the program
Read data from a file
Write data to a file
Delete a file
Close a file, terminating the association between it and the program
Truncate a file, shortening it to a specified size within the file system without rewriting any content
Files on a computer can be created, moved, modified, grown, shrunk (truncated), and deleted. In most cases, computer programs that are executed on the computer handle these operations, but the user of a computer can also manipulate files if necessary. For instance, Microsoft Word files are normally created and modified by the Microsoft Word program in response to user commands, but the user can also move, rename, or delete these files directly by using a file manager program such as Windows Explorer (on Windows computers) or from the command line (CLI). In Unix-like systems, user space programs do not operate directly, at a low level, on a file. Only the kernel deals with files, and it handles all user-space interaction with files in a manner that is transparent to the user-space programs. The operating system provides a level of abstraction, which means that interaction with a file from user-space is simply through its filename (instead of its inode). For example, rm filename will not delete the file itself, but only a link to the file. There can be many links to a file, but when they are all removed, the kernel considers that file's memory space free to be reallocated. This free space is commonly considered a security risk (due to the existence of file recovery software). Any secure-deletion program uses kernel-space (system) functions to wipe the file's data. File moves within a file system complete almost immediately because the data content does not need to be rewritten; only the paths need to be changed. Moving methods There are two distinct implementations of file moves. When moving files between devices or partitions, some file managing software deletes each selected file from the source directory individually after it has been transferred, while other software deletes all files at once, only after every file has been transferred. With the mv command, for instance, the former method is used when selecting files individually, possibly with the use of wildcards (example: mv -n sourcePath/* targetPath), while the latter method is used when selecting entire directories (example: mv -n sourcePath targetPath). Microsoft Windows Explorer uses the former method for mass storage file moves, but the latter method when using the Media Transfer Protocol.
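As a rough sketch of the two approaches just described (a simplified illustration only, not how mv or Windows Explorer is actually implemented; the function names and paths are invented for the example), the difference lies solely in when each source file is deleted:

import os
import shutil

def move_deleting_each(src_paths, dst_dir):
    # Former method: copy one file, then delete its source immediately,
    # so space on the source device is freed as the transfer progresses.
    for src in src_paths:
        shutil.copy2(src, os.path.join(dst_dir, os.path.basename(src)))
        os.remove(src)

def move_deleting_afterwards(src_paths, dst_dir):
    # Latter method: copy the whole selection first, and delete the
    # sources only once every file has been transferred.
    for src in src_paths:
        shutil.copy2(src, os.path.join(dst_dir, os.path.basename(src)))
    for src in src_paths:
        os.remove(src)

If move_deleting_afterwards is interrupted partway through, none of the source files has been deleted yet, which is exactly the recovery situation discussed below.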
The former method (individual deletion from source) has the benefit that space is released from the source device or partition immediately after the transfer has begun, meaning after the first file is finished. With the latter method, space is only freed after the transfer of the entire selection has finished. If an incomplete file transfer with the latter method is aborted, perhaps due to an unexpected power-off, system halt or disconnection of a device, no space will have been freed up on the source device or partition. The user would need to merge the remaining files from the source, including the incompletely written (truncated) last file. With the individual deletion method, the file moving software also does not need to cumulatively keep track of all files finished transferring for the case that a user manually aborts the file transfer. A file manager using the latter (afterwards deletion) method will only have to delete the files from the source directory that have already finished transferring. Identifying and organizing In modern computer systems, files are typically accessed using names (filenames). In some operating systems, the name is associated with the file itself. In others, the file is anonymous, and is pointed to by links that have names. In the latter case, a user can identify the name of the link with the file itself, but this is a false analogue, especially where there exists more than one link to the same file. Files (or links to files) can be located in directories. However, more generally, a directory can contain either a list of files or a list of links to files. Within this definition, it is of paramount importance that the term "file" includes directories. This permits the existence of directory hierarchies, i.e., directories containing sub-directories. A name that refers to a file within a directory must typically be unique. In other words, there must be no identical names within a directory. However, in some operating systems, a name may include a specification of type that means a directory can contain an identical name for more than one type of object such as a directory and a file. In environments in which a file is named, a file's name and the path to the file's directory must uniquely identify it among all other files in the computer system—no two files can have the same name and path. Where a file is anonymous, named references to it will exist within a namespace. In most cases, any name within the namespace will refer to exactly zero or one file. However, any file may be represented within any namespace by zero, one or more names. Any string of characters may be a well-formed name for a file or a link depending upon the context of application. Whether or not a name is well-formed depends on the type of computer system being used. Early computers permitted only a few letters or digits in the name of a file, but modern computers allow long names (some up to 255 characters) containing almost any combination of Unicode letters or digits, making it easier to understand the purpose of a file at a glance. Some computer systems allow file names to contain spaces; others do not. Case-sensitivity of file names is determined by the file system. Unix file systems are usually case-sensitive and allow user-level applications to create files whose names differ only in the case of characters. Microsoft Windows supports multiple file systems, each with different policies regarding case-sensitivity.
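Whether two names differing only in case denote one file or two can be observed directly. The following sketch assumes nothing beyond the Python standard library; the file names are arbitrary examples:

import os

for name in ("Readme.txt", "readme.txt"):
    with open(name, "w") as f:   # on a case-insensitive file system the second open reuses the first file
        f.write("created via the name " + name + "\n")

# Count directory entries matching the name ignoring case:
# 2 on a case-sensitive file system (typical for Unix), 1 on a case-insensitive one (Windows defaults).
print(sum(1 for entry in os.listdir(".") if entry.lower() == "readme.txt"))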
The common FAT file system can have multiple files whose names differ only in case if the user uses a disk editor to edit the file names in the directory entries. User applications, however, will usually not allow the user to create multiple files with the same name but differing in case. Most computers organize files into hierarchies using folders, directories, or catalogs. The concept is the same irrespective of the terminology used. Each folder can contain an arbitrary number of files, and it can also contain other folders. These other folders are referred to as subfolders. Subfolders can contain still more files and folders and so on, thus building a tree-like structure in which one "master folder" (or "root folder" — the name varies from one operating system to another) can contain any number of levels of other folders and files. Folders can be named just as files can (except for the root folder, which often does not have
of the early Hollerith Tabulator in astronomy was made by Comrie. He used it for building a table from successive differences, and for adding large numbers of harmonic terms". "Tables of functions are constructed from their differences with great efficiency, either as printed tables or as a file of punched cards." In February 1950, in a Radio Corporation of America (RCA) advertisement in Popular Science magazine describing a new "memory" vacuum tube it had developed, RCA stated: "the results of countless computations can be kept 'on file' and taken out again. Such a 'file' now exists in a 'memory' tube developed at RCA Laboratories. Electronically it retains figures fed into calculating machines, holds them in storage while it memorizes new ones – speeds intelligent solutions through mazes of mathematics." In 1952, "file" denoted, among other things, information stored on punched cards. In early use, the underlying hardware, rather than the contents stored on it, was denominated a "file". For example, the IBM 350 disk drives were denominated "disk files". The introduction, circa 1961, by the Burroughs MCP and the MIT Compatible Time-Sharing System of the concept of a "file system" that managed several virtual "files" on one storage device is the origin of the contemporary denotation of the word. Although the contemporary "register file" demonstrates the early concept of files, its use has greatly decreased. File contents On most modern operating systems, files are organized into one-dimensional arrays of bytes. The format of a file is defined by its content since a file is solely a container for data. On some platforms the format is indicated by its filename extension, specifying the rules for how the bytes must be organized and interpreted meaningfully. For example, the bytes of a plain text file (.txt in Windows) are associated with either ASCII or UTF-8 characters, while the bytes of image, video, and audio files are interpreted otherwise. Most file types also allocate a few bytes for metadata, which allows a file to carry some basic information about itself. Some file systems can store arbitrary (not interpreted by the file system) file-specific data outside of the file format, but linked to the file, for example extended attributes or forks. On other file systems this can be done via sidecar files or software-specific databases. All those methods, however, are more susceptible to loss of metadata than container and archive file formats. File size At any instant in time, a file has a size, normally expressed as a number of bytes, that indicates how much storage is associated with the file. In most modern operating systems the size can be any non-negative whole number of bytes up to a system limit. Many older operating systems kept track only of the number of blocks or tracks occupied by a file on a physical storage device. In such systems, software employed other methods to track the exact byte count (e.g., CP/M used a special control character, Ctrl-Z, to signal the end of text files). The general definition of a file does not require that its size have any real meaning, however, unless the data within the file happens to correspond to data within a pool of persistent storage. A special case is a zero-byte file; these files can be newly created files that have not yet had any data written to them, or may serve as some kind of flag in the file system, or are accidents (the results of aborted disk operations).
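In everyday use this size bookkeeping is what stat-style system calls report. A minimal Python sketch (the file name is illustrative):

import os

path = "example.bin"

open(path, "wb").close()            # a newly created, never-written file is a zero-byte file
print(os.path.getsize(path))        # prints 0

with open(path, "wb") as f:
    f.write(b"\x00" * 1024)         # store 1024 bytes in the file
print(os.path.getsize(path))        # prints 1024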
For example, the file to which the link points in a typical Unix-like system probably has a defined size that seldom changes. Compare this with /dev/null, which is also a file, but as a character special file, its size is not meaningful. Organization of data in a file Information in a computer file can consist of smaller packets of information (often called "records" or "lines") that are individually different but share some common traits. For example, a payroll file might contain information concerning all the employees in a company and their payroll details; each record in the payroll file concerns just one employee, and all the records have the common trait of being related to payroll—this is very similar to placing all payroll information into a specific filing cabinet in an office that does not have a computer. A text file may contain lines of text, corresponding to printed lines on a piece of paper. Alternatively, a file may contain an arbitrary binary image (a blob) or it may contain an executable. The way information is grouped into a file is entirely up to how it is designed. This has led to a plethora of more or less standardized file structures for all imaginable purposes, from the simplest to the most complex. Most computer files are used by computer programs which create, modify or delete the files for their own use on an as-needed basis. The programmers who create the programs decide what files are needed, how they are to be used and (often) their names. In some cases, computer programs manipulate files that are made visible to the computer user.
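Whatever the purpose, the basic operations listed earlier (create, open, write, read, truncate, close, delete) map onto routine standard-library calls in most languages. A minimal Python sketch, with an illustrative file name:

import os

path = "notes.txt"

with open(path, "w") as f:           # create the file and open it for writing
    f.write("first line\n")          # write data
    f.write("second line\n")         # the file is closed when the block exits

with open(path) as f:                # open an existing file for reading
    print(f.read())                  # read its contents

with open(path, "r+") as f:
    f.truncate(5)                    # shorten the file to 5 bytes without rewriting its content

os.chmod(path, 0o600)                # change the access permissions
os.remove(path)                      # delete the file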
Sri Lanka Science and technology Biology and medicine Clinical Infectious Diseases, a medical journal Cytomegalic inclusion disease Chemistry Collision-induced dissociation, a mass spectrometry mechanism Compound identification number, a field in the PubChem database Configuration interaction doubles, in quantum chemistry Computing and telecommunications Caller ID, a telephone service that transmits the caller's telephone number to the called party Card Identification Number, a security feature on credit cards Cell ID, used to identify cell phone towers of the Universidad de La Habana Certified Interconnect Designer, a certification for printed circuit-board designers CID fonts, a font file format Other uses in science and technology Channel-iron deposits, one of the major sources of saleable iron ore Controlled Impact Demonstration, a project to improve aircraft crash survivability Cubic inch displacement, a measurement in internal combustion engines Other uses Centro Insular de Deportes, an indoor sports arena in Spain Combat Identification, the accurate characterization of detected objects for military action Common-interest development, a form of housing Community improvement district,
Salzburg (today Austria) in 1803. After completing high school, Doppler studied philosophy in Salzburg and mathematics and physics at the Imperial–Royal Polytechnic Institute (now TU Wien), where he became an assistant in 1829. In 1835 he began work at the Prague Polytechnic (now Czech Technical University in Prague), where he received an appointment in 1841. One year later, at the age of 38, Doppler gave a lecture to the Royal Bohemian Society of Sciences and subsequently published his most notable work, Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels ("On the coloured light of the binary stars and some other stars of the heavens"). There is a facsimile edition with an English translation by Alec Eden. In this work, Doppler postulated his principle (later coined the Doppler effect) that the observed frequency of a wave depends on the relative speed of the source and the observer, and he later tried to use this concept for explaining the colour of binary stars. Physicist Armand Hippolyte Louis Fizeau () also contributed to aspects of the discovery of the Doppler effect, which is known by the French as the Doppler-Fizeau Effect. Fizeau contributed towards understanding its effect with light and also developed formal mathematical theorems underlying the principles of this effect. In 1848, he predicted the frequency shift of a wave when the source and receiver are moving relative to each other, therefore, being the first to predict blue shifts and red shifts of spectral lines in stars. Doppler continued working as a professor at the Prague Polytechnic, publishing over 50 articles on mathematics, physics and astronomy, but in 1847 he left Prague for the professorship of mathematics, physics, and mechanics at the Academy of Mines and Forests (its successor is the University of Miskolc) in Selmecbánya (then Kingdom of Hungary, now Banská Štiavnica, Slovakia), and in 1849 he moved to Vienna. Doppler's research was interrupted by the revolutionary incidents of 1848. During the Hungarian Revolution, he fled to Vienna. There he was appointed head of
the Institute for Experimental Physics at the University of Vienna in 1850. While there, Doppler, along with Franz Unger, influenced the development of young Gregor Mendel, the founding father of genetics, who was a student at the University of Vienna from 1851 to 1853. Doppler died on 17 March 1853 at age 49 from a pulmonary disease in Venice (at that time part of the Austrian Empire). His tomb, found by Dr. Peter M. Schuster, is just inside the entrance of the Venetian island cemetery of San Michele. Full name Some confusion exists about Doppler's full name. Doppler referred to himself as Christian Doppler. The records of his birth and baptism stated Christian Andreas Doppler. Forty years after Doppler's death the misnomer Johann Christian Doppler was introduced by the astronomer Julius Scheiner. Scheiner's mistake has since been copied by many. Works Christian Doppler (1803–1853). Wien: Böhlau, 1992. Bd. 1: 1. Teil: Helmuth Grössing (unter Mitarbeit von B. Reischl): Wissenschaft, Leben, Umwelt, Gesellschaft; 2. Teil: Karl Kadletz (unter Mitarbeit von Peter Schuster und Ildikó Cazan-Simányi) Quellenanhang. Bd. 2: 3. Teil: Peter Schuster: Das Werk. See also List of Austrian scientists List of Austrians List of minor planets named after people References Further reading Alec Eden: Christian Doppler: Leben und Werk. Salzburg: Landespressebureau, 1988. Hoffmann, Robert (2007). The Life of an (almost) Unknown Person. Christian Doppler's Youth in Salzburg and Vienna. In: Ewald Hiebl, Maurizio Musso (Eds.), Christian Doppler –
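For reference, the principle described above can be summarized in its modern, non-relativistic textbook form for a wave travelling through a medium (this is the standard formulation, not Doppler's original notation):

\[ f_{\mathrm{obs}} = f_{\mathrm{src}}\,\frac{c + v_{\mathrm{receiver}}}{c - v_{\mathrm{source}}} \]

where c is the propagation speed of the wave in the medium, v_receiver is the speed of the observer toward the source, and v_source is the speed of the source toward the observer. Approach raises the observed frequency and recession lowers it, which for starlight corresponds to the blue shifts and red shifts Fizeau predicted; light itself requires a relativistic treatment, but the qualitative behaviour is the same.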
16-year-old pupil at St Paul's School in London, the lines of his first clerihew, about Humphry Davy, came into his head during a science class. Together with his schoolfriends, he filled a notebook with examples. The first known use of the word in print dates from 1928. Bentley published three volumes of his own clerihews: Biography for Beginners (1905), published as "edited by E. Clerihew"; More Biography (1929); and Baseless Biography (1939), a compilation of clerihews originally published in Punch illustrated by the author's son Nicolas Bentley. G. K. Chesterton, a friend of Bentley, was also a practitioner of the clerihew and one of the sources of its popularity. Chesterton provided verses and illustrations for the original schoolboy notebook and illustrated Biography for Beginners. Other serious authors also produced clerihews, including W. H. Auden, and it remains a popular humorous form among other writers and the general public. Among contemporary writers, the satirist Craig Brown has made considerable use of the clerihew in his columns for The Daily Telegraph. There has been newfound popularity of the form on Twitter. Examples Bentley's first clerihew, published in 1905, was written about Sir Humphry Davy: The original poem had the second line "Was not fond of gravy"; but the published version has "Abominated gravy". Other clerihews by Bentley include: and W. H. Auden's Academic Graffiti (1971) includes: Satirical magazine Private Eye noted Auden's work
is the name of the poem's subject, usually a famous person put in an absurd light, or revealing something unknown or spurious about them. The rhyme scheme is AABB, and the rhymes are often forced. The line length and metre are irregular. Bentley invented the clerihew in school and then popularized it in books. One of his best known is this (1905): Form A clerihew has the following properties: It is biographical and usually whimsical, showing the subject from an unusual point of view; it mostly pokes fun at famous people It has four lines of irregular length and metre for comic effect The rhyme structure is AABB; the subject matter and wording are often humorously contrived in order to achieve a rhyme, including the use of phrases in Latin, French and other non-English languages The first line contains, and may consist solely of, the subject's name. According to a letter in The Spectator in the 1960s, Bentley said that a true clerihew has to have the name "at the end of the first line", as the whole point was the skill in rhyming awkward names. Clerihews are not satirical or abusive, but they target famous individuals
of the 138 intrastate conflicts between the end of World War II and 2000 saw international intervention, with the United States intervening in 35 of these conflicts. A civil war is a high-intensity conflict, often involving regular armed forces, that is sustained, organized and large-scale. Civil wars may result in large numbers of casualties and the consumption of significant resources. Civil wars since the end of World War II have lasted on average just over four years, a dramatic rise from the one-and-a-half-year average of the 1900–1944 period. While the rate of emergence of new civil wars has been relatively steady since the mid-19th century, the increasing length of those wars has resulted in increasing numbers of wars ongoing at any one time. For example, there were no more than five civil wars underway simultaneously in the first half of the 20th century while there were over 20 concurrent civil wars close to the end of the Cold War. Since 1945, civil wars have resulted in the deaths of over 25 million people, as well as the forced displacement of millions more. Civil wars have further resulted in economic collapse; Somalia, Burma (Myanmar), Uganda and Angola are examples of nations that were considered to have had promising futures before being engulfed in civil wars. Formal classification James Fearon, a scholar of civil wars at Stanford University, defines a civil war as "a violent conflict within a country fought by organized groups that aim to take power at the center or in a region, or to change government policies". Ann Hironaka further specifies that one side of a civil war is the state. Stathis Kalyvas defines civil war as "armed combat taking place within the boundaries of a recognized sovereign entity between parties that are subject to a common authority at the outset of the hostilities." The intensity at which a civil disturbance becomes a civil war is contested by academics. Some political scientists define a civil war as having more than 1,000 casualties, while others further specify that at least 100 must come from each side. The Correlates of War, a dataset widely used by scholars of conflict, classifies civil wars as having over 1000 war-related casualties per year of conflict. This rate is a small fraction of the millions killed in the Second Sudanese Civil War and Cambodian Civil War, for example, but excludes several highly publicized conflicts, such as The Troubles of Northern Ireland and the struggle of the African National Congress in Apartheid-era South Africa. Based on the 1,000-casualties-per-year criterion, there were 213 civil wars from 1816 to 1997, 104 of which occurred from 1944 to 1997. If one uses the less-stringent 1,000 casualties total criterion, there were over 90 civil wars between 1945 and 2007, with 20 ongoing civil wars as of 2007. The Geneva Conventions do not specifically define the term "civil war"; nevertheless, they do outline the responsibilities of parties in "armed conflict not of an international character". This includes civil wars; however, no specific definition of civil war is provided in the text of the Conventions. Nevertheless, the International Committee of the Red Cross has sought to provide some clarification through its commentaries on the Geneva Conventions, noting that the Conventions are "so general, so vague, that many of the delegations feared that it might be taken to cover any act committed by force of arms". 
Accordingly, the commentaries provide for different 'conditions' on which the application of the Geneva Convention would depend; the commentary, however, points out that these should not be interpreted as rigid conditions. The conditions listed by the ICRC in its commentary are as follows: That the Party in revolt against the de jure Government possesses an organized military force, an authority responsible for its acts, acting within a determinate territory and having the means of respecting and ensuring respect for the Convention. That the legal Government is obliged to have recourse to the regular military forces against insurgents organized as military and in possession of a part of the national territory. (a) That the de jure Government has recognized the insurgents as belligerents; or (b) That it has claimed for itself the rights of a belligerent; or (c) That it has accorded the insurgents recognition as belligerents for the purposes only of the present Convention; or (d) That the dispute has been admitted to the agenda of the Security Council or the General Assembly of the United Nations as being a threat to international peace, a breach of the peace, or an act of aggression. (a) That the insurgents have an organization purporting to have the characteristics of a State. (b) That the insurgent civil authority exercises de facto authority over the population within a determinate portion of the national territory. (c) That the armed forces act under the direction of an organized authority and are prepared to observe the ordinary laws of war. (d) That the insurgent civil authority agrees to be bound by the provisions of the Convention. Causes According to a 2017 review study of civil war research, there are three prominent explanations for civil war: greed-based explanations which center on individuals’ desire to maximize their profits, grievance-based explanations which center on conflict as a response to socioeconomic or political injustice, and opportunity-based explanations which center on factors that make it easier to engage in violent mobilization. According to the study, the most influential explanation for civil war onset is the opportunity-based explanation by James Fearon and David Laitin in their 2003 American Political Science Review article. Greed Scholars investigating the cause of civil war are attracted by two opposing theories, greed versus grievance. Roughly stated: are conflicts caused by differences of ethnicity, religion or other social affiliation, or do conflicts begin because it is in the economic best interests of individuals and groups to start them? Scholarly analysis supports the conclusion that economic and structural factors are more important than those of identity in predicting occurrences of civil war. A comprehensive study of civil war was carried out by a team from the World Bank in the early 21st century. The study framework, which came to be called the Collier–Hoeffler Model, examined 78 five-year increments when civil war occurred from 1960 to 1999, as well as 1,167 five-year increments of "no civil war" for comparison, and subjected the data set to regression analysis to see the effect of various factors. The factors that were shown to have a statistically significant effect on the chance that a civil war would occur in any given five-year period were: A high proportion of primary commodities in national exports significantly increases the risk of a conflict. 
A country at "peak danger", with commodities comprising 32% of gross domestic product, has a 22% risk of falling into civil war in a given five-year period, while a country with no primary commodity exports has a 1% risk. When disaggregated, only petroleum and non-petroleum groupings showed different results: a country with relatively low levels of dependence on petroleum exports is at slightly less risk, while a high level of dependence on oil as an export results in slightly more risk of a civil war than national dependence on another primary commodity. The authors of the study interpreted this as being the result of the ease by which primary commodities may be extorted or captured compared to other forms of wealth; for example, it is easy to capture and control the output of a gold mine or oil field compared to a sector of garment manufacturing or hospitality services. A second source of finance is national diasporas, which can fund rebellions and insurgencies from abroad. The study found that statistically switching the size of a country's diaspora from the smallest found in the study to the largest resulted in a sixfold increase in the chance of a civil war. Higher male secondary school enrollment, per capita income and economic growth rate all had significant effects on reducing the chance of civil war. Specifically, a male secondary school enrollment 10% above the average reduced the chance of a conflict by about 3%, while a growth rate 1% higher than the study average resulted in a decline in the chance of a civil war of about 1%. The study interpreted these three factors as proxies for earnings forgone by rebellion, and therefore that lower forgone earnings encourage rebellion. Phrased another way: young males (who make up the vast majority of combatants in civil wars) are less likely to join a rebellion if they are getting an education or have a comfortable salary, and can reasonably assume that they will prosper in the future. Low per capita income has been proposed as a cause for grievance, prompting armed rebellion. However, for this to be true, one would expect economic inequality to also be a significant factor in rebellions, which it is not. The study therefore concluded that the economic model of opportunity cost better explained the findings. Grievance Most proxies for "grievance"—the theory that civil wars begin because of issues of identity, rather than economics—were statistically insignificant, including economic equality, political rights, ethnic polarization and religious fractionalization. Only ethnic dominance, the case where the largest ethnic group comprises a majority of the population, increased the risk of civil war. A country characterized by ethnic dominance has nearly twice the chance of a civil war. However, the combined effects of ethnic and religious fractionalization, i.e. the greater chance that any two randomly chosen people will be from separate ethnic or religious groups, the less chance of a civil war, were also significant and positive, as long as the country avoided ethnic dominance. The study interpreted this as stating that minority groups are more likely to rebel if they feel that they are being dominated, but that rebellions are more likely to occur the more homogeneous the population and thus more cohesive the rebels. These two factors may thus be seen as mitigating each other in many cases. 
Criticism of the "greed versus grievance" theory David Keen, a professor at the Development Studies Institute at the London School of Economics is one of the major critics of greed vs. grievance theory, defined primarily by Paul Collier, and argues the point that a conflict, although he cannot define it, cannot be pinpointed to simply one motive. He believes that conflicts are much more complex and thus should not be analyzed through simplified methods. He disagrees with the quantitative research methods of Collier and believes a stronger emphasis should be put on personal data and human perspective of the people in conflict. Beyond Keen, several other authors have introduced works that either disprove greed vs. grievance theory with empirical data, or dismiss its ultimate conclusion. Authors such as Cristina Bodea and Ibrahim Elbadawi, who co-wrote the entry, "Riots, coups and civil war: Revisiting the greed and grievance debate", argue that empirical data can disprove many of the proponents of greed theory and make the idea "irrelevant". They examine a myriad of factors and conclude that too many factors come into play with conflict, which cannot be confined to simply greed or grievance. Anthony Vinci makes a strong argument that "fungible concept of power and the primary motivation of survival provide superior explanations of armed group motivation and, more broadly, the conduct of internal conflicts". Opportunities James Fearon and David Laitin find that ethnic and religious diversity does not make civil war more likely. They instead find that factors that make it easier for rebels to recruit foot soldiers and sustain insurgencies, such as "poverty—which marks financially & bureaucratically weak states and also favors rebel recruitment—political instability, rough terrain, and large populations" make civil wars more likely. Such research finds that civil wars happen because the state is weak; both authoritarian and democratic states can be stable if they have the financial and military capacity to put down rebellions. Other causes Bargaining problems In a state torn by civil war, the contesting powers often do not have the ability to commit or the trust to believe in the other side's commitment to put an end to war. When considering a peace agreement, the involved parties are aware of the high incentives to withdraw once one of them has taken an action that weakens their military, political or economical power. Commitment problems may deter a lasting peace agreement as the powers in question are aware that neither of them is able to commit to their end of the bargain in the future. States are often unable to escape conflict traps (recurring civil war conflicts) due to the lack of strong political and legal institutions that motivate bargaining, settle disputes, and enforce peace settlements. Governance Political scientist Barbara Walter suggests that most contemporary civil wars are actually repeats of earlier civil wars that often arise when leaders are not accountable to the public, when there is poor public participation in politics, and when there is a lack of transparency of information between the executives and the public. Walter argues that when these issues are properly reversed, they act as political and legal restraints on executive power forcing the established government to better serve the people. Additionally, these political and legal restraints create a standardized avenue to influence government and increase the commitment credibility of established peace treaties. 
It is the strength of a nation’s institutionalization and good governance—not the presence of democracy nor the poverty level—that is the number one indicator of the chance of a repeat civil war, according to Walter. Military advantage High levels of population dispersion and, to a lesser extent, the presence of mountainous terrain, increased the chance of conflict. Both of these factors favor rebels, as a population dispersed outward toward the borders is harder to control than one concentrated in a central region, while mountains offer terrain where rebels can seek sanctuary. Rough terrain was highlighted as one of the more important factors in a 2006 systematic review. Population size The various factors contributing to the risk of civil war rise increase with population size. The risk of a civil war rises approximately proportionately with the size of a country's population. Poverty There is a correlation between poverty and civil war, but the causality (which causes the other) is unclear. Some studies have found that in regions with lower income per capita, the likelihood of civil war is greater. Economists Simeon Djankov and Marta Reynal-Querol argue that the correlation is spurious, and that lower income and heightened conflict are instead products of other phenomena. In contrast, a study by Alex Braithwaite and colleagues showed systematic evidence of "a causal arrow running from poverty to conflict". Inequality While there is a supposed negative correlation between absolute welfare levels and the probability of civil war outbreak, relative deprivation may actually be a more pertinent possible cause. Historically, higher inequality levels led to higher civil war probability. Since colonial rule or population size are known to increase civil war risk, also, one may conclude that "the discontent of the colonized, caused by the creation of borders across tribal lines and bad treatment by the colonizers" is one important cause of civil conflicts. Time The more time that has elapsed since the last civil war, the less likely it is that a conflict will recur. The study had two possible explanations for this: one opportunity-based and the other grievance-based. The elapsed time may represent the depreciation of whatever capital the rebellion was fought over and thus increase the opportunity cost of restarting the conflict.
Alternatively, elapsed time may represent the gradual process of healing of old hatreds.
Donald Michie UK, GC&CS, Bletchley Park worked on Cryptanalysis of the Lorenz cipher and the Colossus computer. Max Newman, UK, GC&CS, Bletchley Park headed the section that developed the Colossus computer for Cryptanalysis of the Lorenz cipher. Georges Painvin French, broke the ADFGVX cipher during the First World War. Marian Rejewski, Poland, Biuro Szyfrów, a Polish mathematician and cryptologist who, in 1932, solved the Enigma machine with plugboard, the main cipher device then in use by Germany. John Joseph Rochefort US, made major contributions to the break into JN-25 after the attack on Pearl Harbor. Leo Rosen US, SIS, deduced that the Japanese Purple machine was built with stepping switches. Frank Rowlett US, SIS, leader of the team that broke Purple. Jerzy Różycki, Poland, Biuro Szyfrów, helped break German Enigma ciphers. Luigi Sacco, Italy, Italian General and author of the Manual of Cryptography. Laurance Safford US, chief cryptographer for the US Navy for 2 decades+, including World War II. Abraham Sinkov US, SIS. John Tiltman UK, Brigadier, Room 40, GC&CS, Bletchley Park, GCHQ, NSA. Extraordinary length and range of cryptographic service Alan Mathison Turing UK, GC&CS, Bletchley Park where he was chief cryptographer, inventor of the Bombe that was used in decrypting Enigma, mathematician, logician, and renowned pioneer of Computer Science. William Thomas Tutte UK, GC&CS, Bletchley Park, with John Tiltman, broke Lorenz SZ 40/42 encryption machine (codenamed Tunny) leading to the development of the Colossus computer. William Stone Weedon, US, Gordon Welchman UK, GC&CS, Bletchley Park where he was head of Hut Six (German Army and Air Force Enigma cipher. decryption), made an important contribution to the design of the Bombe. Herbert Yardley US, MI8 (US), author "The American Black Chamber", worked in China as a cryptographer and briefly in Canada. Henryk Zygalski, Poland, Biuro Szyfrów, helped break German Enigma ciphers. Karl Stein German, Head of the Division IVa (security of own processes) at Cipher Department of the High Command of the Wehrmacht. Discoverer of Stein manifold. Gisbert Hasenjaeger German, Tester of the Enigma. Discovered new proof of the completeness theorem of Kurt Gödel for predicate logic. Heinrich Scholz German, Worked in Division IVa at OKW. Logician and pen friend of Alan Turning. Gottfried Köthe German, Cryptanalyst at OKW. Mathematician created theory of topological vector spaces. Ernst Witt German, Mathematician at OKW. Mathematical Discoveries Named After Ernst Witt. Helmut Grunsky German, worked in complex analysis and geometric function theory. He introduced Grunsky's theorem and the Grunsky inequalities. Georg Hamel. Oswald Teichmüller German, Temporarily employed at OKW as cryptanalyst. Introduced quasiconformal mappings and differential geometric methods into complex analysis. Described by Friedrich L. Bauer as an extreme Nazi and a true genius. Hans Rohrbach German, Mathematician at AA/Pers Z, the German department of state, civilian diplomatic cryptological agency. Wolfgang Franz German, Mathematician who worked at OKW. Later significant discoveries in Topology. Werner Weber German, Mathematician at OKW. Georg Aumann German, Mathematician at OKW. His doctoral student was Friedrich L. Bauer. Otto Leiberich German, Mathematician who worked as a linguist at the Cipher Department of the High Command of the Wehrmacht. Alexander Aigner German, Mathematician who worked at OKW. 
Erich Hüttenhain German, Chief cryptanalyst of and led Chi IV (section 4) of the Cipher Department of the High Command of the Wehrmacht. A German mathematician and cryptanalyst who tested a number of German cipher machines and found them to be breakable. Wilhelm Fenner German, Chief Cryptologist and Director of Cipher Department of the High Command of the Wehrmacht. Walther Fricke German, Worked alongside Dr Erich Hüttenhain at Cipher Department of the High Command of the Wehrmacht. Mathematician, logician, cryptanalyst and linguist. Fritz Menzer German. Inventor of SG39 and SG41. Other pre-computer Rosario Candela, US, Architect and notable amateur cryptologist who authored books and taught classes on the subject to civilians at Hunter College. Claude Elwood Shannon, US, founder of information theory, proved the one-time pad to be unbreakable. Modern See also: Category:Modern cryptographers for a more exhaustive list. Symmetric-key algorithm inventors Ross Anderson, UK, University of Cambridge, co-inventor of the Serpent cipher. Paulo S. L. M. Barreto, Brazilian, University of São Paulo, co-inventor of the Whirlpool hash function. George Blakley, US, independent inventor of secret sharing. Eli Biham, Israel, co-inventor of the Serpent cipher. Don Coppersmith, co-inventor of DES and MARS ciphers. Joan Daemen, Belgian, co-developer of Rijndael which became the Advanced Encryption Standard (AES), and Keccak
which became SHA-3. Horst Feistel, German, IBM, namesake of Feistel networks and Lucifer cipher. Lars Knudsen, Denmark, co-inventor of the Serpent cipher. Ralph Merkle, US, inventor of Merkle trees.
Bart Preneel, Belgian, co-inventor of RIPEMD-160. Vincent Rijmen, Belgian, co-developer of Rijndael which became the Advanced Encryption Standard (AES). Ronald L. Rivest, US, MIT, inventor of RC cipher series and MD algorithm series. Bruce Schneier, US, inventor of Blowfish and co-inventor of Twofish and Threefish. Xuejia Lai, CH, co-inventor of International Data Encryption Algorithm (IDEA). Adi Shamir, Israel, Weizmann Institute, inventor of secret sharing. Asymmetric-key algorithm inventors Leonard Adleman, US, USC, the 'A' in RSA. David Chaum, US, inventor of blind
the frothy drink was part of the after-dinner routine of Montezuma. José de Acosta, a Spanish Jesuit missionary who lived in Peru and then Mexico in the later 16th century, wrote of its growing influence on the Spaniards: Although bananas are more profitable, cocoa is more highly esteemed in Mexico. . . Cocoa is a smaller fruit than almonds and thicker, which toasted do not taste bad. It is so prized among the Indians and even among Spaniards. . . because since it is a dried fruit it can be stored for a long time without deterioration, and they brings ships loaded with them from the province of Guatemala. . . It also serves as currency, because with five cocoas you can buy one thing, with thirty another, and with a hundred something else, without there being contradiction; and they give these cocoas as alms to the poor who beg for them. The principal product of this cocoa is a concoction which they make that they call “chocolate,” which is a crazy thing treasured in that land, and those who are not accustomed are disgusted by it, because it has a foam on top and a bubbling like that of feces, which certainly takes a lot to put up with. Anyway, it is the prized beverage which the Indians offer to nobles who come to or pass through their lands; and the Spaniards, especially Spanish women born in those lands die for black chocolate. This aforementioned chocolate is said to the be made in various forms and temperaments, hot, cold, and lukewarm. They are wont to use spices and much chili; they also make it into a paste, and it is said that it is a medicine to treat coughs, the stomach, and colds. Whatever may be the case, in fact those who have not been reared on this opinion are not appetized by it. While Columbus had taken cocoa beans with him back to Spain, chocolate made no impact until Spanish friars introduced it to the Spanish court. After the Spanish conquest of the Aztecs, chocolate was imported to Europe. There, it quickly became a court favorite. It was still served as a beverage, but the Spanish added sugar, as well as honey (the original sweetener used by the Aztecs for chocolate), to counteract the natural bitterness. Vanilla, another indigenous American introduction, was also a popular additive, with pepper and other spices sometimes used to give the illusion of a more potent vanilla flavor. Unfortunately, these spices tended to unsettle the European constitution; the Encyclopédie states, "The pleasant scent and sublime taste it imparts to chocolate have made it highly recommended; but a long experience having shown that it could potentially upset one's stomach", which is why chocolate without vanilla was sometimes referred to as "healthy chocolate". By 1602, chocolate had made its way from Spain to Austria. By 1662, Pope Alexander VII had declared that religious fasts were not broken by consuming chocolate drinks. Within about a hundred years, chocolate established a foothold throughout Europe. The new craze for chocolate brought with it a thriving slave market, as between the early 1600s and late 1800s, the laborious and slow processing of the cocoa bean was manual. Cocoa plantations spread, as the English, Dutch, and French colonized and planted. With the depletion of Mesoamerican workers, largely to disease, cocoa production was often the work of poor wage laborers and African slaves. Wind-powered and horse-drawn mills were used to speed production, augmenting human labor. 
Heating the working areas of the table-mill, an innovation that emerged in France in 1732, also assisted in extraction. New processes that sped the production of chocolate emerged early in the Industrial Revolution. In 1815, Dutch chemist Coenraad van Houten introduced alkaline salts to chocolate, which reduced its bitterness. A few years thereafter, in 1828, he created a press to remove about half the natural fat (cocoa butter or cocoa butter) from chocolate liquor, which made chocolate both cheaper to produce and more consistent in quality. This innovation introduced the modern era of chocolate. Known as "Dutch cocoa", this machine-pressed chocolate was instrumental in the transformation of chocolate to its solid form when, in 1847, English chocolatier Joseph Fry discovered a way to make chocolate moldable when he mixed the ingredients of cocoa powder and sugar with melted cocoa butter. Subsequently, his chocolate factory, Fry's of Bristol, England, began mass-producing chocolate bars, Fry's Chocolate Cream, launched in 1866, and they became very popular. Milk had sometimes been used as an addition to chocolate beverages since the mid-17th century, but in 1875 Swiss chocolatier Daniel Peter invented milk chocolate by mixing a powdered milk developed by Henri Nestlé with the liquor. In 1879, the texture and taste of chocolate was further improved when Rudolphe Lindt invented the conching machine. Besides Nestlé, several notable chocolate companies had their start in the late 19th and early 20th centuries. Rowntree's of York set up and began producing chocolate in 1862, after buying out the Tuke family business. Cadbury was manufacturing boxed chocolates in England by 1868. Manufacturing their first Easter egg in 1875, Cadbury created the modern chocolate Easter egg after developing a pure cocoa butter that could easily be molded into smooth shapes. In 1893, Milton S. Hershey purchased chocolate processing equipment at the World's Columbian Exposition in Chicago, and soon began the career of Hershey's chocolates with chocolate-coated caramels. Introduction to the United States The Baker Chocolate Company, which makes Baker's Chocolate, is the oldest producer of chocolate in the United States. In 1765 Dr. James Baker and John Hannon founded the company in Boston. Using cocoa beans from the West Indies, the pair built their chocolate business, which is still in operation. White chocolate was first introduced to the U.S. in 1946 by Frederick E. Hebert of Hebert Candies in Shrewsbury, Massachusetts, near Boston, after he had tasted "white coat" candies while traveling in Europe. Etymology Cocoa, pronounced by the Olmecs as kakawa, dates to 1000 BC or earlier. The word "chocolate" entered the English language from Spanish in about 1600. The word entered Spanish from the word chocolātl in Nahuatl, the language of the Aztecs. The origin of the Nahuatl word is uncertain, as it does not appear in any early Nahuatl source, where the word for chocolate drink is cacahuatl, "cocoa water". It is possible that the Spaniards coined the word (perhaps in order to avoid caca, a vulgar Spanish word for "faeces") by combining the Yucatec Mayan word chocol, "hot", with the Nahuatl word atl, "water". A widely cited proposal is that the derives from unattested xocolatl meaning "bitter drink" is unsupported; the change from x- to ch- is unexplained, as is the -l-. Another proposed etymology derives it from the word chicolatl, meaning "beaten drink", which may derive from the word for the frothing stick, chicoli. 
Other scholars reject all these proposals, considering the origin of first element of the name to be unknown. The term "chocolatier", for a chocolate confection maker, is attested from 1888. Types Several types of chocolate can be distinguished. Pure, unsweetened chocolate, often called "baking chocolate", contains primarily cocoa solids and cocoa butter in varying proportions. Much of the chocolate consumed today is in the form of sweet chocolate, which combines chocolate with sugar. By cocoa content Raw chocolate Raw chocolate is chocolate produced primarily from unroasted cocoa beans. Dark Dark chocolate is produced by adding fat and sugar to the cocoa mixture. The U.S. Food and Drug Administration calls this "sweet chocolate", and requires a 15% concentration of chocolate liquor. European rules specify a minimum of 35% cocoa solids. A higher amount of cocoa solids indicates more bitterness. Semisweet chocolate is dark chocolate with low sugar content. Bittersweet chocolate is chocolate liquor to which some sugar (typically a third), more cocoa butter and vanilla are added. It has less sugar and more liquor than semisweet chocolate, but the two are interchangeable in baking. It is also known to last for two years if stored properly. , there is no high-quality evidence that dark chocolate affects blood pressure significantly or provides other health benefits. Milk Milk chocolate is sweet chocolate that also contains milk powder or condensed milk. In the UK and Ireland, milk chocolate must contain a minimum of 20% total dry cocoa solids; in the rest of the European Union, the minimum is 25%. White White chocolate, although similar in texture to that of milk and dark chocolate, does not contain any cocoa solids that impart a dark color. In 2002, the US Food and Drug Administration established a standard for white chocolate as the "common or usual name of products made from cocoa fat (i.e., cocoa butter), milk solids, nutritive carbohydrate sweeteners, and other safe and suitable ingredients, but containing no nonfat cocoa solids". By application Baking chocolate Baking chocolate, or cooking chocolate, is chocolate intended to be used for baking and in sweet foods that may or may not be sweetened. Dark chocolate, milk chocolate, and white chocolate, are produced and marketed as baking chocolate. However, lower quality baking chocolate may not be as flavorful compared to higher-quality chocolate, and may have a different mouthfeel. Poorly tempered or untempered chocolate may have whitish spots on the dark chocolate part, called chocolate bloom; it is an indication that sugar or fat has separated due to poor storage. It is not toxic and can be safely consumed. Modeling chocolate Modeling chocolate is a chocolate paste made by melting chocolate and combining it with corn syrup, glucose syrup, or golden syrup. Production Roughly two-thirds of the entire world's cocoa is produced in West Africa, with 43% sourced from Côte d'Ivoire, where, , child labor is a common practice to obtain the product. According to the World Cocoa Foundation, in 2007 some 50 million people around the world depended on cocoa as a source of livelihood. in the UK, most chocolatiers purchase their chocolate from them, to melt, mold and package to their own design. According to the WCF's 2012 report, the Ivory Coast is the largest producer of cocoa in the world. The two main jobs associated with creating chocolate candy are chocolate makers and chocolatiers. 
Chocolate makers use harvested cocoa beans and other ingredients to produce couverture chocolate (covering). Chocolatiers use the finished couverture to make chocolate candies (bars, truffles, etc.). Production costs can be decreased by reducing cocoa solids content or by substituting cocoa butter with another fat. Cocoa growers object to allowing the resulting food to be called "chocolate", due to the risk of lower demand for their crops. Genome The sequencing in 2010 of the genome of the cacao tree may allow yields to be improved. Due to concerns about global warming effects on lowland climate in the narrow band of latitudes where cocoa is grown (20 degrees north and south of the equator), the commercial company Mars, Incorporated and the University of California, Berkeley are conducting genomic research in 2017–18 to improve the survivability of cacao plants in hot climates. Cacao varieties Chocolate is made from cocoa beans, the dried and fermented seeds of the cacao tree (Theobroma cacao), a small, 4–8 m tall (15–26 ft tall) evergreen tree native to the deep tropical region of the Americas. Recent genetic studies suggest the most common genotype of the plant originated in the Amazon basin and was gradually transported by humans throughout South and Central America. Early forms of another genotype have also been found in what is now Venezuela. The scientific name, Theobroma, means "food of the gods". The fruit, called a cocoa pod, is ovoid, long and wide, ripening yellow to orange, and weighing about when ripe. Cacao trees are small, understory trees that need rich, well-drained soils. They naturally grow within 20° of either side of the equator because they need about 2000 mm of rainfall a year, and temperatures in the range of . Cacao trees cannot tolerate a temperature lower than . The three main varieties of cocoa beans used in chocolate are criollo, forastero, and trinitario. Processing Cocoa pods are harvested by cutting them from the tree using a machete, or by knocking them off the tree using a stick. The beans with their surrounding pulp are removed from the pods and placed in piles or bins, allowing access to micro-organisms so fermentation of the pectin-containing material can begin. Yeasts produce ethanol, lactic acid bacteria produce lactic acid, and acetic acid bacteria produce acetic acid. The fermentation process, which takes up to seven days, also produces several flavor precursors, eventually resulting in the familiar chocolate taste. It is important to harvest the pods when they are fully ripe, because if the pod is unripe, the beans will have a low cocoa butter content, or sugars in the white pulp will be insufficient for fermentation, resulting in a weak flavor. After fermentation, the beans must be quickly dried to prevent mold growth. Climate and weather permitting, this is done by spreading the beans out in the sun from five to seven days. The dried beans are then transported to a chocolate manufacturing facility. The beans are cleaned (removing twigs, stones, and other debris), roasted, and graded. Next, the shell of each bean is removed to extract the nib. Finally, the nibs are ground and liquefied, resulting in pure chocolate in fluid form: chocolate liquor. The liquor can be further processed into two components: cocoa solids and cocoa butter. Blending Chocolate liquor is blended with the cocoa butter in varying quantities to make different types of chocolate or couverture. 
The basic blends of ingredients for the various types of chocolate (in order of highest quantity of cocoa liquor first) are: dark chocolate: sugar, cocoa butter, cocoa liquor, and (sometimes) vanilla; milk chocolate: sugar, cocoa butter, cocoa liquor, milk or milk powder, and vanilla; white chocolate: sugar, cocoa butter, milk or milk powder, and vanilla. Usually, an emulsifying agent, such as soy lecithin, is added, though a few manufacturers prefer to exclude this ingredient for purity reasons and to remain GMO-free, sometimes at the cost of a perfectly smooth texture. Some manufacturers are now using PGPR, an artificial emulsifier derived from castor oil that allows them to reduce the amount of cocoa butter while maintaining the same mouthfeel. The texture is also heavily influenced by processing, specifically conching (see below). The more expensive chocolate tends to be processed longer and thus has a smoother texture and mouthfeel, regardless of whether emulsifying agents are added. Different manufacturers develop their own "signature" blends based on the above formulas, using varying proportions of the different constituents. The finest, plain dark chocolate couverture contains at least 70% cocoa (both solids and butter), whereas milk chocolate usually contains up to 50%. High-quality white chocolate couverture contains only about 35% cocoa butter. Producers of high-quality, small-batch chocolate argue that mass production produces bad-quality chocolate. Some mass-produced chocolate contains much less cocoa (as low as 7% in many cases), and fats other than cocoa butter. Vegetable oils and artificial vanilla flavor are often used in cheaper chocolate to mask poorly fermented and/or roasted beans. In 2007, the Chocolate Manufacturers Association in the United States, whose members include Hershey, Nestlé, and Archer Daniels Midland, lobbied the Food and Drug Administration (FDA) to change the legal definition of chocolate to let them substitute partially hydrogenated vegetable oils for cocoa butter, in addition to using artificial sweeteners and milk substitutes. Currently, the FDA does not allow a product to be referred to as "chocolate" if the product contains any of these ingredients. In the EU a product can be sold as chocolate if it contains up to 5% vegetable oil, and must be labeled as "family milk chocolate" rather than "milk chocolate" if it contains 20% milk. According to Canadian Food and Drug Regulations, a "chocolate product" is a food product that is sourced from at least one "cocoa product" and contains at least one of the following: "chocolate, bittersweet chocolate, semi-sweet chocolate, dark chocolate, sweet chocolate, milk chocolate, or white chocolate". A "cocoa product" is defined as a food product that is sourced from cocoa beans and contains "cocoa nibs, cocoa liquor, cocoa mass, unsweetened chocolate, bitter chocolate, chocolate liquor, cocoa, low-fat cocoa, cocoa powder, or low-fat cocoa powder". Conching The penultimate process is called conching. A conche is a container filled with metal beads, which act as grinders. The refined and blended chocolate mass is kept in a liquid state by frictional heat. Chocolate before conching has an uneven and gritty texture. The conching process produces cocoa and sugar particles smaller than the tongue can detect (typically around 20 μm) and reduces rough edges, hence the smooth feel in the mouth. The length of the conching process determines the final smoothness and quality of the chocolate. 
High-quality chocolate is conched for about 72 hours, and lesser grades about four to six hours. After the process is complete, the chocolate mass is stored in heated tanks until final processing. Tempering The final process is called tempering. Uncontrolled crystallization of cocoa butter typically results in crystals of varying size, some or all large enough to be seen with the naked eye. This causes the surface of the chocolate to appear mottled and matte, and causes the chocolate to crumble rather than snap when broken. The uniform sheen and crisp bite of properly processed chocolate are the results of consistently small cocoa butter crystals produced by the tempering process. The fats in cocoa butter can crystallize in six different forms (polymorphous crystallization). The primary purpose of tempering is to ensure that only the best form is present. The six different crystal forms have different properties. In a solid piece of chocolate, the cocoa butter fat particles are in a rigid crystalline structure that gives the chocolate its solid appearance. Once heated, the crystals of the polymorphic cocoa butter can break apart from the rigid structure and allow the chocolate to obtain a more fluid consistency as the temperature increases – the melting process. When the heat is removed, the cocoa butter crystals become rigid again and come closer together, allowing the chocolate to solidify. The temperature at which the crystals obtain enough energy to break apart from their rigid conformation depends on the milk fat content of the chocolate and the shape of the fat molecules, as well as the crystal form of the cocoa butter fat. Chocolate with a higher fat content will melt at a lower temperature. Making chocolate considered "good" is about forming as many type V crystals as possible. This provides the best appearance and texture and creates the most stable crystals, so the texture and appearance will not degrade over time. To accomplish this, the temperature is carefully manipulated during the crystallization. Generally, the chocolate is first heated enough to melt all six forms of crystals. Next, the chocolate is cooled to the point at which crystal types IV and V form. At this temperature, the chocolate is agitated to create many small crystal "seeds" which will serve as nuclei to create small crystals in the chocolate. The chocolate is then warmed slightly to eliminate any type IV crystals, leaving just type V. After this point, any excessive heating of the chocolate will destroy the temper and this process will have to be repeated. Other methods of chocolate tempering are used as well. The most common variant is introducing already tempered, solid "seed" chocolate. The temper of chocolate can be measured with a chocolate temper meter to ensure accuracy and consistency. A sample cup is filled with the chocolate and placed in the unit, which then displays or prints the results. 
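The tempering schedule just described can be summarized as a simple data-plus-loop sketch. Because the specific temperatures did not survive in the text above, the values in the following minimal Python example are assumed, commonly cited figures for dark chocolate, and the names TEMPER_PROFILE_DARK_C and temper_steps are hypothetical, introduced only for illustration.

```python
# Illustrative sketch only: the temperatures are assumed, commonly cited values
# for dark chocolate (they did not survive in the text above), and these names
# are hypothetical, not part of any standard library or specification.

TEMPER_PROFILE_DARK_C = {
    "melt_all_forms": 45.0,   # assumed: hot enough to melt all six crystal forms
    "seed_types_iv_v": 27.0,  # assumed: cooling point where types IV and V can form
    "hold_type_v": 31.0,      # assumed: rewarming point that melts type IV, keeps V
}

def temper_steps(profile=TEMPER_PROFILE_DARK_C):
    """Return the heat/cool/rewarm sequence described in the text as ordered steps."""
    return [
        ("heat", profile["melt_all_forms"], "melt every crystal form"),
        ("cool and agitate", profile["seed_types_iv_v"], "seed small type IV/V crystals"),
        ("rewarm", profile["hold_type_v"], "eliminate type IV, leave only type V"),
    ]

if __name__ == "__main__":
    for action, temp_c, purpose in temper_steps():
        print(f"{action:>17}: ~{temp_c:.0f} °C  ({purpose})")
```

The point of the sketch is the shape of the process rather than the exact numbers: heat past every crystal form, cool and agitate to seed types IV and V, then rewarm just enough that only type V survives.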
cocoa beans and one fresh avocado was worth three beans. The Maya and Aztecs associated cocoa with human sacrifice, and chocolate drinks specifically with sacrificial human blood. The Spanish royal chronicler Gonzalo Fernández de Oviedo y Valdés described a chocolate drink he had seen in Nicaragua in 1528, mixed with achiote: "because those people are fond of drinking human blood, to make this beverage seem like blood, they add a little achiote, so that it then turns red. ... and part of that foam is left on the lips and around the mouth, and when it is red for having achiote, it seems a horrific thing, because it seems like blood itself." European adaptation Until the 16th century, no European had ever heard of the popular drink from the Central American peoples. Christopher Columbus and his son Ferdinand encountered the cocoa bean on Columbus's fourth mission to the Americas on 15 August 1502, when he and his crew stole a large native canoe that proved to contain cocoa beans among other goods for trade. Spanish conquistador Hernán Cortés may have been the first European to encounter it, as the frothy drink was part of the after-dinner routine of Montezuma. José de Acosta, a Spanish Jesuit missionary who lived in Peru and then Mexico in the later 16th century, wrote of its growing influence on the Spaniards: Although bananas are more profitable, cocoa is more highly esteemed in Mexico. . . Cocoa is a smaller fruit than almonds and thicker, which toasted do not taste bad. It is so prized among the Indians and even among Spaniards. . . because since it is a dried fruit it can be stored for a long time without deterioration, and they bring ships loaded with them from the province of Guatemala. . . It also serves as currency, because with five cocoas you can buy one thing, with thirty another, and with a hundred something else, without there being contradiction; and they give these cocoas as alms to the poor who beg for them. The principal product of this cocoa is a concoction which they make that they call "chocolate," which is a crazy thing treasured in that land, and those who are not accustomed are disgusted by it, because it has a foam on top and a bubbling like that of feces, which certainly takes a lot to put up with. Anyway, it is the prized beverage which the Indians offer to nobles who come to or pass through their lands; and the Spaniards, especially Spanish women born in those lands, die for black chocolate. This aforementioned chocolate is said to be made in various forms and temperaments, hot, cold, and lukewarm. They are wont to use spices and much chili; they also make it into a paste, and it is said that it is a medicine to treat coughs, the stomach, and colds. Whatever may be the case, in fact those who have not been reared on this opinion are not appetized by it. While Columbus had taken cocoa beans with him back to Spain, chocolate made no impact until Spanish friars introduced it to the Spanish court. After the Spanish conquest of the Aztecs, chocolate was imported to Europe. There, it quickly became a court favorite. It was still served as a beverage, but the Spanish added sugar, as well as honey (the original sweetener used by the Aztecs for chocolate), to counteract the natural bitterness. Vanilla, another indigenous American introduction, was also a popular additive, with pepper and other spices sometimes used to give the illusion of a more potent vanilla flavor. 
Unfortunately, these spices tended to unsettle the European constitution; the Encyclopédie states, "The pleasant scent and sublime taste it imparts to chocolate have made it highly recommended; but a long experience having shown that it could potentially upset one's stomach", which is why chocolate without vanilla was sometimes referred to as "healthy chocolate". By 1602, chocolate had made its way from Spain to Austria. By 1662, Pope Alexander VII had declared that religious fasts were not broken by consuming chocolate drinks. Within about a hundred years, chocolate established a foothold throughout Europe. The new craze for chocolate brought with it a thriving slave market, as between the early 1600s and late 1800s the laborious and slow processing of the cocoa bean was manual. Cocoa plantations spread as the English, Dutch, and French colonized and planted. With the depletion of Mesoamerican workers, largely due to disease, cocoa production was often the work of poor wage laborers and African slaves. Wind-powered and horse-drawn mills were used to speed production, augmenting human labor. Heating the working areas of the table-mill, an innovation that emerged in France in 1732, also assisted in extraction. New processes that sped the production of chocolate emerged early in the Industrial Revolution. In 1815, Dutch chemist Coenraad van Houten introduced alkaline salts to chocolate, which reduced its bitterness. A few years thereafter, in 1828, he created a press to remove about half the natural fat (cocoa butter) from chocolate liquor, which made chocolate both cheaper to produce and more consistent in quality. This innovation introduced the modern era of chocolate. Known as "Dutch cocoa", this machine-pressed chocolate was instrumental in the transformation of chocolate to its solid form when, in 1847, English chocolatier Joseph Fry discovered a way to make chocolate moldable by mixing cocoa powder and sugar with melted cocoa butter. Subsequently, his chocolate factory, Fry's of Bristol, England, began mass-producing chocolate bars, including Fry's Chocolate Cream, launched in 1866, which became very popular. Milk had sometimes been used as an addition to chocolate beverages since the mid-17th century, but in 1875 Swiss chocolatier Daniel Peter invented milk chocolate by mixing a powdered milk developed by Henri Nestlé with the liquor. In 1879, the texture and taste of chocolate were further improved when Rodolphe Lindt invented the conching machine. Besides Nestlé, several notable chocolate companies had their start in the late 19th and early 20th centuries. Rowntree's of York set up and began producing chocolate in 1862, after buying out the Tuke family business. Cadbury was manufacturing boxed chocolates in England by 1868. Manufacturing their first Easter egg in 1875, Cadbury created the modern chocolate Easter egg after developing a pure cocoa butter that could easily be molded into smooth shapes. In 1893, Milton S. Hershey purchased chocolate processing equipment at the World's Columbian Exposition in Chicago, and soon began the career of Hershey's chocolates with chocolate-coated caramels. Introduction to the United States The Baker Chocolate Company, which makes Baker's Chocolate, is the oldest producer of chocolate in the United States. In 1765 Dr. James Baker and John Hannon founded the company in Boston. Using cocoa beans from the West Indies, the pair built their chocolate business, which is still in operation. 
White chocolate was first introduced to the U.S. in 1946 by Frederick E. Hebert of Hebert Candies in Shrewsbury, Massachusetts, near Boston, after he had tasted "white coat" candies while traveling in Europe. Etymology Cocoa, pronounced by the Olmecs as kakawa, dates to 1000 BC or earlier. The word "chocolate" entered the English language from Spanish in about 1600. The word entered Spanish from the word chocolātl in Nahuatl, the language of the Aztecs. The origin of the Nahuatl word is uncertain, as it does not appear in any early Nahuatl source, where the word for chocolate drink is cacahuatl, "cocoa water". It is possible that the Spaniards coined the word (perhaps in order to avoid caca, a vulgar Spanish word for "faeces") by combining the Yucatec Mayan word chocol, "hot", with the Nahuatl word atl, "water". A widely cited proposal, that the word derives from an unattested xocolatl meaning "bitter drink", is unsupported; the change from x- to ch- is unexplained, as is the -l-. Another proposed etymology derives it from the word chicolatl, meaning "beaten drink", which may derive from the word for the frothing stick, chicoli. 
involves separate parts for trumpet and cornet. As several instrument builders made improvements to both instruments, they started to look and sound more alike. The modern-day cornet is used in brass bands, concert bands, and in specific orchestral repertoire that requires a more mellow sound. The name cornet derives from corne, meaning horn, itself from Latin 'cornu'. While not musically related, instruments of the Zink family (which includes serpents) are named "cornetto" or "cornett" in modern English to distinguish them from the valved cornet described here. The 11th edition of the Encyclopædia Britannica referred to serpents as "old wooden cornets". The Roman/Etruscan cornu (or simply "horn") is the linguistic ancestor of these. It is a predecessor of the post horn from which the cornet evolved, and was used like a bugle to signal orders on the battlefield. Relationship to trumpet The cornet's valves allowed for melodic playing throughout the register of the cornet. Trumpets were slower to adopt the new valve technology, so for 100 years or more, composers often wrote separate parts for trumpet and cornet. The trumpet would play fanfare-like passages, while the cornet played more melodic passages. The modern trumpet has valves that allow it to play the same notes and fingerings as the cornet. Cornets and trumpets made in a given key (usually the key of B♭) play at the same pitch, and the technique for playing the instruments is nearly identical. However, cornets and trumpets are not entirely interchangeable, as they differ in timbre. Also available, but usually seen only in the brass band, is an E♭ soprano model, pitched a fourth above the standard B♭. Unlike the trumpet, which has a cylindrical bore up to the bell section, the tubing of the cornet has a mostly conical bore, starting very narrow at the mouthpiece and gradually widening towards the bell. Cornets following the 1913 patent of E.A. Couturier can have a continuously conical bore. The conical bore of the cornet is primarily responsible for its characteristic warm, mellow tone, which can be distinguished from the more penetrating sound of the trumpet. The conical bore also makes the cornet more agile than the trumpet when playing fast passages, but correct pitching is often less assured. The cornet is often preferred for young beginners as it is easier to hold, with its centre of gravity much closer to the player. The cornet mouthpiece has a shorter and narrower shank than that of a trumpet so it can fit the cornet's smaller mouthpiece receiver. The cup size is often deeper than that of a trumpet mouthpiece. One variety is the short-model traditional cornet, also known as a "Shepherd's Crook" shaped model. These are most often large-bore instruments with a rich mellow sound. There is also a long-model or "American-wrap" cornet, often with a smaller bore and a brighter sound, which is produced in a variety of different tubing wraps and is closer to a trumpet in appearance. The Shepherd's Crook model is preferred by cornet traditionalists. The long-model cornet is generally used in concert bands in the United States, but has found little following in British-style brass and concert bands. A third and relatively rare variety, distinct from the long-model or "American-wrap" cornet, is the "long cornet", which was produced in the mid-20th century by C.G. Conn and F.E. Olds and visually is nearly indistinguishable from a trumpet except that it has a receiver fashioned to accept cornet mouthpieces. 
Echo cornet The echo cornet has been called an obsolete variant. It has a mute chamber (or echo chamber) mounted to the side acting as a second bell when the fourth valve is pressed. The second bell has a sound similar to that of a Harmon mute and is typically used to play echo phrases, whereupon the player imitates the sound from the primary bell using the echo chamber. Playing technique Like the trumpet and all other modern brass wind instruments, the cornet makes a sound when the player vibrates ("buzzes") the lips in the mouthpiece, creating a vibrating column of air in the tubing. The frequency of the air column's vibration can be modified by changing the lip tension and aperture or "embouchure", and by altering the tongue position to change the shape of the oral cavity, thereby increasing or decreasing the speed of the airstream. In addition, the column of air can be lengthened by engaging one or more valves, thus lowering the pitch. Double and triple tonguing are also possible. Without valves, the player could produce only a harmonic series of notes like those played by the bugle and other "natural" brass instruments. These notes are far apart for most of the instrument's range, making diatonic and chromatic playing impossible except in the extreme high register. The valves change the length of the vibrating column and provide the cornet with the ability to play chromatically. 
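The acoustics described in the playing-technique passage can be made concrete with a little arithmetic: an open (valveless) tube sounds only the harmonic series f_n = n × f_1, and each engaged valve adds tubing that lowers the sounding pitch by a fixed number of semitones. The Python sketch below uses an assumed nominal fundamental of about 116.5 Hz (roughly B♭2) and the conventional valve-to-semitone mapping; both are illustrative assumptions, not measurements of any particular cornet.

```python
# Illustrative sketch of the acoustics described above; the fundamental
# frequency and the valve/semitone mapping are assumed values for demonstration.

FUNDAMENTAL_HZ = 116.5    # assumed nominal fundamental, roughly B-flat2
SEMITONE = 2 ** (1 / 12)  # equal-temperament frequency ratio per semitone

def harmonic_series(fundamental_hz, count=8):
    """Open-tube harmonics f_n = n * f_1: the only notes available without valves."""
    return [n * fundamental_hz for n in range(1, count + 1)]

def valve_lowering(frequency_hz, semitones_down):
    """Engaging valves adds tubing, lengthening the vibrating air column;
    in frequency terms each semitone of lowering divides by 2^(1/12)."""
    return frequency_hz / (SEMITONE ** semitones_down)

if __name__ == "__main__":
    print("Open harmonics (Hz):",
          [round(f, 1) for f in harmonic_series(FUNDAMENTAL_HZ)])
    # Conventional lowerings: 2nd valve = 1 semitone, 1st = 2, 1st+2nd = 3,
    # 2nd+3rd = 4, 1st+3rd = 5, 1st+2nd+3rd = 6 (applied here to the 4th harmonic).
    for valves, semis in [("2", 1), ("1", 2), ("1+2", 3),
                          ("2+3", 4), ("1+3", 5), ("1+2+3", 6)]:
        lowered = valve_lowering(4 * FUNDAMENTAL_HZ, semis)
        print(f"valves {valves:>6}: {lowered:6.1f} Hz")
```

The sketch shows why the valves fill in the wide gaps of the harmonic series: the six valve combinations cover the six semitones between adjacent open harmonics in the middle of the range.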
Ensembles with cornets Brass band British brass bands consist only of brass instruments and a percussion section. The cornet is the leading melodic instrument in this ensemble; trumpets are never used. The ensemble consists of about thirty musicians, including nine B♭ cornets and one E♭ cornet (soprano cornet). In the UK, companies such as Besson and Boosey & Hawkes specialized in instruments for brass bands. In America, 19th-century manufacturers such as Graves and Company, Hall and Quinby, E.G. Wright, and the Boston Musical Instrument Manufactory made instruments for this ensemble. Concert band The cornet features in the British-style concert band, and early American concert band pieces, particularly those written or transcribed before 1960, often feature distinct, separate parts for trumpets and cornets. Cornet parts are rarely included in later American pieces, however, and cornets are replaced in modern American bands by the trumpet. This slight difference in instrumentation derives from the British concert band's heritage in military bands, where the highest brass instrument is always the cornet. There are usually four to six B♭ cornets present in a British concert band, but no E♭ instrument, as this role is taken by the E♭ clarinet. Fanfareorkest Fanfareorkesten ("fanfare orchestras"), found only in the Netherlands, Belgium, northern France, and Lithuania, use the complete saxhorn family of instruments. The standard instrumentation includes both the cornet and the trumpet; however, in recent decades, the cornet has largely been replaced by the
CAMP can refer to: the Campaign Against Marijuana Planting; the Center for Architecture and Metropolitan Planning in Prague; the Central Atlantic magmatic province; CAMP (company), an Italian manufacturer of climbing equipment; cAMP, cyclic adenosine monophosphate; (+)-cis-2-Aminomethylcyclopropane carboxylic acid, a GABAA-ρ agonist; and camP, 2,5-diketocamphane 1,2-monooxygenase. 
CGMP is an initialism. It can refer to: cyclic guanosine monophosphate (cGMP); current good manufacturing practice (cGMP); and CGMP, the Cisco Group Management Protocol. 
were invalid, forcing them to apply and pay for new royal patents on land that they already occupied or face eviction. In April 1687, Increase Mather sailed to London, where he remained for the next four years, pleading with the Court for what he regarded as the interests of the Massachusetts colony. The birth of a male heir to King James in June 1688, which could have cemented a Roman Catholic dynasty on the English throne, triggered the so-called Glorious Revolution in which Parliament deposed James and gave the Crown jointly to his Protestant daughter Mary and her husband, the Dutch Prince William of Orange. News of the events in London greatly emboldened the opposition in Boston to Governor Andros, finally precipitating the 1689 Boston revolt. Cotton Mather, then aged twenty-six, was one of the Puritan ministers who guided resistance in Boston to Andros's regime. Early in 1689, Randolph had a warrant issued for Cotton Mather's arrest on a charge of "scandalous libel", but the warrant was overruled by Wait Winthrop. According to some sources, Cotton Mather escaped a second attempted arrest on April 18, 1689, the same day that the people of Boston took up arms against Andros. The young Mather may have authored, in whole or in part, the "Declaration of the Gentlemen, Merchants, and Inhabitants of Boston and the Country Adjacent", which justified that uprising by a list of grievances that the declaration attributed to the deposed officials. The authorship of that document is uncertain: it was not signed by Mather or any other clergymen, and Puritans frowned upon the clergy being seen to play too direct and personal a hand in political affairs. That day, Mather probably read the Declaration to a crowd gathered in front of the Boston Town House. In July, Andros, Randolph, Joseph Dudley, and other officials who had been deposed and arrested in the Boston revolt were summoned to London to answer the complaints against them. The administration of Massachusetts was temporarily assumed by Simon Bradstreet, whose rule proved weak and contentious. In 1691, the government of King William and Queen Mary issued a new Massachusetts Charter. This charter united the Massachusetts Bay Colony with Plymouth Colony into the new Province of Massachusetts Bay. Rather than restoring the old Puritan rule, the Charter of 1691 mandated religious toleration for all non-Catholics and established a government led by a Crown-appointed governor. The first governor under the new charter was Sir William Phips, who was a member of the Mathers' church in Boston. Salem witch trials of 1692, the Mather influence Pre-trials In 1689, Mather published Memorable Providences, detailing the supposed afflictions of several children in the Goodwin family in Boston. Mather had a prominent role in the witchcraft case against Catholic washerwoman Goody Glover, which ultimately resulted in her conviction and execution. Besides praying for the children, efforts that also included fasting and meditation, he observed and recorded their activities. The children were subject to hysterical fits, which he detailed in Memorable Providences. In his book, Mather argued that since there are witches and devils, there are "immortal souls." He also claimed that witches appear spectrally as themselves. He opposed any natural explanations for the fits; he believed that people who confessed to using witchcraft were sane; and he warned against performing magic due to its connection with the devil. 
Robert Calef was a contemporary of Mather and critical of him, and he considered this book responsible for laying the groundwork for the Salem witch trials three years later. Nineteenth-century historian Charles Wentworth Upham shared the view that the afflicted in Salem were imitating the Goodwin children, but he put the blame on both Cotton and his father Increase Mather. Cambridge Association of ministers In 1690, Cotton Mather played a primary role in forming a new ministers' club called the Cambridge Association. Their first order of business was to respond to a letter from the pastor of Salem Village (Samuel Parris). A second meeting was planned a week later in the college library, and Parris was invited to travel down to Cambridge to meet with them, which he did. Throughout 1692, this association of powerful ministers was often queried for its opinion on Christian doctrine relative to witchcraft. The Court of Oyer and Terminer In 1692, Cotton Mather claimed to have been industrious and influential in the direction of things at Salem from the beginning (see the September 2, 1692 letter to Stoughton below). But it remains unknown how much of a role he had in the formation or construction of the Court of Oyer and Terminer at the end of May, or what the original intent for this court may have been. Sir William Phips, governor of the newly chartered Province of Massachusetts Bay, signed an order forming the new court and allowed his lieutenant governor, William Stoughton, to become the court's chief justice. According to George Bancroft, Mather had been influential in gaining the politically unpopular Stoughton his appointment as lieutenant governor under Phips through the intervention of Mather's own politically powerful father, Increase. "Intercession had been made by Cotton Mather for the advancement of Stoughton, a man of cold affections, proud, self-willed and covetous of distinction." Apparently Mather saw in Stoughton, a lifelong bachelor, an ally for church-related matters. Bancroft quotes Mather's reaction to Stoughton's appointment as follows: "The time for a favor is come", exulted Cotton Mather; "Yea, the set time is come." Just prior to the first session of the new court, Mather wrote a lengthy essay which was copied and distributed to the judges. One of Mather's recommendations, invasive bodily searches for witch-marks, took place for the first time only days later, on June 2, 1692. Mather claimed not to have personally attended any sessions of the Court of Oyer and Terminer (although his father attended the trial of George Burroughs). His contemporaries Calef and Thomas Brattle do place him at the executions (see below). Mather began to publicize and celebrate the trials well before they were put to an end: "If in the midst of the many Dissatisfaction among us, the publication of these Trials may promote such a pious Thankfulness unto God, for Justice being so far executed among us, I shall Re-joyce that God is Glorified." Mather called himself a historian, not an advocate, but according to one modern writer, his writing largely presumes the guilt of the accused and includes such comments as calling Martha Carrier "a rampant hag". Mather referred to George Burroughs as a "very puny man" whose "tergiversations, contradictions, and falsehoods" made his testimony not "worth considering". 
The use of so-called "spectral evidence" The afflicted girls claimed that the semblance of a defendant, invisible to any but themselves, was tormenting them; it was greatly contested whether this should be considered evidence, but the Court of Oyer and Terminer decided to allow it, despite the defendants' denials and professions of strongly held Christian beliefs. In his May 31, 1692 essay to the judges, Mather expressed his support of the prosecutions, but also included some words of caution: "do not lay more stress on pure spectral evidence than it will bear … It is very certain that the Devils have sometimes represented the shapes of persons not only innocent, but also very virtuous. Though I believe that the just God then ordinarily provides a way for the speedy vindication of the persons thus abused." Return of the Ministers An opinion on the trials was sought from the ministers of the area in mid-June. In an anonymous work written years later, Cotton Mather took credit for being the scribe: "drawn up at their desire, by Cotton Mather the younger, as I have been informed." The "Return of the Several Ministers" ambivalently discussed whether or not to allow spectral evidence. The original full version of the letter was reprinted in late 1692 in the final two pages of Increase Mather's Cases of Conscience. It is a curious document and remains a source of confusion and argument. Calef calls it "perfectly ambidexter, giving as great as greater encouragement to proceed in those dark methods, then cautions against them… indeed the Advice then given, looks most like a thing of his composing, as carrying both fire to increase and water to quench the conflagration." It seems likely that the "Several" ministers consulted did not agree, and thus Cotton Mather's careful construction and presentation of the advice, sent from Boston to Salem, could have been crucial to its interpretation. Thomas Hutchinson summarized the Return: "The two first and the last sections of this advice took away the force of all the others, and the prosecutions went on with more vigor than before." Reprinting the Return five years later in his anonymously published Life of Phips (1697), Cotton Mather omitted the fateful "two first and the last" sections, though they were the ones to which he had already given the most attention in his "Wonders of the Invisible World", rushed into publication in the summer and early autumn of 1692. Pushing forward the August 19 executions On August 19, 1692, Mather attended the execution of George Burroughs (and four others who were executed after Mather spoke), and Robert Calef presents him as playing a direct and influential role. On September 2, 1692, after eleven of the accused had been executed, Cotton Mather wrote a letter to Chief Justice William Stoughton congratulating him on "extinguishing of as wonderful a piece of devilism as has been seen in the world" and claiming that "one half of my endeavors to serve you have not been told or seen." Regarding spectral evidence, Upham concludes that "Cotton Mather never in any public writing 'denounced the admission' of it, never advised its absolute exclusion; but on the contrary recognized it as a ground of 'presumption' … [and once admitted] nothing could stand against it. Character, reason, common sense, were swept away." 
In a letter to an English clergyman in 1692, Boston intellectual Thomas Brattle criticized the trials and the judges' use of spectral evidence. Governor Phips's exclusion of spectral evidence from the trials began in January 1693, around the same time that the name of his own wife, Lady Mary Phips, coincidentally started being bandied about in connection with witchcraft. This immediately brought about a sharp decrease in convictions. Due to a reprieve by Phips, there were no further executions. Phips's actions were vigorously opposed by William Stoughton. Bancroft notes that Mather considered witches "among the poor, and vile, and ragged beggars upon Earth", and asserts that Mather considered the people who opposed the witch trials to be witch advocates. Post-trials Of the principal actors in the trials whose later lives are recorded, neither Mather nor Stoughton admitted strong misgivings in the years that followed. For several years after the trials, Cotton Mather continued to defend them and seemed to hold out a hope for their return. Wonders of the Invisible World contained a few of Mather's sermons, an account of the conditions of the colony, and a description of witch trials in Europe. He somewhat clarified the contradictory advice he had given in the Return of the Several Ministers by defending the use of spectral evidence. Wonders of the Invisible World appeared around the same time as Increase Mather's Cases of Conscience. Mather did not sign his name to or support his father's book initially. The last major events in Mather's involvement with witchcraft were his interactions with Mercy Short in December 1692 and Margaret Rule in September 1693. The latter prompted a five-year campaign by Boston merchant Robert Calef against the influential and powerful Mathers. Calef's book More Wonders of the Invisible World was inspired by the fear that Mather would succeed in once again stirring up new witchcraft trials, and by the need to bear witness to the horrible experiences of 1692. He quotes the public apologies of the men on the jury and one of the judges. Increase Mather was said to have publicly burned Calef's book in Harvard Yard around the time he was removed as head of the college and replaced by Samuel Willard. Poole vs. Upham Charles Wentworth Upham wrote Salem Witchcraft Volumes I and II With an Account of Salem Village and a History of Opinions on Witchcraft and Kindred Subjects, which runs to almost 1,000 pages. It came out in 1867 and cites numerous criticisms of Mather by Robert Calef. William Frederick Poole defended Mather from these criticisms. In 1869, Poole quoted from various school textbooks of the time, demonstrating that they were in agreement on Cotton Mather's role in the witch trials: "If anyone imagines that we are stating the case too strongly, let him try an experiment with the first bright boy he meets by asking,... 'Who got up Salem Witchcraft?'... he will reply, 'Cotton Mather'. Let him try another boy... 'Who was Cotton Mather?' and the answer will come, 'The man who was on horseback, and hung witches.'" Poole was a librarian and a lover of literature, including Mather's Magnalia "and other books and tracts, numbering nearly 400 [which] were never so prized by collectors as today." Poole announced his intention to redeem Mather's name, using as a springboard a harsh critique of Upham's book, via his own book Cotton Mather and Salem Witchcraft. A quick search of the name Mather in Upham's book (referring to either father, son, or ancestors) shows that it occurs 96 times. 
Poole's critique runs to less than 70 pages, but the name "Mather" occurs many more times in it than in Upham's book, which is more than ten times as long. Upham shows a balanced and complicated view of Cotton Mather, as in this first mention: "One of Cotton Mather's most characteristic productions is the tribute to his venerated master. It flows from a heart warm with gratitude." Upham's book refers to Robert Calef no fewer than 25 times, the majority of these regarding documents compiled by Calef in the mid-1690s, and states: "Although zealously devoted to the work of exposing the enormities connected with the witchcraft prosecutions, there is no ground to dispute the veracity of Calef as to matters of fact." He goes on to say that Calef's collection of writings "gave a shock to Mather's influence, from which it never recovered." Calef produced only the one book; he is self-effacing and apologetic for his limitations, and on the title page he is listed not as author but "collector". Poole, champion of literature, could not accept Calef, whose "faculties, as indicated by his writings appear to us to have been of an inferior order;…", and whose book "in our opinion, has a reputation much beyond its merits." Poole refers to Calef as Mather's "personal enemy" and opens a line, "Without discussing the character and motives of Calef…", but does not follow up on this suggestive comment to discuss any actual or purported motive or reason to impugn Calef. Upham responded to Poole (referring to Poole as "the Reviewer") in a book running five times as long and sharing the same title but with the clauses reversed: Salem Witchcraft and Cotton Mather. Many of Poole's arguments were addressed, but both authors emphasize the importance of Cotton Mather's difficult and contradictory view on spectral evidence, as copied in the final pages, called "The Return of Several Ministers", of Increase Mather's "Cases of Conscience". The debate continues: Kittredge vs. Burr Judging by the published opinion in the years that followed the Poole vs. Upham debate, it would seem Upham was considered the clear winner (see Sibley, GH Moore, WC Ford, and GH Burr below). In 1891, Harvard English professor Barrett Wendell wrote Cotton Mather, The Puritan Priest. His book often expresses agreement with Upham but also announces an intention to show Cotton Mather in a more positive light. "[Cotton Mather] gave utterance to many hasty things not always consistent with fact or with each other…" And some pages later: "[Robert] Calef's temper was that of the rational Eighteenth century; the Mathers belonged rather to the Sixteenth, the age of passionate religious enthusiasm." In 1907, George Lyman Kittredge published an essay that would become foundational to a major change in the 20th-century view of witchcraft and Mather's culpability therein. Kittredge is dismissive of Robert Calef, and sarcastic toward Upham, but shows a fondness for Poole and a similar soft touch toward Cotton Mather. 
Responding to Kittredge in 1911, George Lincoln Burr, a historian at Cornell, published an essay that begins in a professional and friendly fashion toward both Poole and Kittredge, but quickly becomes a passionate and direct criticism, stating that Kittredge in the "zeal of his apology… reached results so startlingly new, so contradictory of what my own lifelong study in this field has seemed to teach, so unconfirmed by further research… and withal so much more generous to our ancestors than I can find it in my conscience to deem fair, that I should be less than honest did I not seize this earliest opportunity [to] share with you the reasons for my doubts…". (In referring to "ancestors" Burr primarily means the Mathers, as is made clear in the substance of the essay.) The final paragraph of Burr's 1911 essay pushes these men's debate into the realm of a progressive creed: "… I fear that they who begin by excusing their ancestors may end by excusing themselves." Perhaps as a continuation of his argument, in 1914, George Lincoln Burr published a large compilation, "Narratives". This book arguably continues to be the single most cited reference on the subject. Unlike Poole and Upham, Burr avoids carrying his previous debate with Kittredge directly into his book and mentions Kittredge only once, briefly, in a footnote citing both of their essays from 1907 and 1911, but without further comment. But in addition to the viewpoint displayed by Burr's selections, he weighs in on the Poole vs. Upham debate at various times, including siding with Upham in a note on Thomas Brattle's letter: "The strange suggestion of W. F. Poole that Brattle here means Cotton Mather himself, is adequately answered by Upham…" Burr's "Narratives" reprints a lengthy but abridged portion of Calef's book, and in introducing it he digs deep into the historical record for information on Calef, concluding "…that he had else any grievance against the Mathers or their colleagues there is no reason to think." Burr finds that a comparison between Calef's work and original documents in the historical record collections "testify to the care and exactness…" 20th century revision: The Kittredge lineage at Harvard 1920–3 Kenneth B. Murdock wrote a doctoral dissertation on Increase Mather, advised by Chester Noyes Greenough and Kittredge. Murdock's father was a banker hired in 1920 to run the Harvard Press, and he published his son's dissertation as a handsome volume in 1925: Increase Mather, The Foremost American Puritan (Harvard University Press). Kittredge was right-hand man to the elder Murdock at the Press. This work focuses on Increase Mather and is more critical of the son, but the following year he published a selection of Cotton Mather's writings with
an introduction that claims Cotton Mather was "not less but more humane than his contemporaries. Scholars have demonstrated that his advice to the witch judges was always that they should be more cautious in accepting evidence" against the accused. Murdock's statement seems to claim a majority view. But one wonders who Murdock would have meant by "scholars" at this time other than Poole, Kittredge, and TJ Holmes (below). Murdock's obituary calls him a pioneer "in the reversal of a movement among historians of American culture to discredit the Puritan and colonial period…" 1924 Thomas J. Holmes was an Englishman with no college education, but he apprenticed in bookbinding, emigrated to the U.S., and became the librarian at the William G. Mather Library in Ohio, where he likely met Murdock. In 1924, Holmes wrote an essay for the Bibliographical Society of America identifying himself as part of the Poole-Kittredge lineage and citing Kenneth B. Murdock's still unpublished dissertation. In 1932 Holmes published a bibliography of Increase Mather, followed by Cotton Mather, A Bibliography (1940). Holmes often cites Murdock and Kittredge and is highly knowledgeable about the construction of books. Holmes's work also includes Cotton Mather's October 20, 1692 letter to his uncle opposing an end to the trials. 1930 Samuel Eliot Morison published Builders of the Bay Colony. 
Morison chose not to include anyone with the surname Mather or Cotton in his collection of twelve "builders", and in the bibliography writes, "I have a higher opinion than most historians of Cotton Mather's Magnalia… Although Mather is inaccurate, pedantic, and not above suppressio veri, he does succeed in giving a living picture of the person he writes about." Whereas Kittredge and Murdock worked from the English department, Morison was from Harvard's history department. Morison's view seems to have evolved over the course of the 1930s, as can be seen in Harvard College in the Seventeenth Century (1936), published while Kittredge ran the Harvard press, and in a year that coincided with the tercentenary of the college: "Since the appearance of Professor Kittredge's work, it is not necessary to argue that a man of learning…" of that era should be judged on his view of witchcraft. In The Intellectual Life of Colonial New England (1956), Morison writes that Cotton Mather found balance and level thinking during the witchcraft trials. Like Poole, Morison suggests Calef had an agenda against Mather, without providing supporting evidence. 1953 Perry Miller published The New England Mind: From Colony to Province (Belknap Press of Harvard University Press). Miller worked from the Harvard English Department, and his expansive prose contains few citations, but the "Bibliographical Notes" for Chapter XIII, "The Judgement of the Witches", reference the bibliographies of TJ Holmes (above), calling Holmes's portrayal of Cotton Mather's composition of Wonders "an epoch in the study of Salem Witchcraft." However, following the discovery of the authentic holograph of the September 2, 1692 letter, David Levin wrote in 1985 that the letter demonstrates that the timeline employed by TJ Holmes and Perry Miller is off by "three weeks." Contrary to the evidence in the later-arriving letter, Miller portrays Phips and Stoughton as pressuring Cotton Mather to write the book (p. 201): "If ever there was a false book produced by a man whose heart was not in it, it is The Wonders….he was insecure, frightened, sick at heart…" The book "has ever since scarred his reputation," Perry Miller writes. Miller seems to imagine Cotton Mather as sensitive, tender, and a good vehicle for his jeremiad thesis: "His mind was bubbling with every sentence of the jeremiads, for he was heart and soul in the effort to reorganize them." 1969 Chadwick Hansen published Witchcraft at Salem. Hansen states a purpose to "set the record straight" and reverse the "traditional interpretation of what happened at Salem…", and names Poole and Kittredge as like-minded influences. (Hansen reluctantly keys his footnotes to Burr's anthology for the reader's convenience, "in spite of [Burr's] anti-Puritan bias…") Hansen presents Mather as a positive influence on the Salem trials and considers Mather's handling of the Goodwin children sane and temperate. Hansen posits that Mather was a moderating influence by opposing the death penalty for those who confessed, or feigned confession, such as Tituba and Dorcas Good, and that most negative impressions of him stem from his "defense" of the ongoing trials in Wonders of the Invisible World. Writing an introduction to a facsimile of Robert Calef's book in 1972, Hansen compares Robert Calef to Joseph Goebbels, and also explains that, in Hansen's opinion, women "are more subject to hysteria than men." 1971 The Admirable Cotton Mather by James Playsted Wood, a young adult book. 
In the preface, Wood discusses the Harvard-based revision and writes that Kittredge and Murdock "added to a better understanding of a vital and courageous man…" 1985 David Hall writes, "With [Kittredge] one great phase of interpretation came to a dead end." Hall writes that whether the old interpretation favored by "antiquarians" had begun with the "malice of Robert Calef or deep hostility to Puritanism," either way "such notions are no longer… the concern of the historian." But David Hall notes "one minor exception. Debate continues on the attitude and role of Cotton Mather…" Tercentenary of the trials and ongoing scholarship Toward the latter half of the twentieth century, a number of historians at universities far from New England seemed to find inspiration in the Kittredge lineage. In Selected Letters of Cotton Mather, Kenneth Silverman writes, "Actually, Mather had very little to do with the trials." Twelve pages later Silverman publishes, for the first time, a letter to chief judge William Stoughton of September 2, 1692, in which Cotton Mather writes "… I hope I can say that one half of my endeavors to serve you have not been told or seen … I have labored to divert the thoughts of my readers with something of a designed contrivance…" Writing in the early 1980s, historian John Demos imputed to Mather a purportedly moderating influence on the trials. Coinciding with the tercentenary of the trials in 1992, there was a flurry of publications. Historian Larry Gregg highlights Mather's cloudy thinking and his confusion between sympathy for the possessed and the boundlessness of spectral evidence, as when Mather stated, "the devil have sometimes represented the shapes of persons not only innocent, but also the very virtuous." Historical and theological writings Cotton Mather was an extremely prolific writer, producing 388 different books and pamphlets during his lifetime. His most widely distributed work was Magnalia Christi Americana (which may be translated as "The Glorious Works of Christ in America"), subtitled "The ecclesiastical history of New England, from its first planting in the year 1620 unto the year of Our Lord 1698. In seven books." Despite the Latin title, the work is written in English. Mather began working on it towards the end of 1693 and it was finally published in London in 1702. The work incorporates information that Mather put together from a variety of sources, such as letters, diaries, sermons, Harvard College records, personal conversations, and the manuscript histories composed by William Hubbard and William Bradford. The Magnalia includes about fifty biographies of eminent New Englanders (ranging from John Eliot, the first Puritan missionary to the Native Americans, to Sir William Phips, the incumbent governor of Massachusetts at the time that Mather began writing), plus dozens of brief biographical sketches. Kenneth Silverman, an expert on early American literature and Cotton Mather's biographer, argues that, although Mather glorifies New England's Puritan past, in the Magnalia he also attempts to transcend the religious separatism of the old Puritan settlers, reflecting Mather's more ecumenical and cosmopolitan embrace of a Transatlantic Protestant Christianity that included, in addition to Mather's own Congregationalists, Presbyterians, Baptists, and low church Anglicans.
In 1693 Mather also began work on a grand intellectual project that he titled Biblia Americana, which sought to provide a commentary and interpretation of the Christian Bible in light of "all of the Learning in the World". Mather, who continued to work on it for many years, sought to incorporate into his reading of Scripture the new scientific knowledge and theories, including geography, heliocentrism, atomism, and Newtonianism. According to Silverman, the project "looks forward to Mather's becoming probably the most influential spokesman in New England for a rationalized, scientized Christianity." Mather could not find a publisher for the Biblia Americana, which remained in manuscript form during his lifetime. It is currently being edited in ten volumes, published by Mohr Siebeck under the direction of Reiner Smolinski and Jan Stievermann. As of 2019, six of the ten volumes have appeared in print. Conflict with Governor Dudley In Massachusetts at the start of the 18th century, Joseph Dudley was a highly controversial figure, as he had participated actively in the government of Sir Edmund Andros in 1686–1689. Dudley was among those arrested in the revolt of 1689, and was later called to London to answer the charges against him brought by a committee of the colonists. However, Dudley was able to pursue a successful political career in Britain. Upon the death in 1701 of acting governor William Stoughton, Dudley began enlisting support in London to procure appointment as the new governor of Massachusetts. Although the Mathers (to whom he was related by marriage) continued to resent Dudley's role in the Andros administration, they eventually came around to the view that Dudley would now be preferable as governor to the available alternatives, at a time when the English Parliament was threatening to repeal the Massachusetts Charter. With the Mathers' support, Dudley was appointed governor by the Crown and returned to Boston in 1702. Contrary to the promises that he had made to the Mathers, Governor Dudley proved a divisive and high-handed executive, reserving his patronage for a small circle composed of transatlantic merchants, Anglicans, and religious liberals such as Thomas Brattle, Benjamin Colman, and John Leverett. In the context of Queen Anne's War (1702–1713), Cotton Mather preached and published against Governor Dudley, whom Mather accused of corruption and misgovernment. Mather sought unsuccessfully to have Dudley replaced by Sir Charles Hobby. Mather was outmaneuvered by Dudley, and the political rivalry left him increasingly isolated at a time when Massachusetts society was steadily moving away from the Puritan tradition that Mather represented. Relationship with Harvard and Yale Cotton Mather was a fellow of Harvard College from 1690 to 1702, and at various times sat on its Board of Overseers. His father Increase had succeeded John Rogers as president of Harvard in 1684, first as acting president (1684–1686), later with the title of "rector" (1686–1692, during much of which period he was away from Massachusetts, pleading the Puritans' case before the Royal Court in London), and finally with the full title of president (1692–1701). Increase was unwilling to move permanently to the Harvard campus in Cambridge, Massachusetts, since his congregation in Boston was much larger than the Harvard student body, which at the time counted only a few dozen.
Instructed by a committee of the Massachusetts General Assembly that the president of Harvard had to reside in Cambridge and preach to the students in person, Increase resigned in 1701 and was replaced by the Rev. Samuel Willard as acting president. Cotton Mather sought the presidency of Harvard, but in 1708 the fellows instead appointed a layman, John Leverett, who had the support of Governor Dudley. The Mathers disapproved of the increasing independence and liberalism of the Harvard faculty, which they regarded as laxity. Cotton Mather came to see the Collegiate School, which had moved in 1716 from Saybrook to New Haven, Connecticut, as a better vehicle for preserving the Puritan orthodoxy in New England. In 1718, Cotton Mather convinced Boston-born British businessman Elihu Yale to make a charitable gift sufficient to ensure the school's survival. It was also Mather who suggested that the school change its name to Yale College after it accepted that donation. Cotton Mather sought the presidency of Harvard again after Leverett's death in 1724, but the fellows offered the position to the Rev. Joseph Sewall (son of Judge Samuel Sewall, who had repented publicly for his role in the Salem witch trials). When Sewall turned it down, Mather once again hoped that he might get the appointment. Instead, the fellows offered it to one of their own number, the Rev. Benjamin Colman, an old rival of Mather. When Colman refused it, the presidency went finally to the Rev. Benjamin Wadsworth. Advocacy for smallpox inoculation The practice of smallpox inoculation (as distinguished from the later practice of vaccination) was developed possibly in 8th-century India or 10th-century China, and by the 17th century it had reached Turkey. It was also practiced in western Africa, though when it began there is not known. Inoculation, or rather variolation, involved infecting a person via a cut in the skin with exudate from a patient with a relatively mild case of smallpox (variola), to bring about a manageable and recoverable infection that would provide later immunity. By the beginning of the 18th century, the Royal Society in England was discussing the practice of inoculation, and the smallpox epidemic in 1713 spurred further interest. It was not until 1721, however, that England recorded its first case of inoculation. Early New England Smallpox was a serious threat in colonial America, most devastating to Native Americans, but also to Anglo-American settlers. New England suffered smallpox epidemics in 1677, 1689–90, and 1702. It was highly contagious, and mortality could reach as high as 30 percent. Boston had been plagued by smallpox outbreaks in 1690 and 1702. During this era, public authorities in Massachusetts dealt with the threat primarily by means of quarantine. Incoming ships were quarantined in Boston Harbor, and any smallpox patients in town were held under guard or in a "pesthouse". In 1716, Onesimus, one of Mather's slaves, explained to Mather how he had been inoculated as a child in Africa. Mather was fascinated by the idea. By July 1716, he had read an endorsement of inoculation by Dr Emanuel Timonius of Constantinople in the Philosophical Transactions. Mather then declared, in a letter to Dr John Woodward of Gresham College in London, that he planned to press Boston's doctors to adopt the practice of inoculation should smallpox reach the colony again. By 1721, a whole generation of young Bostonians was vulnerable and memories of the last epidemic's horrors had by and large disappeared.
Smallpox returned on April 22 of that year, when HMS Seahorse arrived from the West Indies with smallpox on board. Despite attempts to protect the town through quarantine, nine known cases of smallpox appeared in Boston by May 27, and by mid-June the disease was spreading at an alarming rate. As a new wave of smallpox hit the area and continued to spread, many residents fled to outlying rural settlements. The combination of exodus, quarantine, and outside traders' fears disrupted business in the capital of the Bay Colony for weeks. Guards were stationed at the House of Representatives to keep Bostonians from entering without special permission. The death toll reached 101 in September, and the Selectmen, powerless to stop it, "severely limited the length of time funeral bells could toll." As one response, legislators allocated a thousand pounds from the treasury to help the people who, under these conditions, could no longer support their families. On June 6, 1721, Mather sent an abstract of reports on inoculation by Timonius and Jacobus Pylarinus to local physicians, urging them to consult about the matter. He received no response. Next, Mather pleaded his case to Dr. Zabdiel Boylston, who tried the procedure on his youngest son and two slaves—one grown and one a boy. All recovered in about a week. Boylston inoculated seven more people by mid-July. The epidemic peaked in October 1721, with 411 deaths; by February 26, 1722, Boston was again free from smallpox. The total number of cases since April 1721 came to 5,889, with 844 deaths—more than three-quarters of all the deaths in Boston during 1721. Meanwhile, Boylston had inoculated 287 people, with six resulting deaths (the two case-fatality rates are compared in the brief calculation below). Inoculation debate Boylston and Mather's inoculation crusade "raised a horrid Clamour" among the people of Boston. Both Boylston and Mather were "Object[s] of their Fury; their furious Obloquies and Invectives", which Mather acknowledges in his diary. Boston's Selectmen, consulting a doctor who claimed that the practice caused many deaths and only spread the infection, forbade Boylston from performing it again. The New-England Courant published writers who opposed the practice. The editorial stance was that the Boston populace feared that inoculation spread, rather than prevented, the disease; however, some historians, notably H. W. Brands, have argued that this position was a result of the contrarian positions of editor-in-chief James Franklin (a brother of Benjamin Franklin). Public discourse ranged in tone from organized arguments by John Williams of Boston, who posted "several arguments proving that inoculating the smallpox is not contained in the law of Physick, either natural or divine, and therefore unlawful", to those put forth in a pamphlet by Dr. William Douglass of Boston, entitled The Abuses and Scandals of Some Late Pamphlets in Favour of Inoculation of the Small Pox (1721), on the qualifications of inoculation's proponents. (Douglass was exceptional at the time for holding a medical degree from Europe.) At the extreme, in November 1721, someone hurled a lighted grenade into Mather's home. Medical opposition Several opponents of smallpox inoculation, among them John Williams, stated that there were only two laws of physick (medicine): sympathy and antipathy. In his estimation, inoculation was neither a sympathy toward a wound or a disease nor an antipathy toward one, but the creation of one.
For this reason, its practice violated the natural laws of medicine, transforming health care practitioners into those who harm rather than heal. As with most colonists, Williams' Puritan beliefs were enmeshed in every aspect of his life, and he used the Bible to state his case. He quoted Matthew 9:12, when Jesus said: "It is not the healthy who need a doctor, but the sick." William Douglass proposed a more secular argument against inoculation, stressing the importance of reason over passion and urging the public to be pragmatic in their choices. In addition, he demanded that ministers leave the practice of medicine to physicians, and not meddle in areas where they
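The force of the figures quoted above lies in the gap between the two case-fatality rates, which the prose leaves implicit. The following is a minimal sketch of that arithmetic, using only the numbers given in the text (5,889 natural cases with 844 deaths, against 287 inoculations with 6 deaths); the rounding and presentation are illustrative, not taken from the source.

```python
# Case-fatality comparison for Boston's 1721 smallpox epidemic,
# using only the figures quoted in the text above.

natural_cases, natural_deaths = 5_889, 844    # epidemic, April 1721 to February 1722
inoculated, inoculated_deaths = 287, 6        # Boylston's inoculation patients

natural_cfr = natural_deaths / natural_cases       # about 0.143
inoculated_cfr = inoculated_deaths / inoculated    # about 0.021

print(f"Natural infection: {natural_cfr:.1%} case fatality")     # 14.3%
print(f"Inoculation:       {inoculated_cfr:.1%} case fatality")  # 2.1%
print(f"Natural infection was roughly {natural_cfr / inoculated_cfr:.0f}x as deadly")
```

On these figures, roughly one natural case in seven proved fatal, against about one in fifty among the inoculated.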
experiences in the war to write the book Psychological Warfare (1948), regarded by many in the field as a classic text. He eventually rose to the rank of colonel in the reserves. He was recalled to advise the British forces in the Malayan Emergency and the U.S. Eighth Army in the Korean War. While he was known to call himself a "visitor to small wars", he refrained from becoming involved in the Vietnam War, but is known to have done work for the Central Intelligence Agency. In 1969 CIA officer Miles Copeland Jr. wrote that Linebarger was "perhaps the leading practitioner of 'black' and 'gray' propaganda in the Western world". According to Joseph Burkholder Smith, a former CIA operative, Linebarger conducted classes in psychological warfare for CIA agents at his home in Washington under cover of his position at the School of Advanced International Studies. He traveled extensively and became a member of the Foreign Policy Association, and was called upon to advise President John F. Kennedy. Marriage and family In 1936, Linebarger married Margaret Snow. They had a daughter in 1942 and another in 1947. They divorced in 1949. In 1950, Linebarger married Genevieve Collins; they had no children. They remained married until his death from a heart attack in 1966, at Johns Hopkins University Medical Center in Baltimore, Maryland, at age 53. Linebarger had expressed a wish to retire to Australia, which he had visited in his travels. He is buried in Arlington National Cemetery, Section 35, Grave Number 4712. His widow, Genevieve Collins Linebarger, was interred with him on November 16, 1981. Case history debate Linebarger has long been rumored to have been "Kirk Allen", the fantasy-haunted subject of "The Jet-Propelled Couch," a chapter in psychologist Robert M. Lindner's best-selling 1954 collection The Fifty-Minute Hour. According to Cordwainer Smith scholar Alan C. Elms, this speculation first reached print in Brian Aldiss's 1973 history of science fiction, Billion Year Spree; Aldiss, in turn, claimed to have received the information from science fiction fan and scholar Leon Stover. More recently, both Elms and librarian Lee Weinstein have gathered circumstantial evidence to support the case for Linebarger's being Allen, but both concede there is no direct proof that Linebarger was ever a patient of Lindner's or that he suffered from a disorder similar to that of Kirk Allen. Science fiction style According to Frederik Pohl, Linebarger's identity as "Cordwainer Smith" was secret until his death. ("Cordwainer" is an archaic word for "a worker in cordwain or cordovan leather; a shoemaker", and a "smith" is "one who works in iron or other metals; esp. a blacksmith or farrier": two kinds of skilled workers with traditional materials.) Linebarger also employed the literary pseudonyms "Carmichael Smith" (for his political thriller Atomsk), "Anthony Bearden" (for his poetry) and "Felix C. Forrest" (for the novels Ria and Carola). Smith's stories are unusual, sometimes being written in narrative styles closer to traditional Chinese stories than to most English-language fiction, as well as reminiscent of the Genji tales of Lady Murasaki. The total volume of his science fiction output is relatively small, because of his time-consuming profession and his early death.
Smith's works consist of one novel, originally published in two volumes in edited form as The Planet Buyer, also known as The Boy Who Bought Old Earth (1964), and The Underpeople (1968), and later restored to its original form as Norstrilia (1975); and 32 short stories (collected in The Rediscovery of Man (1993), including two versions of the short story "War No. 81-Q"). Linebarger's cultural links to China are partially expressed in the pseudonym "Felix C. Forrest", which he used in addition to "Cordwainer Smith": his godfather Sun Yat-sen suggested to Linebarger that he adopt the Chinese name "Lin Bai-lo", which may be roughly translated as "Forest of Incandescent Bliss". ("Felix" is Latin for "happy".) In his later years, Linebarger proudly wore a tie with the Chinese characters for this name embroidered on it. As an expert in psychological warfare, Linebarger was very interested in the newly developing fields of psychology and psychiatry. He used many of their concepts in his fiction. His fiction often has religious overtones or motifs, particularly evident in characters who have no control over their actions. James B. Jordan argued for the importance of Anglicanism to Smith's works as far back as 1949. But Linebarger's daughter Rosana Hart has indicated that he did not become an Anglican until 1950, and was not strongly interested in religion until later still. The introduction to the collection The Rediscovery of Man notes that from around 1960 Linebarger became more devout and expressed this in his writing. Linebarger's works are sometimes included in analyses of Christianity in fiction, along with the works of authors such as C. S. Lewis and J.R.R. Tolkien. Most of Smith's stories are set in the far future, between 4,000 and 14,000 years from now. After the Ancient Wars devastate Earth, humans, ruled by the Instrumentality of Mankind, rebuild and expand to the stars in the Second Age of Space around 6000 AD. Over the next few thousand years, mankind spreads to thousands of worlds and human life becomes safe but sterile, as robots and the animal-derived Underpeople take over many human jobs and humans themselves are genetically programmed as embryos for specified duties. Towards the end of this period, the Instrumentality attempts to revive old cultures and languages in a process known as the Rediscovery of Man, in which humans emerge from their mundane utopia and the Underpeople are freed from slavery. For years, Linebarger had a pocket notebook which he had filled with ideas about the Instrumentality and additional stories in the series. But while he was in a small boat on a lake or bay in the mid-1960s, he leaned over the side, and the notebook fell out of his breast pocket into the water, where it was lost forever. Another story claims that he accidentally left the notebook in a restaurant in Rhodes in 1965. With the notebook gone, he felt empty of ideas and decided to start a new series, an allegory of Middle Eastern politics. Smith's stories describe a long future history of Earth. The settings range from a postapocalyptic landscape with walled cities, defended by agents of the Instrumentality, to a state of sterile utopia, in which freedom can be found only deep below the surface, in long-forgotten and buried anthropogenic strata. These features may place Smith's works within the Dying Earth subgenre of science fiction, though they are ultimately more optimistic and distinctive than is typical of that subgenre.
Smith's most celebrated short story is his first-published, "Scanners Live in Vain", which led many of its earliest readers to assume that "Cordwainer Smith" was a new pen name for one of the established giants of the genre. It was selected as one of the best science fiction short stories of the pre-Nebula Award period by the Science Fiction and Fantasy Writers of America, appearing in The Science Fiction Hall of Fame Volume One, 1929-1964. "The Ballad of Lost C'Mell" was similarly honored, appearing in The Science Fiction Hall of Fame, Volume Two. After "Scanners Live in Vain", Smith's next story did not appear for several years, but from 1955 until his death in 1966 his stories appeared regularly, for the most part in Galaxy Science Fiction. His universe featured strange and vivid creations, such as: The planet Norstrilia (Old North Australia), a semi-arid planet where an immortality drug called stroon is harvested from gigantic, virus-infected sheep each weighing more than 100 tons. Norstrilians are nominally the
richest people in the galaxy and defend their immensely valuable stroon with sophisticated weapons (as shown in the story "Mother Hitton's Littul Kittons"). However, extremely high taxes ensure that everyone on the planet lives a frugal, rural life, like the farmers of old Australia, to keep the Norstrilians tough. The punishment world Shayol (cf. Sheol), where criminals are punished by the regrowth and harvesting of their organs for transplanting. Planoforming spacecraft, which are crewed by humans telepathically linked with cats to defend against the attacks of malevolent entities in space, which are perceived by the humans as dragons, and by the cats as gigantic rats, in "The Game of Rat and Dragon". The Underpeople, animals modified into human form and intelligence to fulfill servile roles, and treated as property. Several stories feature clandestine efforts to liberate the Underpeople and grant them civil rights. They are seen everywhere throughout regions controlled by the Instrumentality. Names of Underpeople have a single-letter prefix based on their animal species. Thus C'Mell ("The Ballad of Lost C'Mell") is cat-derived; D'Joan ("The Dead Lady of Clown Town"), a Joan of Arc figure, is descended from dogs; and B'dikkat ("A Planet Named Shayol") has bovine ancestors. Habermans and their supervisors, Scanners, who are essential for space travel, but at the cost of having their sensory nerves cut to block the "pain of space", and who perceive only by vision and various life-support implants.
A technological breakthrough removes the need for the treatment, but resistance among the Scanners to their perceived loss of status ensues, forming the basis of the story "Scanners Live in Vain". Early works in the timeline include neologisms which are not explained to any great extent, but serve to produce an atmosphere of strangeness. These words are usually derived from non-English words. For instance, manshonyagger derives from the German words "menschen" meaning, in some senses, "men" or "mankind", and "jäger", meaning a hunter, and refers to war machines that roam the wild lands between the walled cities and prey on men, except for those they can identify as Germans. Another example is "Meeya Meefla", the only city to have preserved its name from the pre-atomic era: evidently Miami, Florida, from its abbreviated form (as on road signs) "MIAMI FLA". Character names in the stories often derive from words in languages other than English. Smith seemed particularly fond of using numbers for this purpose. For instance, the name "Lord Sto Odin" in the story "Under Old Earth" is derived from the Russian words for "One hundred and one", сто один; it also suggests the name of the Norse god Odin. Quite a few of the names mean "five-six" in different languages, including both the robot Fisi (fi[ve]-si[x]), the dead Lady Panc Ashash (in Sanskrit "pañcha" [पञ्च] is "five" and "ṣaṣ" [षष्] is "six"), Limaono (lima-ono, Hawaiian and/or Fijian), Englok (ng5-luk6 [五-六], in Cantonese), Goroke (go-roku [五-六], Japanese) and Femtiosex ("fifty-six" in Swedish) in "The Dead Lady of Clown Town" as well as the main character in "Think Blue, Count Two", Veesey-koosey, which is an English transcription of the Finnish
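As a reader's aid, the number-based derivations listed in the paragraph above can be restated compactly; the sketch below simply tabulates the examples the text gives and adds nothing beyond them, with romanizations approximated in ASCII.

```python
# Compact restatement of the number-derived character names described above.
# The derivations are those given in the text; romanizations are approximate.

numeric_names = {
    "Lord Sto Odin":    ("Russian",         "sto odin, 'one hundred and one'"),
    "Fisi":             ("English",         "fi[ve]-si[x]"),
    "Lady Panc Ashash": ("Sanskrit",        "pancha (five) + shash (six)"),
    "Limaono":          ("Hawaiian/Fijian", "lima (five) + ono (six)"),
    "Englok":           ("Cantonese",       "ng (five) + luk (six)"),
    "Goroke":           ("Japanese",        "go (five) + roku (six)"),
    "Femtiosex":        ("Swedish",         "femtiosex, 'fifty-six'"),
}

for name, (language, derivation) in numeric_names.items():
    print(f"{name:16} {language:16} {derivation}")
```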
Store book, in the PHIGS 3D API
Chirp spread spectrum, a modulation concept, part of the standard IEEE 802.15.4a CSS
Proprietary software, software that is not distributed with source code; sometimes known as closed-source software
Computational social science, academic sub-disciplines concerned with computational approaches to the social sciences
Content Scramble System, an encryption algorithm in DVDs
Content Services Switch, a family of load balancers produced by Cisco
CSS code, a type of error-correcting code in quantum information theory
Arts and entertainment
Campus SuperStar, a popular Singapore school-based singing competition
Closed Shell Syndrome, a fictional disease in the Ghost in the Shell television series
Comcast/Charter Sports Southeast, a defunct southeast U.S. sports cable television network
Counter-Strike: Source, an online first-person shooter computer game
CSS (band), Cansei de Ser Sexy, a Brazilian electro-rock band
Government
Canadian Survey Ship, of the Canadian Hydrographic Service
Center for Strategic Studies in Iran
Central Security Service, the military component of the US National Security Agency
Central Superior Services of Pakistan
Chicago South Shore and South Bend Railroad, a U.S. railroad
Committee for State Security (Bulgaria), a former name for the Bulgarian secret service
KGB, the Committee for State Security, the Soviet Union's security agency
Supreme Security Council of Moldova, named (CSS) in Romanian
Military
Combat service support
Confederate Secret Service, the secret service operations of the Confederate States of America during the American Civil War
Confederate States Ship, a ship of the historical naval branch of the Confederate States armed forces
Dongfeng missile, a Chinese surface-to-surface missile system (NATO designation code CSS)
Schools and education
Centennial Secondary School (disambiguation)
Certificat de Sécurité Sauvetage, the former name of Certificat de formation à la sécurité, the French national degree required to be a flight attendant in France
Chase Secondary School, British Columbia, Canada
Clementi Secondary School, Hong Kong SAR, China
College of Social Studies, at Wesleyan University, Middletown, Connecticut, USA
College of St. Scholastica, Duluth, Minnesota, USA
Colorado Springs School, Colorado Springs, CO, USA
Columbia Secondary School, New York,
network
Citizens Signpost Service, a body of the European Commission
Community Service Society of New York
Congregation of the Sacred Stigmata, or Stigmatines, a Catholic religious order
Cryptogamic Society of Scotland, a Scottish botanical research society
Medicine and health science
Cancer-specific survival, survival rates specific to cancer type
Cytokine storm syndrome
Churg–Strauss syndrome, a type of autoimmune vasculitis, also known as eosinophilic granulomatosis with polyangiitis
Cross-sectional study, a study collecting data across a population at one point in time
Coronary steal syndrome, the syndrome resulting from the blood flow problem called coronary steal
Carotid sinus syndrome (carotid sinus syncope)—see Carotid sinus § Disease of the carotid sinus
Other uses
Chessington South railway station, a National Rail station code in England
Chicago South Shore and South Bend Railroad, a freight railroad between Chicago, Illinois, and South Bend, Indiana
Constant surface speed, a mode of machine tool operation, an aspect of speeds and feeds
Context-sensitive solutions, in transportation planning
Customer satisfaction survey, a tool used in customer satisfaction research
Cyclic steam stimulation, an oil field extraction technique; see Steam injection (oil industry)
Cab Signaling System, a train protection system
Close-space sublimation, a method for producing thin film solar cells, esp. cadmium telluride
Competition Scratch Score, an element of the golf handicapping system in the United Kingdom and Republic of Ireland
The ISO 639-3 code for Southern Ohlone, also known as Costanoan, an indigenous language or language family spoken in California
See also
Cross-site
by the Charlton family, descendants of the noted Border Reivers family of the English Middle March, the lodge formed part of the extensive Hesleyside estate, located some 10 miles from Hesleyside Hall itself. When the property was acquired by the Chesters Estate in 1887, the 'Cairnsyke' estate consisted of the main house, stable block, hunting-dog kennels and gamekeeper's bothy, together with several thousand acres of moorland, much of which was managed to support shooting of the formerly populous black grouse. Although much of this land has now reverted to fellside or has been otherwise managed as part of the commercial timber plantations of Kielder Forest, areas of heather moorland persist, dotted with remnants of the shooting butts. It is with reference to these fells that the 1887
expected to encounter off Madagascar. With his ambitious enterprise failing, Kidd became desperate to cover its costs. Yet he failed to attack several ships when given a chance, including a Dutchman and a New York privateer. Both were out of bounds of his commission. The latter would have been considered out of bounds because New York was part of the territories of the Crown, and Kidd was authorised in part by the New York governor. Some of the crew deserted Kidd the next time that Adventure Galley anchored offshore. Those who decided to stay on made constant open threats of mutiny. Kidd killed one of his own crewmen on 30 October 1697. Kidd's gunner William Moore was on deck sharpening a chisel when a Dutch ship appeared. Moore urged Kidd to attack the Dutchman, an act that would have been considered piratical, since the nation was not at war with England, but also certain to anger Dutch-born King William. Kidd refused, calling Moore a lousy dog. Moore retorted, "If I am a lousy dog, you have made me so; you have brought me to ruin and many more." Kidd reportedly dropped an ironbound bucket on Moore, fracturing his skull. Moore died the following day. Seventeenth-century English admiralty law allowed captains great leeway in using violence against their crew, but outright killing was not permitted. Kidd said to his ship's surgeon that he had "good friends in England, that will bring me off for that". Accusations of piracy Escaped prisoners told stories of being hoisted up by the arms and "drubbed" (thrashed) with a drawn cutlass by Kidd. But on one occasion, crew members ransacked the trading ship Mary and tortured several of its crew members while Kidd and the other captain, Thomas Parker, conversed privately in Kidd's cabin. When Kidd found out what had happened, he was outraged and forced his men to return most of the stolen property. Kidd was declared a pirate very early in his voyage by a Royal Navy officer, to whom he had promised "thirty men or so". Kidd sailed away during the night to preserve his crew, rather than subject them to Royal Navy impressment. The letter of marque was intended to protect a privateer's crew from such impressment. On 30 January 1698, Kidd raised French colours and took his greatest prize, the 400-ton Quedagh Merchant, an Indian ship hired by Armenian merchants. It was loaded with satins, muslins, gold, silver, and an incredible variety of East Indian merchandise, as well as extremely valuable silks. The captain of Quedagh Merchant was an Englishman named Wright, who had purchased passes from the French East India Company promising him the protection of the French Crown. After realising the captain of the taken vessel was an Englishman, Kidd tried to persuade his crew to return the ship to its owners, but they refused, claiming that their prey was perfectly legal. Kidd was commissioned to take French ships, and an Armenian ship counted as French if it had French passes. In an attempt to maintain his tenuous control over his crew, Kidd relented and kept the prize. When news of his capture of this ship reached England, however, officials classified Kidd as a pirate. Various naval commanders were ordered to "pursue and seize the said Kidd and his accomplices" for the "notorious piracies" they had committed. Kidd kept the French sea passes of the Quedagh Merchant, as well as the vessel itself. 
While the passes were at best a dubious defence of his capture, British admiralty and vice-admiralty courts (especially in North America) heretofore had often winked at privateers' excesses into piracy. Kidd might have hoped that the passes would provide the legal figleaf that would allow him to keep Quedagh Merchant and her cargo. Renaming the seized merchantman as Adventure Prize, he set sail for Madagascar. On 1 April 1698, Kidd reached Madagascar. After meeting privately with trader Tempest Rogers (who would later be accused of trading and selling Kidd's looted East India goods), he found the first pirate of his voyage, Robert Culliford (the same man who had stolen Kidd's ship at Antigua years before) and his crew aboard Mocha Frigate. Two contradictory accounts exist of how Kidd proceeded. According to A General History of the Pyrates, published more than 25 years after the event by an author whose identity is disputed by historians, Kidd made peaceful overtures to Culliford: he "drank their Captain's health", swearing that "he was in every respect their Brother", and gave Culliford "a Present of an Anchor and some Guns". This account appears to be based on the testimony of Kidd's crewmen Joseph Palmer and Robert Bradinham at his trial. The other version was presented by Richard Zacks in his 2002 book The Pirate Hunter: The True Story of Captain Kidd. According to Zacks, Kidd was unaware that Culliford had only about 20 crew with him, and felt ill-manned and ill-equipped to take Mocha Frigate until his two prize ships and crews arrived. He decided to leave Culliford alone until these reinforcements arrived. After Adventure Prize and Rouparelle reached port, Kidd ordered his crew to attack Culliford's Mocha Frigate. However, his crew refused to attack Culliford and threatened instead to shoot Kidd. Zacks does not refer to any source for his version of events. Both accounts agree that most of Kidd's men abandoned him for Culliford. Only 13 remained with Adventure Galley. Deciding to return home, Kidd left the Adventure Galley behind, ordering her to be burnt because she had become worm-eaten and leaky. Before burning the ship, he salvaged every last scrap of metal, such as hinges. With the loyal remnant of his crew, he returned to the Caribbean aboard the Adventure Prize, stopping first at St. Augustine's Bay for repairs. Some of his crew later returned to North America on their own as passengers aboard Giles Shelley's ship Nassau. The 1698 Act of Grace, which offered a royal pardon to pirates in the Indian Ocean, specifically exempted Kidd (and Henry Every) from receiving a pardon, in Kidd's case due to his association with prominent Whig statesmen. Kidd became aware both that he was wanted and that he could not make use of the Act of Grace upon his arrival in Anguilla, his first port of call since St. Augustine's Bay. Trial and execution Prior to returning to New York City, Kidd knew that he was wanted as a pirate and that several English men-of-war were searching for him. Realizing that Adventure Prize was a marked vessel, he cached it in the Caribbean Sea, sold off his remaining plundered goods through pirate and fence William Burke, and continued towards New York aboard a sloop. He deposited some of his treasure on Gardiners Island, hoping to use his knowledge of its location as a bargaining tool. Kidd landed in Oyster Bay to avoid mutinous crew who had gathered in New York City. 
To avoid them, Kidd sailed around the eastern tip of Long Island, and doubled back along the Sound to Oyster Bay. He felt this was a safer passage than the highly trafficked Narrows between Staten Island and Brooklyn. New York Governor Bellomont, also an investor, was away in Boston, Massachusetts. Aware of the accusations against Kidd, Bellomont was afraid of being implicated in piracy himself and believed that presenting Kidd to England in chains was his best chance to survive. He lured Kidd into Boston with false promises of clemency, and ordered him arrested on 6 July 1699. Kidd was placed in Stone Prison, spending most of the time in solitary confinement. His wife, Sarah, was also arrested and imprisoned. The conditions of Kidd's imprisonment were extremely harsh, and were said to have driven him at least temporarily insane. By then, Bellomont had turned against Kidd and other pirates, writing that the inhabitants of Long Island were "a lawless and unruly people" protecting pirates who had "settled among them". After over a year, Kidd was sent to England for questioning by the Parliament of England. The civil government had changed and the new Tory ministry hoped to use Kidd as a tool to discredit the Whigs who had backed him, but Kidd refused to name names, naively confident his patrons would reward his loyalty by interceding on his behalf. There is speculation that he could have been spared had he talked. Finding Kidd politically useless, the Tory leaders sent him to stand trial before the High Court of Admiralty in London, for the charges of piracy on high seas and the murder of William Moore. Whilst awaiting trial, Kidd was confined in the infamous Newgate Prison. He wrote several letters to King William requesting clemency. Kidd had two lawyers to assist in his defence. He was shocked to learn at his trial that he was charged with murder. He was found guilty on all charges (murder and five counts of piracy) and sentenced to death. He was hanged in a public execution on 23 May 1701, at Execution Dock, Wapping, in London. He had to be hanged twice. On the first attempt, the hangman's rope broke and Kidd survived. Although
soluble protein that binds Ca2+ ions (a second messenger in signal transduction), rendering them inactive. The Ca2+ is bound with low affinity, but high capacity, and can be released on a signal (see inositol trisphosphate). Calreticulin is located in storage compartments associated with the endoplasmic reticulum and is considered an ER resident protein. The term "Mobilferrin" is considered to be the same as calreticulin by some sources. Function Calreticulin binds to misfolded proteins and prevents them from being exported from the endoplasmic reticulum to the Golgi apparatus. A similar quality-control molecular chaperone, calnexin, performs the same service for soluble proteins as does calreticulin; however, calnexin is a membrane-bound protein. Both proteins, calnexin and calreticulin, have the function of binding to oligosaccharides containing terminal glucose residues, thereby targeting them for degradation. The ability of calreticulin and calnexin to bind carbohydrates associates them with the lectin protein family. In normal cellular function, trimming of glucose residues off the core oligosaccharide added during N-linked glycosylation is a part of protein processing. If "overseer" enzymes note that a protein is misfolded, enzymes within the rER will re-add glucose residues so that calreticulin/calnexin can bind to the protein and prevent it from proceeding to the Golgi. This leads these aberrantly folded proteins down a path whereby they are targeted for degradation. Studies on transgenic mice reveal that calreticulin is a cardiac embryonic gene that is essential during development. Calreticulin and calnexin are also integral proteins in the production of MHC class I proteins. As newly synthesized MHC class I α-chains enter the endoplasmic reticulum, calnexin binds to them, retaining them in a partly folded state. After the β2-microglobulin binds to the peptide-loading complex (PLC), calreticulin (along with ERp57) takes over the job of chaperoning the MHC class I protein while tapasin links the complex to the transporter associated with antigen processing (TAP) complex. This association prepares the MHC class I molecule for binding an antigen for presentation on the cell surface. Transcription regulation Calreticulin is also found in the nucleus, suggesting that it may
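The glucose-trimming cycle described above amounts to a retention loop: glycoproteins carrying terminal glucose residues are bound by calreticulin or calnexin and held in the ER, proteins that reach their native fold are released toward the Golgi, and persistently misfolded ones are eventually routed to degradation. The sketch below is only a schematic illustration of that decision logic, not a biochemical model; the function name and the attempt cutoff are invented for the example.

```python
# Schematic sketch (not a biochemical model) of the calreticulin/calnexin
# quality-control cycle described above: glucose-tagged glycoproteins are
# retained and re-checked; correctly folded ones are released to the Golgi;
# persistently misfolded ones are targeted for degradation.

MAX_FOLDING_ATTEMPTS = 5  # illustrative cutoff, not a measured value


def er_quality_control(folded_after) -> str:
    """folded_after(attempt) returns True once the protein has reached
    its native fold on the given pass through the cycle."""
    for attempt in range(1, MAX_FOLDING_ATTEMPTS + 1):
        if folded_after(attempt):
            return "exported to the Golgi"   # glucose trimmed, chaperone releases it
        # Still misfolded: an "overseer" enzyme re-adds glucose, so
        # calreticulin/calnexin rebinds the protein and retains it in the ER.
    return "targeted for degradation"        # never folds; leaves the cycle


# Example: a protein that folds on its third pass, and one that never folds.
print(er_quality_control(lambda attempt: attempt >= 3))  # exported to the Golgi
print(er_quality_control(lambda attempt: False))         # targeted for degradation
```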
While lakeside settlements are evident in Ireland from 4500 BC, these settlements are not crannogs, as they were not intended to be islands. Despite having a lengthy chronology, their use was not at all consistent or unchanging. Crannog construction and occupation were at their peak in Scotland from about 800 BC to AD 200. Not surprisingly, crannogs have useful defensive properties, although there appears to have been more to their prehistoric use than simple defense, as very few weapons or signs of destruction appear in excavations of prehistoric crannogs. In Ireland, crannogs were at their zenith during the Early Historic period, when they were the homes and retreats of kings, lords, prosperous farmers and, occasionally, socially marginalised groups, such as monastic hermits or metalsmiths who could work in isolation. Despite scholarly concepts supporting a strict Early Historic evolution, Irish excavations are increasingly uncovering examples that date from the "missing" Iron Age in Ireland. Construction The construction techniques for a crannog (prehistoric or otherwise) are as varied as the multitude of finished forms that make up the archaeological record. Island settlement in Scotland and Ireland is manifest across the entire range of possibilities, from entirely natural, small islets to completely artificial islets; definitions therefore remain contentious. For crannogs in the strict sense, the construction effort typically began on a shallow reef or rise in the lochbed. When timber was available, many crannogs were surrounded by a circle of wooden piles, with axe-sharpened bases that were driven into the bottom, forming a circular enclosure that helped to retain the main mound and prevent erosion. The piles could also be joined together by mortise and tenon joints, or by large holes cut to accept specially shaped timbers designed to interlock and provide structural rigidity. On other examples, interior surfaces were built up with any mixture of clay, peat, stone, timber or brush – whatever was available. In some instances, more than one structure was built on a crannog. In other types of crannogs, builders and occupants added large stones to the waterline of small natural islets, extending and enlarging them over successive phases of renewal. Larger crannogs could be occupied by extended families or communal groups, and access was either by logboat or coracle. Evidence for timber or stone causeways exists on a large number of crannogs. The causeways may have been slightly submerged; this has been interpreted as a device to make access difficult, but may also be a result of loch-level fluctuations over the ensuing centuries or millennia. Organic remains are often found in excellent condition on these water-logged sites. The bones of cattle, deer, and swine have been found in excavated crannogs, while remains of wooden utensils and even dairy products have been completely preserved for several millennia. Fire and reconstruction In June 2021, the Loch Tay Crannog was seriously damaged in a fire, but funding was provided to repair the structure and conserve the retained museum materials. Professor Alison Phipps OBE of Glasgow University, holder of the UNESCO Chair in Refugee Integration through Languages and the Arts, and the African artist Tawona Sithole considered its future and its impact as a symbol of common human history and of 'potent ways of healing', including restarting the creative weaving with Soay sheep wool in 'a thousand touches'.
proper excavation failed to accurately measure or record stratigraphy, thereby failing to provide a secure context for artefact finds. Thus only extremely limited interpretations are possible. Preservation and conservation techniques for waterlogged materials such as logboats or structural material were all but non-existent, and a number of extremely important finds were destroyed as a result: in some instances dried out for firewood. From about 1900 to the late 1940s there was very little crannog excavation in Scotland, while some important and highly influential contributions were made in Ireland. In contrast, relatively few crannogs have been excavated since the Second World War. But this number has steadily grown, especially since the early 1980s, and may soon surpass pre-war totals. The overwhelming majority of crannogs show multiple phases of occupation and re-use, often extending over centuries. Thus the re-occupiers may have viewed crannogs as a legacy that was alive in local tradition and memory. Crannog reoccupation is important and significant, especially in the many instances of crannogs built near natural islets, which were often completely unused. This long chronology of use has been verified by both radiocarbon dating and more precisely by dendrochronology. Interpretations of crannog function have not been static; instead they appear to have changed in both the archaeological and historic records. Rather than the simple domestic residences of prehistory, the medieval crannogs were increasingly seen as strongholds of the upper class or regional political players, such as the Gaelic chieftains of the O'Boylans and McMahons in County Monaghan and the Kingdom of Airgíalla, until the 17th century. In Scotland, the medieval and post-medieval use of crannogs is also documented into the early 18th century. Whether this increase in status is real, or just a by-product of increasingly complex material assemblages, remains to be convincingly validated. History The earliest-known constructed crannog is the completely artificial Neolithic islet of Eilean Dòmhnuill, Loch Olabhat on North Uist in Scotland. Eilean Domhnuill has produced radiocarbon dates ranging from 3650 to 2500 BC. Irish crannogs appear in middle Bronze Age layers at Ballinderry (1200–600 BC). Recent radiocarbon dating of worked timber found in Loch Bhorghastail on the Isle of Lewis has produced evidence of crannogs as old as 3380-3630 BC. Prior to the Bronze Age, the existence of artificial island settlement in Ireland is not as clear. While lakeside settlements are evident in Ireland from 4500 BC, these settlements are not crannogs, as they were not intended to be islands. Despite having a lengthy chronology, their use was not at all consistent or unchanging. Crannog construction and occupation was at its peak in Scotland from about 800 BC to AD 200. Not surprisingly, crannogs have useful defensive properties, although there appears to be more significance to prehistoric use than simple defense, as very few weapons or evidence for destruction appear in excavations of prehistoric crannogs. In Ireland, crannogs were at their zenith during the Early Historic period, when they were the homes and retreats of kings, lords, prosperous farmers and, occasionally, socially marginalised groups, such as monastic hermits or metalsmiths who could work in isolation. 
2003Nov09 2003-Nov-9 2003-Nov-09 2003-Nov-9, Sunday 2003. 9. – The official format in Hungary, point after year and day, month name with small initial. Following shorter formats also can be used: 2003. . 9., 2003. 11. 9., 2003. XI. 9. 2003.11.9 using dots and no leading zeros, common in China. 2003.11.09 2003/11/09 using slashes and leading zeros, common in Internet in Japan. 2003/11/9 03/11/09 20031109 : the "basic format" profile of ISO 8601, an 8-digit number providing monotonic date codes, common in computing and increasingly used in dated computer file names. It is used in the standard iCalendar file format defined in RFC 5545. A big advantage of the ISO 8601 "basic format" is that a simple textual sort is equivalent to a sort by date. It is also extended through the universal big-endian format clock time: 9 November 2003, 18h 14m 12s, or 2003/11/9/18:14:12 or (ISO 8601) 2003-11-09T18:14:12. Gregorian, month–day–year (MDY) This sequence is used primarily in the Philippines and the United States. This date format was commonly used alongside the little-endian form in the United Kingdom until the mid-20th century and can be found in both defunct and modern print media such as the London Gazette and The Times, respectively. This format was also commonly used by several English-language print media in many former British colonies and also one of two formats commonly used in India during British Raj era until the mid-20th century. In the United States, it is said as of Sunday, November 9, for example, although usage of "the" isn't uncommon (e.g. Sunday, November the 9th, and even November the 9th, Sunday, are also possible and readily understood). Thursday, November 9, 2006 November 9, 2006 Nov 9, 2006 Nov-9-2006 Nov-09-2006 11/9/2006 or 11/09/2006 11-09-2006 or 11-9-2006 11.09.2006 or 11.9.2006 11.09.06 11/09/06 20060911 The modern convention is to avoid using the ordinal (th, st, rd, nd) form of numbers when the day follows the month (July 4 or July 4, 2006). The ordinal was common in the past and is still sometimes used ([the] 4th [of] July or July 4th). Gregorian, year–day–month (YDM) This date format is used in Kazakhstan, Latvia, Nepal, and Turkmenistan. According to the official rules of documenting dates by governmental authorities, the long date format in Kazakh is written in the year–day–month order, e.g. 2006 5 April (). Standards There are several standards that specify date formats: ISO 8601 Data elements and interchange formats – Information interchange – Representation of dates and times specifies YYYY-MM-DD (the separators are optional, but only hyphens are allowed to be used), where all values are fixed length numeric, but also allows YYYY-DDD, where DDD is the ordinal number of the day within the year, e.g. 2001–365. RFC 3339 Date and Time on the Internet: Timestamps specifies YYYY-MM-DD, i.e. a particular subset of the options allowed by ISO 8601. RFC 5322 Internet Message Format specifies day month year where day is one or two digits, month is a three letter month abbreviation, and year is four digits. Usage overloading Many numerical forms can create confusion when used in international correspondence, particularly when abbreviating the year to its final two digits, with no context. For example, "07/08/06" could refer to either 7 August 2006 or July 8, 2006 (or 1906, or the sixth year of any century), or 6 August 2007. 
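Most of the notations listed above can be produced from a single calendar date with ordinary library formatting. The following is a minimal sketch using Python's standard datetime module (the date and time are the 9 November 2003, 18:14:12 example used above); it illustrates the orderings rather than prescribing any locale's rules.

```python
from datetime import datetime

d = datetime(2003, 11, 9, 18, 14, 12)

print(d.strftime("%Y-%m-%d"))           # 2003-11-09  (ISO 8601 extended format)
print(d.strftime("%Y%m%d"))             # 20031109    (ISO 8601 "basic format")
print(d.strftime("%Y-%m-%dT%H:%M:%S"))  # 2003-11-09T18:14:12 (big-endian date and time)
print(f"{d:%d %B %Y}")                  # 09 November 2003 (day-month-year)
print(f"{d:%B} {d.day}, {d:%Y}")        # November 9, 2003 (month-day-year)
print(d.strftime("%m/%d/%y"))           # 11/09/03
```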
In the United States, dates are rarely written in purely numerical forms in formal writing, although they are very common elsewhere; when numerical forms are used, the month appears first. In the United Kingdom, while it is regarded as acceptable albeit less common to write month-name day, year, this order is never used when written numerically. However, as an exception, the American shorthand "9/11" is widely understood as referring to the September 11, 2001 terrorist attacks. When numbers are used to represent months, a significant amount of confusion can arise from the ambiguity of a date order; especially when the numbers representing the day, month, or year are low, it can be impossible to tell which order is being used. This can be clarified by using four digits to represent years, and naming the month; for example, "Feb" instead of "02". The ISO 8601 date order with four-digit years: YYYY-MM-DD (introduced in ISO 2014), is specifically chosen to be unambiguous. The ISO 8601 standard also has the advantage of being language independent and is therefore useful when there may be no language context and a universal application is desired (expiration dating on export products, for example). Many Internet sites use YYYY-MM-DD, and those using other conventions often use -MMM- for the month to further clarify and avoid ambiguity (2001-MAY-09, 9-MAY-2001, MAY 09 2001, etc.). In addition, the International Organization for Standardization considers its ISO 8601 standard to make sense from a logical perspective. Mixed units, for example, feet and inches, or pounds and ounces, are normally written with the largest unit first, in decreasing order. Numbers are also written in that order, so the digits of 2006 indicate, in order, the millennium, the century within the millennium, the decade within the century, and the year within the decade. The only date order that is consistent with these well-established conventions is year–month–day. A plain text list of dates with this format can be easily sorted by file managers, word processors, spreadsheets, and other software tools with built-in sorting functions. Some database systems use an eight-digit YYYYMMDD representation to handle date values. Naming folders with YYYY-MM-DD at the beginning allows them to be listed in date order when sorting by name – especially useful for organizing document libraries. An early U.S. Federal Information Processing Standard recommended 2-digit years. This is now widely recognized as extremely problematic, because of the year 2000 problem. Some U.S. government agencies now use ISO 8601 with 4-digit years. When transitioning from one date notation to another, people often write both styles; for example Old Style and New Style dates in the transition from the Julian to the Gregorian calendar. Advantages for ordering in sequence One of the advantages of using the ISO 8601 date format is that the lexicographical order (ASCIIbetical) of the representations is equivalent to the chronological order of the dates, assuming that all dates are in the same time zone. Thus dates can be sorted using simple string comparison algorithms, and indeed by any left to right collation. For example: 2003-02-28 (28 February 2003) sorts before 2006-03-01 (1 March 2006) which sorts before 2015-01-30 (30 January 2015) The YYYY-MM-DD layout is the only common format that can provide this. Sorting other date representations involves some parsing of the date strings. 
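The sorting property is easy to check directly. Here is a small illustration in Python using the example dates above; it relies on plain string sorting with no date parsing, and the DD/MM/YYYY list is included only to show the contrast.

```python
iso_dates = ["2015-01-30", "2003-02-28", "2006-03-01"]
print(sorted(iso_dates))
# ['2003-02-28', '2006-03-01', '2015-01-30'] -- lexicographic order equals chronological order

dmy_dates = ["30/01/2015", "28/02/2003", "01/03/2006"]
print(sorted(dmy_dates))
# ['01/03/2006', '28/02/2003', '30/01/2015'] -- sorted as text, not by date
```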
This also works when a time in 24-hour format is included after the date, as long as all times are understood to be in the same time zone.
ISO 8601 is used widely where concise, human-readable yet easily computable and unambiguous dates are required, although many applications store dates internally as UNIX time and only convert to ISO 8601 for display. It is worth noting that all modern computer Operating Systems retain date information of files outside of their titles, allowing the user to choose which format they prefer and have them sorted thus, irrespective of the files' names. Specialized usage Day and year only The U.S. military sometimes uses a system, which they call "Julian date format" that indicates the year and the actual day out of the 365 days of the year (and thus a designation of the month would not be needed). For example, "11 December 1999" can be written in some contexts as "1999345" or "99345", for the 345th day of 1999. This system is most often used in US military logistics since it simplifies the process of calculating estimated shipping and arrival dates. For example: say a tank engine takes an estimated 35 days to ship by sea from the US to South Korea. If the engine is sent on 06104 (Friday, 14 April 2006), it should arrive on 06139 (Friday, 19 May). Note that outside of the US military and some US government agencies, including the Internal Revenue Service, this format is usually referred to as "ordinal date", rather than "Julian date". Such ordinal date formats are also used by many computer programs (especially those for mainframe systems). Using a three-digit Julian day number saves one byte of computer storage over a two-digit month plus two-digit day, for example, "January 17" is 017 in Julian versus 0117 in month-day format. OS/390 or its successor, z/OS, display dates in yy.ddd format for most operations. UNIX time stores time as a number in seconds since the beginning of the UNIX Epoch (1970-01-01). Another "ordinal" date system ("ordinal" in the sense of advancing in value by one as the date advances
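The ordinal ("Julian") notation described above maps directly onto the %j (day-of-the-year) directive of Python's standard datetime module. A minimal sketch, reusing the figures from the text:

```python
from datetime import datetime, timedelta

d = datetime(1999, 12, 11)
print(d.strftime("%Y%j"), d.strftime("%y%j"))   # 1999345 99345

# Shipping example: 35 days after day 104 of 2006.
sent = datetime.strptime("06104", "%y%j")       # Friday, 14 April 2006
arrival = sent + timedelta(days=35)
print(arrival.strftime("%y%j"))                 # 06139 (Friday, 19 May 2006)
```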
Regional examples Sri Lanka bellanbedipalassa pothana Ibbankatuwa Megalithic Stones Udaranchamadama England Hepburn woods, Northumberland Estonia Jõelähtme (Rebala) stone-cist graves, Harju County Guatemala Mundo Perdido, Tikal, Petén Department Israel Tel Kabri (Area A), Upper Galilee Scotland Balblair cist, Beauly, Inverness Dunan Aula, Craignish, Argyll and Bute Holm Mains Farm, Inverness See also Kistvaen Dartmoor kistvaens Stone box grave
under a cairn or long barrow. Several cists are sometimes found close together within the same cairn or barrow. Often ornaments have been found within an excavated cist, indicating the wealth or prominence of the interred individual. This old word is preserved in the Nordic languages as "kista" in Swedish and "kiste" in Danish and Norwegian, where it is the word for a funerary coffin.
The center of an abelian group, G, is all of G. The center of the Heisenberg group, H, is the set of 3 × 3 upper triangular matrices that have 1s on the diagonal and whose only other non-zero entry, c, sits in the top right corner. The center of a nonabelian simple group is trivial. The center of the dihedral group, Dn, is trivial for odd n ≥ 3. For even n ≥ 4, the center consists of the identity element together with the 180° rotation of the polygon. The center of the quaternion group, Q8 = {1, −1, i, −i, j, −j, k, −k}, is {1, −1}. The center of the symmetric group, Sn, is trivial for n ≥ 3. The center of the alternating group, An, is trivial for n ≥ 4. The center of the general linear group over a field F, GL(n, F), is the collection of scalar matrices, {s·In | s ∈ F, s ≠ 0}. The center of the orthogonal group, O(n, F), is {In, −In}. The center of the special orthogonal group, SO(n), is the whole group when n = 2, is {In, −In} when n is even and greater than 2, and is trivial when n is odd. The center of the unitary group, U(n), is {e^(iθ)·In | θ ∈ [0, 2π)}. The center of the special unitary group, SU(n), is {e^(iθ)·In | θ = 2πk/n, k = 0, 1, …, n − 1}. The center of the multiplicative group of non-zero quaternions is the multiplicative group of non-zero real numbers. Using the class equation, one can prove that the center of any non-trivial finite p-group is non-trivial. If the quotient group G/Z(G) is cyclic, G is abelian (and hence G = Z(G), so G/Z(G) is trivial). The center of the megaminx group is a cyclic group of order 2, and the center of the kilominx group is trivial. Higher centers Quotienting out by the center of a group yields a sequence of groups called the upper central series: G0 = G, G1 = G0/Z(G0), G2 = G1/Z(G1), … The kernel of the map G → Gi is the ith center of G (second center, third center, etc.) and is denoted Zi(G). Concretely, the (i + 1)-st center consists of those elements that commute with all elements up to an element of the ith center. Following this definition, one can define the 0th center of a group to be the identity subgroup. This can be continued to transfinite ordinals by transfinite induction; the union of all the higher centers is called the hypercenter.
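Small cases can be checked by brute force. Below is a minimal sketch in Python, assuming permutations are written as tuples in which p[i] is the image of i; the particular generators chosen for D4 are just one convenient presentation, and brute force is only practical for tiny groups like these.

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p after q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def center(group):
    """Elements that commute with every element of the group."""
    return [g for g in group if all(compose(g, h) == compose(h, g) for h in group)]

# Symmetric group S3: all permutations of 3 points -> center is trivial.
S3 = list(permutations(range(3)))
print(center(S3))   # [(0, 1, 2)]  (identity only)

# Dihedral group D4 as permutations of the square's vertices 0..3:
# four rotations and four reflections; its center is {identity, 180-degree rotation}.
e = (0, 1, 2, 3)
r = (1, 2, 3, 0)           # rotation by 90 degrees
s = (0, 3, 2, 1)           # a reflection
rots = [e, r, compose(r, r), compose(r, compose(r, r))]
D4 = rots + [compose(rot, s) for rot in rots]
print(center(D4))  # [(0, 1, 2, 3), (2, 3, 0, 1)]
```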
the gentry. Limited reforms were enough to antagonise the ruling class but not enough to satisfy the radicals. Despite its unpopularity, the Rump was a link with the old constitution and helped to settle England down and make it secure after the biggest upheaval in its history. By 1653, France and Spain had recognised England's new government. Reforms Though the Church of England was retained, episcopacy was suppressed and the Act of Uniformity 1558 was repealed in September 1650. Mainly on the insistence of the Army, many independent churches were tolerated, although everyone still had to pay tithes to the established church. Some small improvements were made to law and court procedure; for example, all court proceedings were now conducted in English rather than in Law French or Latin. However, there were no widespread reforms of the common law. This would have upset the gentry, who regarded the common law as reinforcing their status and property rights. The Rump passed many restrictive laws to regulate people's moral behaviour, such as closing down theatres and requiring strict observance of Sunday. This antagonised most of the gentry. Dismissal Cromwell, aided by Thomas Harrison, forcibly dismissed the Rump on 20 April 1653, for reasons that are unclear. Theories are that he feared the Rump was trying to perpetuate itself as the government, or that the Rump was preparing for an election which could return an anti-Commonwealth majority. Many former members of the Rump continued to regard themselves as England's only legitimate constitutional authority. The Rump had not agreed to its own dissolution; their legal, constitutional view it was unlawful was based on Charles' concessionary Act prohibiting the dissolution of Parliament without its own consent (on 11 May 1641, leading to the entire Commonwealth being the latter years of the Long Parliament in their majority view). Barebone's Parliament, July–December 1653 The dissolution of the Rump was followed by a short period in which Cromwell and the Army ruled alone. Nobody had the constitutional authority to call an election, but Cromwell did not want to impose a military dictatorship. Instead, he ruled through a 'nominated assembly' which he believed would be easy for the Army to control since Army officers did the nominating. Barebone's Parliament was opposed by former Rumpers and ridiculed by many gentries as being an assembly of 'inferior' people. However, over 110 of its 140 members were lesser gentry or of higher social status. (An exception was Praise-God Barebone, a Baptist merchant after whom the Assembly got its derogatory nickname.) Many were well educated. The assembly reflected the range of views of the officers who nominated it. The Radicals (approximately 40) included a hard core of Fifth Monarchists who wanted to be rid of Common Law and any state control of religion. The Moderates (approximately 60) wanted some improvements within the existing system and might move to either the radical or conservative side depending on the issue. The Conservatives (approximately 40) wanted to keep the status quo (since Common Law protected the interests of the gentry, and tithes and advowsons were valuable property). Cromwell saw Barebone's Parliament as a temporary legislative body which he hoped would produce reforms and develop a constitution for the Commonwealth. However, members were divided over key issues, only 25 had previous parliamentary experience, and although many had some legal training, there were no qualified lawyers. 
Cromwell seems to have expected this group of 'amateurs' to produce reform without management or direction. When the radicals mustered enough support to defeat a bill which would have preserved the status quo in religion, the conservatives, together with many moderates, surrendered their authority back to Cromwell who sent soldiers to clear the rest of the Assembly. Barebone's Parliament was over. The Protectorate, 1653–1659 Throughout 1653, Cromwell and the Army slowly dismantled the machinery of the Commonwealth state. The English Council of State, which had assumed the executive function formerly held by the King and his Privy Council, was forcibly dissolved by Cromwell on 20 April, and in its place a new council, filled with Cromwell's own chosen men, was installed. Three days after Barebone's Parliament dissolved itself, the Instrument of Government was adopted by Cromwell's council and a new state structure, now known historically as The Protectorate, was given its shape. This new constitution granted Cromwell sweeping powers as Lord Protector, an office which ironically had much the same role and powers as the King had under the monarchy, a fact not lost on Cromwell's critics. On 12 April 1654, under the terms of the Tender of Union, the Ordinance for uniting Scotland into one Commonwealth with England was issued by the Lord Protector and proclaimed in Scotland by the military governor of Scotland, General George Monck, 1st Duke of Albemarle. The ordinance declared that "the people of Scotland should be united with the people of England into one Commonwealth and under one Government" and decreed that a new "Arms of the Commonwealth", incorporating the Saltire, should be placed on "all the public seals, seals of office, and seals of bodies civil or corporate, in Scotland" as "a badge of this Union". First Protectorate Parliament Cromwell and his Council of State spent the first several months of 1654 preparing for the First Protectorate Parliament by drawing up a set of 84 bills for consideration. The Parliament was freely elected (as free as such elections could be in the 17th century) and as such, the Parliament was filled with a wide range of political interests, and as such did not accomplish any of its goals; it was dissolved as soon as law would allow by Cromwell having passed none of Cromwell's proposed bills. Rule of the Major-Generals and Second Protectorate Parliament Having decided that Parliament was not an efficient means of getting his policies enacted, Cromwell instituted a system of direct military rule of England during a period known as the Rule of the Major-Generals; all of England was divided into ten regions, each was governed directly by one of Cromwell's Major-Generals, who were given sweeping powers to collect taxes and enforce the peace. The Major-Generals were highly unpopular, a fact that they themselves noticed and many urged Cromwell to call another Parliament to give his rule legitimacy. Unlike the prior Parliament, which had been open to all eligible males in the Commonwealth, the new elections specifically excluded Catholics and Royalists from running or voting; as a result, it was stocked with members who were more in line with Cromwell's own politics. The first major bill to be brought up for debate was the Militia Bill, which was ultimately voted down by the House. As a result, the authority of the Major-Generals to collect taxes to support their own regimes ended, and the Rule of the Major Generals came to an end. 
The second piece of major legislation was the passage of the Humble Petition and Advice, a sweeping constitutional reform which had two purposes. The first was to reserve for Parliament certain rights, such as a three-year fixed-term (which the Lord Protector was required to abide by) and to reserve for the Parliament the sole right of taxation. The second, as a concession to Cromwell, was to make the Lord Protector a hereditary position and to convert the title to a formal constitutional Kingship. Cromwell refused the title of King, but accepted the rest of the legislation, which was passed in final form on 25 May 1657. A second session of the Parliament met in 1658; it allowed previously excluded MPs (who had been not allowed to take their seats because of Catholic and/or Royalist leanings) to take their seats, however, this made the Parliament far less compliant to the wishes of Cromwell and the Major-Generals; it accomplished little in the way of a legislative agenda and was dissolved after a few months. Richard Cromwell and the Third Protectorate Parliament On the death of Oliver Cromwell in 1658, his son, Richard Cromwell, inherited the title, Lord Protector. Richard had never served in the Army, which meant he lost control over the Major-Generals that had been the source of his own father's power. The Third Protectorate Parliament was summoned in late 1658 and was seated on 27 January 1659. Its first act was to confirm Richard's role as Lord Protector, which it did by a sizeable, but not overwhelming, majority. Quickly, however, it became apparent that Richard had no control over the Army and divisions quickly developed in the Parliament. One faction called for a recall of the Rump Parliament and a return to the constitution of the Commonwealth, while another preferred the existing constitution. As the parties grew increasingly quarrelsome, Richard dissolved it. He was quickly removed from power, and the remaining Army leadership recalled the
a few months at a time. Several administrative structures were tried, and several Parliaments called and seated, but little in the way of meaningful, lasting legislation was passed. The only force keeping it together was the personality of Oliver Cromwell, who exerted control through the military by way of the "Grandees", being the Major-Generals and other senior military leaders of the New Model Army. Not only did Cromwell's regime crumble into near anarchy upon his death and the brief administration of his son, but the monarchy he overthrew was restored in 1660, and its first act was officially to erase all traces of any constitutional reforms of the Republican period. Still, the memory of the Parliamentarian cause, dubbed Good Old Cause by the soldiers of the New Model Army, lingered on. It would carry through English politics and eventually result in a constitutional monarchy. The Commonwealth period is better remembered for the military success of Thomas Fairfax, Oliver Cromwell, and the New Model Army. Besides resounding victories in the English Civil War, the reformed Navy under the command of Robert Blake defeated the Dutch in the First Anglo-Dutch War which marked the first step towards England's naval supremacy. In Ireland, the Commonwealth period is remembered for Cromwell's brutal subjugation of the Irish, which continued the policies of the Tudor and Stuart periods. 1649–1653 Rump Parliament The Rump was created by Pride's Purge of those members of the Long Parliament who did not support the political position of the Grandees in the New Model Army. Just before and after the execution of King Charles I on 30 January 1649, the Rump passed a number of acts of Parliament creating the legal basis for the republic. With the abolition of the monarchy, Privy Council and the House of Lords, it had unchecked executive and legislative power. The English Council of State, which replaced the Privy Council, took over many of the executive functions of the monarchy. It was selected by the Rump, and most of its members were MPs. However, the Rump depended on the support of the Army with which it had a very uneasy relationship. After the execution of Charles I, the House of Commons abolished the monarchy and the House of Lords. It declared the people of England "and of all the Dominions and Territories thereunto belonging" to be henceforth under the governance of a "Commonwealth", effectively a republic. Structure In Pride's Purge, all members of parliament (including most of the political Presbyterians) who would not accept the need to bring the King to trial had been removed. Thus the Rump never had more than two hundred members (less than half the number of the Commons in the original Long Parliament). They included: supporters of religious independents who did not want an established church and some of whom had sympathies with the Levellers; Presbyterians who were willing to countenance the trial and execution of the King; and later admissions, such as formerly excluded MPs who were prepared to denounce the Newport Treaty negotiations with the King. Most Rumpers were gentry, though there was a higher proportion of lesser gentry and lawyers than in previous parliaments. Less than one-quarter of them were regicides. This left the Rump as basically a conservative body whose vested interests in the existing land ownership and legal systems made it unlikely to want to reform them. 
Issues and achievements For the first two years of the Commonwealth, the Rump faced economic depression and the risk of invasion from Scotland and Ireland. By 1653 Cromwell and the Army had largely eliminated these threats. There were many disagreements amongst factions of the Rump. Some wanted a republic, but others favoured retaining some type of monarchical government. Most of England's traditional ruling classes regarded the Rump as an illegal government made up of regicides and upstarts. However, they were also aware that the Rump might be all that stood in the way of an outright military dictatorship. High taxes, mainly to pay the Army, were resented by the gentry.
During the campaign Evers told reporters that his main purpose in running was to encourage registration of black voters. In 1978, Evers ran as an independent for the U.S. Senate seat vacated by Democrat James Eastland. He finished in third place behind his opponents, Democrat Maurice Dantin and Republican Thad Cochran. He received 24 percent of the vote, likely siphoning off African-American votes that would have otherwise gone to Dantin. Cochran won the election with a plurality of 45 percent of the vote. With the shift of white voters into the Republican Party in the state (and the rest of the South), Cochran was continuously re-elected to his Senate seat. After his failed Senate race, Evers briefly switched political parties and became a Republican. In 1983, Evers ran as an independent for governor of Mississippi but lost to the Democrat Bill Allain. Republican Leon Bramlett of Clarksdale, also known as a college All-American football player, finished second with 39 percent of the vote. Books Evers wrote two autobiographies or memoirs: Evers (1971), written with Grace Halsell and self-published; and Have No Fear, written with Andrew Szanton and published by John Wiley & Sons (1997). Personal life Evers was briefly married to Christine Evers until their marriage ended in annulment. In 1951, Evers married Nannie L. Magee, with whom he had four daughters. The couple divorced in June 1974. Evers lived in Brandon, Mississippi, and served as station manager of WMPR 90.1 FM in Jackson. On July 22, 2020, Evers died in Brandon at age 97. Media portrayal Evers was portrayed by Bill Cobbs in the film Ghosts of Mississippi (1996). Honors 1969: Evers was named "Man of the Year" by the NAACP. 2012: Evers was honored with a marker on the Mississippi Blues Trail in Fayette.
enforcement of the right to vote, Evers was elected mayor of Fayette, Mississippi. He was the first African-American mayor elected in his state since Reconstruction. In a rural area dominated by cotton plantations, Fayette had a majority of black residents. Its minority white community was known to be hostile toward blacks. Evers' election as mayor had great symbolic significance statewide and attracted national attention. The NAACP named Evers the 1969 Man of the Year. Author John Updike mentioned Evers in his popular novel Rabbit Redux (1971). Evers popularized the slogan, "Hands that picked cotton can now pick the mayor." Evers served many terms as mayor of Fayette. Admired by some, he alienated others with his inflexible stands on various issues. Evers did not like to share or delegate power. Evers lost the Democratic primary for mayor in 1981 to Kennie Middleton. Four years later, Evers defeated Middleton in the primaries and won back the office of mayor. In 1989, Evers lost the nomination once again to political rival Kennie Middleton. In his response to the defeat, Evers accepted said he was tired and that: "Twenty years is enough. I'm tired of being out front. Let someone else be out front." Political influence Evers endorsed Ronald Reagan for President of the United States during the 1980 United States presidential election. Evers later attracted controversy for his support of judicial nominee Charles W. Pickering, a Republican, who was nominated by President George H. W. Bush for a seat on the U.S. Court of Appeals. Evers criticized the NAACP and other organizations for opposing Pickering, as he said the candidate had a record of supporting the civil rights movement in Mississippi. Evers befriended a range of people from sharecroppers to presidents. He was an informal adviser to politicians as diverse as Lyndon B. Johnson, George C. Wallace, Ronald Reagan and Robert F. Kennedy. Evers severely criticized such national leaders as Roy Wilkins, Stokely Carmichael, H. Rap Brown and Louis Farrakhan over various issues. Evers was a member of the Republican Party for 30 years when he spoke warmly of the 2008 election of Barack Obama as the first black President of the United States. During the 2016 presidential election Evers supported Donald Trump's presidential campaign. Electoral campaigns In 1968, Evers used volunteer armed guards to protect his Jackson residence during the campaign when he competed with six white candidates for the vacant congressional seat which became open when John Bell Williams was elected governor. In 1971, Evers ran in the gubernatorial general election, but was defeated by Democrat William "Bill" Waller, 601,222 (77 percent) to 172,762 (22.1 percent). Waller had prosecuted the murder case of suspect Byron De La Beckwith. When Waller gave a victory speech on election night, Evers drove across town to a local TV station to congratulate him. A reporter later wrote that Waller's aides learned Evers was in the building and tried to hustle the governor-elect out of the studio as soon as the interview ended. They were not quite quick enough. Surrounded by photographers, reporters, and television crews, Evers approached Waller's car just as it was about to pull out. Waller and his wife were in the back seat. "I just wanted to congratulate you," said Evers. "Whaddya say, Charlie?" boomed Waller. His wife leaned across with a stiff smile and shook the loser's hand. 
Gold and Welch sequences. These are not generated by linear-feedback-shift-registers, but have to be stored in lookup tables. Advantages of asynchronous CDMA over other techniques Efficient practical utilization of the fixed frequency spectrum In theory CDMA, TDMA and FDMA have exactly the same spectral efficiency, but, in practice, each has its own challenges – power control in the case of CDMA, timing in the case of TDMA, and frequency generation/filtering in the case of FDMA. TDMA systems must carefully synchronize the transmission times of all the users to ensure that they are received in the correct time slot and do not cause interference. Since this cannot be perfectly controlled in a mobile environment, each time slot must have a guard time, which reduces the probability that users will interfere, but decreases the spectral efficiency. Similarly, FDMA systems must use a guard band between adjacent channels, due to the unpredictable Doppler shift of the signal spectrum because of user mobility. The guard bands will reduce the probability that adjacent channels will interfere, but decrease the utilization of the spectrum. Flexible allocation of resources Asynchronous CDMA offers a key advantage in the flexible allocation of resources i.e. allocation of spreading sequences to active users. In the case of CDM (synchronous CDMA), TDMA, and FDMA the number of simultaneous orthogonal codes, time slots, and frequency slots respectively are fixed, hence the capacity in terms of the number of simultaneous users is limited. There are a fixed number of orthogonal codes, time slots or frequency bands that can be allocated for CDM, TDMA, and FDMA systems, which remain underutilized due to the bursty nature of telephony and packetized data transmissions. There is no strict limit to the number of users that can be supported in an asynchronous CDMA system, only a practical limit governed by the desired bit error probability since the SIR (signal-to-interference ratio) varies inversely with the number of users. In a bursty traffic environment like mobile telephony, the advantage afforded by asynchronous CDMA is that the performance (bit error rate) is allowed to fluctuate randomly, with an average value determined by the number of users times the percentage of utilization. Suppose there are 2N users that only talk half of the time, then 2N users can be accommodated with the same average bit error probability as N users that talk all of the time. The key difference here is that the bit error probability for N users talking all of the time is constant, whereas it is a random quantity (with the same mean) for 2N users talking half of the time. In other words, asynchronous CDMA is ideally suited to a mobile network where large numbers of transmitters each generate a relatively small amount of traffic at irregular intervals. CDM (synchronous CDMA), TDMA, and FDMA systems cannot recover the underutilized resources inherent to bursty traffic due to the fixed number of orthogonal codes, time slots or frequency channels that can be assigned to individual transmitters. For instance, if there are N time slots in a TDMA system and 2N users that talk half of the time, then half of the time there will be more than N users needing to use more than N time slots. Furthermore, it would require significant overhead to continually allocate and deallocate the orthogonal-code, time-slot or frequency-channel resources. 
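The averaging argument can be illustrated with a toy numerical model of active-user counts (this is not a CDMA simulation; N = 50, the 50% activity factor and the trial count are arbitrary choices for the sketch).

```python
import random

# Toy model of the claim above: N always-active users give a constant
# interference level of N, while 2N users that are each active half of
# the time give a random level with the same mean N but a spread around it.
random.seed(1)
N, trials = 50, 20_000

def stats(samples):
    m = sum(samples) / len(samples)
    v = sum((x - m) ** 2 for x in samples) / len(samples)
    return round(m, 2), round(v, 2)

always_on = [N] * trials
bursty = [sum(random.random() < 0.5 for _ in range(2 * N)) for _ in range(trials)]

print(stats(always_on))   # (50.0, 0.0) -- constant interference level
print(stats(bursty))      # roughly (50, 25) -- same mean, binomial spread of 2N/4
```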
By comparison, asynchronous CDMA transmitters simply send when they have something to say and go off the air when they do not, keeping the same signature sequence as long as they are connected to the system. Spread-spectrum characteristics of CDMA Most modulation schemes try to minimize the bandwidth of this signal since bandwidth is a limited resource. However, spread-spectrum techniques use a transmission bandwidth that is several orders of magnitude greater than the minimum required signal bandwidth. One of the initial reasons for doing this was military applications including guidance and communication systems. These systems were designed using spread spectrum because of its security and resistance to jamming. Asynchronous CDMA has some level of privacy built in because the signal is spread using a pseudo-random code; this code makes the spread-spectrum signals appear random or have noise-like properties. A receiver cannot demodulate this transmission without knowledge of the pseudo-random sequence used to encode the data. CDMA is also resistant to jamming. A jamming signal only has a finite amount of power available to jam the signal. The jammer can either spread its energy over the entire bandwidth of the signal or jam only part of the entire signal. CDMA can also effectively reject narrow-band interference. Since narrow-band interference affects only a small portion of the spread-spectrum signal, it can easily be removed through notch filtering without much loss of information. Convolution encoding and interleaving can be used to assist in recovering this lost data. CDMA signals are also resistant to multipath fading. Since the spread-spectrum signal occupies a large bandwidth, only a small portion of this will undergo fading due to multipath at any given time. Like the narrow-band interference, this will result in only a small loss of data and can be overcome. Another reason CDMA is resistant to multipath interference is because the delayed versions of the transmitted pseudo-random codes will have poor correlation with the original pseudo-random code, and will thus appear as another user, which is ignored at the receiver. In other words, as long as the multipath channel induces at least one chip of delay, the multipath signals will arrive at the receiver such that they are shifted in time by at least one chip from the intended signal. The correlation properties of the pseudo-random codes are such that this slight delay causes the multipath to appear uncorrelated with the intended signal, and it is thus ignored. Some CDMA devices use a rake receiver, which exploits multipath delay components to improve the performance of the system. A rake receiver combines the information from several correlators, each one tuned to a different path delay, producing a stronger version of the signal than a simple receiver with a single correlation tuned to the path delay of the strongest signal. Frequency reuse is the ability to reuse the same radio channel frequency at other cell sites within a cellular system. In the FDMA and TDMA systems, frequency planning is an important consideration. The frequencies used in different cells must be planned carefully to ensure signals from different cells do not interfere with each other. In a CDMA system, the same frequency can be used in every cell,
is usually a Gilbert cell mixer in the circuitry. Synchronous CDMA exploits mathematical properties of orthogonality between vectors representing the data strings. For example, the binary string 1011 is represented by the vector (1, 0, 1, 1). Vectors can be multiplied by taking their dot product, by summing the products of their respective components (for example, if u = (a, b) and v = (c, d), then their dot product u·v = ac + bd). If the dot product is zero, the two vectors are said to be orthogonal to each other. Some properties of the dot product aid understanding of how W-CDMA works. If vectors a and b are orthogonal, then and: Each user in synchronous CDMA uses a code orthogonal to the others' codes to modulate their signal. An example of 4 mutually orthogonal digital signals is shown in the figure below. Orthogonal codes have a cross-correlation equal to zero; in other words, they do not interfere with each other. In the case of IS-95, 64-bit Walsh codes are used to encode the signal to separate different users. Since each of the 64 Walsh codes is orthogonal to all other, the signals are channelized into 64 orthogonal signals. The following example demonstrates how each user's signal can be encoded and decoded. Example Start with a set of vectors that are mutually orthogonal. (Although mutual orthogonality is the only condition, these vectors are usually constructed for ease of decoding, for example columns or rows from Walsh matrices.) An example of orthogonal functions is shown in the adjacent picture. These vectors will be assigned to individual users and are called the code, chip code, or chipping code. In the interest of brevity, the rest of this example uses codes v with only two bits. Each user is associated with a different code, say v. A 1 bit is represented by transmitting a positive code v, and a 0 bit is represented by a negative code −v. For example, if v = (v0, v1) = (1, −1) and the data that the user wishes to transmit is (1, 0, 1, 1), then the transmitted symbols would be (v, −v, v, v) = (v0, v1, −v0, −v1, v0, v1, v0, v1) = (1, −1, −1, 1, 1, −1, 1, −1). For the purposes of this article, we call this constructed vector the transmitted vector. Each sender has a different, unique vector v chosen from that set, but the construction method of the transmitted vector is identical. Now, due to physical properties of interference, if two signals at a point are in phase, they add to give twice the amplitude of each signal, but if they are out of phase, they subtract and give a signal that is the difference of the amplitudes. Digitally, this behaviour can be modelled by the addition of the transmission vectors, component by component. If sender0 has code (1, −1) and data (1, 0, 1, 1), and sender1 has code (1, 1) and data (0, 0, 1, 1), and both senders transmit simultaneously, then this table describes the coding steps: Because signal0 and signal1 are transmitted at the same time into the air, they add to produce the raw signal (1, −1, −1, 1, 1, −1, 1, −1) + (−1, −1, −1, −1, 1, 1, 1, 1) = (0, −2, −2, 0, 2, 0, 2, 0). This raw signal is called an interference pattern. The receiver then extracts an intelligible signal for any known sender by combining the sender's code with the interference pattern. The following table explains how this works and shows that the signals do not interfere with one another: Further, after decoding, all values greater than 0 are interpreted as 1, while all values less than zero are interpreted as 0. 
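The encoding, summation and decoding steps of this worked example can be transcribed directly into code. A minimal sketch in Python follows (the function names are the sketch's own; the values reproduce the tables described above).

```python
# Sender0 uses code (1, -1) with data (1, 0, 1, 1); sender1 uses code (1, 1)
# with data (0, 0, 1, 1). Decoding is the per-symbol dot product with a code.
def encode(code, data):
    out = []
    for bit in data:
        out += [c if bit else -c for c in code]   # 1 -> +code, 0 -> -code
    return out

def decode(code, raw):
    n = len(code)
    sums = [sum(c * r for c, r in zip(code, raw[i:i + n]))
            for i in range(0, len(raw), n)]
    return sums, [1 if s > 0 else 0 for s in sums]

code0, data0 = (1, -1), (1, 0, 1, 1)
code1, data1 = (1, 1), (0, 0, 1, 1)

signal0 = encode(code0, data0)                   # [1, -1, -1, 1, 1, -1, 1, -1]
signal1 = encode(code1, data1)                   # [-1, -1, -1, -1, 1, 1, 1, 1]
raw = [a + b for a, b in zip(signal0, signal1)]  # [0, -2, -2, 0, 2, 0, 2, 0]

print(decode(code0, raw))   # ([2, -2, 2, 2], [1, 0, 1, 1])
print(decode(code1, raw))   # ([-2, -2, 2, 2], [0, 0, 1, 1])
```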
For example, after decoding, data0 is (2, −2, 2, 2), but the receiver interprets this as (1, 0, 1, 1). Values of exactly 0 mean that the sender did not transmit any data, as in the following example: Assume signal0 = (1, −1, −1, 1, 1, −1, 1, −1) is transmitted alone. The following table shows the decode at the receiver: When the receiver attempts to decode the signal using sender1's code, the data is all zeros; therefore the cross-correlation is equal to zero and it is clear that sender1 did not transmit any data. Asynchronous CDMA When mobile-to-base links cannot be precisely coordinated, particularly due to the mobility of the handsets, a different approach is required. Since it is not mathematically possible to create signature sequences that are both orthogonal for arbitrarily random starting points and which make full use of the code space, unique "pseudo-random" or "pseudo-noise" sequences called spreading sequences are used in asynchronous CDMA systems. A spreading sequence is a binary sequence that appears random but can be reproduced in a deterministic manner by intended receivers. These spreading sequences are used to encode and decode a user's signal in asynchronous CDMA in the same manner as the orthogonal codes in synchronous CDMA (shown in the example above). These spreading sequences are statistically uncorrelated, and the sum of a large number of spreading sequences results in multiple access interference (MAI) that is approximated by a Gaussian noise process (following the central limit theorem in statistics). Gold codes are an example of a spreading sequence suitable for this purpose, as there is low correlation between the codes. If all of the users are received with the same power level, then the variance (e.g., the noise power) of the MAI increases in direct proportion to the number of users. In other words, unlike synchronous CDMA, the signals of other users will appear as noise to the signal of interest and interfere slightly with the desired signal in proportion to the number of users. All forms of CDMA use the spread-spectrum spreading factor to allow receivers to partially discriminate against unwanted signals. Signals encoded with the specified spreading sequences are received, while signals with different sequences (or the same sequences but different timing offsets) appear as wideband noise reduced by the spreading factor. Since each user generates MAI, controlling the signal strength is an important issue with CDMA transmitters. A CDM (synchronous CDMA), TDMA, or FDMA receiver can in theory completely reject arbitrarily strong signals using different codes, time slots or frequency channels due to the orthogonality of these systems. This is not true for asynchronous CDMA; rejection of unwanted signals is only partial. If any or all of the unwanted signals are much stronger than the desired signal, they will overwhelm it. This leads to a general requirement in any asynchronous CDMA system to approximately match the various signal power levels as seen at the receiver. In CDMA cellular, the base station uses a fast closed-loop power-control scheme to tightly control each mobile's transmit power. In 2019, schemes were developed to precisely estimate the required length of the codes as a function of Doppler and delay characteristics. Soon after, machine-learning-based techniques that generate sequences of a desired length and spreading properties were published as well. These are highly competitive with the classic Gold and Welch sequences.
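The claim that multiple-access interference grows with the number of equal-power users can be checked with a small simulation. The Python sketch below is illustrative only: it uses random ±1 sequences as stand-ins for proper Gold codes, assumes every user is received at the same power and sends a 1 bit, and measures the average squared deviation of the desired user's correlator output from its interference-free value.

import random

random.seed(7)
N = 128  # spreading factor: chips per data bit

def random_code(n):
    """Stand-in for a pseudo-noise spreading sequence: n random +/-1 chips."""
    return [random.choice((-1, 1)) for _ in range(n)]

def mai_variance(n_users, n_trials=200):
    """Average squared error at user 0's correlator when all users send a 1 bit."""
    total = 0.0
    for _ in range(n_trials):
        codes = [random_code(N) for _ in range(n_users)]
        received = [sum(chips) for chips in zip(*codes)]  # equal-power sum
        corr = sum(c * r for c, r in zip(codes[0], received)) / N
        total += (corr - 1.0) ** 2  # deviation from the interference-free value
    return total / n_trials

for users in (2, 5, 10, 20):
    print(f"{users:2d} users: MAI variance ~ {mai_variance(users):.3f}")
# The variance grows roughly linearly with the number of equal-power users;
# with truly orthogonal (synchronous) codes it would be exactly zero.

With orthogonal codes the deviation would be exactly zero; here it grows roughly in proportion to the number of users, which is why tight power control matters so much in asynchronous CDMA.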
These are not generated by linear-feedback-shift-registers, but have to be stored in lookup tables. Advantages of asynchronous CDMA over other techniques Efficient practical utilization of the fixed frequency spectrum In theory CDMA, TDMA and FDMA have exactly the same spectral efficiency, but, in practice, each has its own challenges – power control in the case of CDMA, timing in the case of TDMA, and frequency generation/filtering in the case of FDMA. TDMA systems must carefully synchronize the transmission times of all the users to ensure that they are received in the correct time slot and do not cause interference. Since this cannot be perfectly controlled in a mobile environment, each time slot must have a guard time, which reduces the probability that users will interfere, but decreases the spectral efficiency. Similarly, FDMA systems must use a guard band between adjacent channels, due to the unpredictable Doppler shift of the signal spectrum because of user mobility. The guard bands will reduce the probability that adjacent channels will interfere, but decrease the utilization of the spectrum. Flexible allocation of resources Asynchronous CDMA offers a key advantage in the flexible allocation of resources i.e. allocation of spreading sequences to active users. In the case of CDM (synchronous CDMA), TDMA, and FDMA the number of simultaneous orthogonal codes, time slots, and frequency slots respectively are fixed, hence the capacity in terms of the number of simultaneous users is limited. There are a fixed number of orthogonal codes, time slots or frequency bands that can be allocated for CDM, TDMA, and FDMA systems, which remain underutilized due to the bursty nature of telephony and packetized data transmissions. There is no strict limit to the number of users that can be supported in an asynchronous CDMA system, only a practical limit governed by the desired bit error probability since the SIR (signal-to-interference ratio) varies inversely with the number of users. In a bursty traffic environment like mobile telephony, the advantage afforded by asynchronous CDMA is that the performance (bit error rate) is allowed to fluctuate randomly, with an average value determined by the number of users times the percentage of utilization. Suppose there are 2N users that only talk half of the time, then 2N users can be accommodated with the same average bit error probability as N users that talk all of the time. The key difference here is that the bit error probability for N users talking all of the time is constant, whereas it is a random quantity (with the same mean) for 2N users talking half of the time. In other words, asynchronous CDMA is ideally suited to a mobile network where large numbers of transmitters each generate a relatively small amount of traffic at irregular intervals. CDM (synchronous CDMA), TDMA, and FDMA systems cannot recover the underutilized resources inherent to bursty traffic due to the fixed number of orthogonal codes, time slots or frequency channels that can be assigned to individual transmitters. For instance, if there are N time slots in a TDMA system and 2N users that talk half of the time, then half of the time there will be more than N users needing to use more than N time slots. Furthermore, it would require significant overhead to continually allocate and deallocate the orthogonal-code, time-slot or frequency-channel resources. 
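The 2N-users-talking-half-the-time argument above can be made concrete with a short calculation. The following Python sketch, with parameter values chosen purely for illustration, computes how often more than N of 2N independent half-time talkers are active at once, which is how often a hard-slotted TDMA or FDMA system with N channels would be oversubscribed.

from math import comb

def overflow_probability(n_slots, n_users, p_active=0.5):
    """P[more users are active than there are slots], users being independent."""
    return sum(comb(n_users, k) * p_active ** k * (1 - p_active) ** (n_users - k)
               for k in range(n_slots + 1, n_users + 1))

# 2N users who each talk half of the time, competing for N hard channels.
for n in (8, 16, 32, 64):
    p = overflow_probability(n_slots=n, n_users=2 * n)
    print(f"N = {n:3d}: P[more than N users active] = {p:.3f}")
# The hard-slotted system is oversubscribed a large fraction of the time, while
# an asynchronous CDMA cell merely sees a temporarily worse bit error rate.

An asynchronous CDMA cell has no such hard limit; during those peaks every user simply experiences a somewhat lower signal-to-interference ratio and a correspondingly higher bit error rate, as described above.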
that block material containing pornography, or controversial religious, political, or news-related content en route are often utilized by parents who do not permit their children to access content not conforming to their personal beliefs. Content filtering software can, however, also be used to block malware and other content that is or contains hostile, intrusive, or annoying material including adware, spam, computer viruses, worms, trojan horses, and spyware. Most content control software is marketed to organizations or parents. It is, however, also marketed on occasion to facilitate self-censorship, for example by people struggling with addictions to online pornography, gambling, chat rooms, etc. Self-censorship software may also be utilised by some in order to avoid viewing content they consider immoral, inappropriate, or simply distracting. A number of accountability software products are marketed as self-censorship or accountability software. These are often promoted by religious media and at religious gatherings. Criticism Filtering errors Overblocking Utilizing a filter that is overly zealous at filtering content, or mislabels content not intended to be censored can result in over blocking, or over-censoring. Over blocking can filter out material that should be acceptable under the filtering policy in effect, for example health related information may unintentionally be filtered along with porn-related material because of the Scunthorpe problem. Filter administrators may prefer to err on the side of caution by accepting over blocking to prevent any risk of access to sites that they determine to be undesirable. Content-control software was mentioned as blocking access to Beaver College before its name change to Arcadia University. Another example was the filtering of Horniman Museum. As well, over-blocking may encourage users to bypass the filter entirely. Underblocking Whenever new information is uploaded to the Internet, filters can under block, or under-censor, content if the parties responsible for maintaining the filters do not update them quickly and accurately, and a blacklisting rather than a whitelisting filtering policy is in place. Morality and opinion Many would not be satisfied with government filtering viewpoints on moral or political issues, agreeing that this could become support for propaganda. Many would also find it unacceptable that an ISP, whether by law or by the ISP's own choice, should deploy such software without allowing the users to disable the filtering for their own connections. In the United States, the First Amendment to the United States Constitution has been cited in calls to criminalise forced internet censorship. (See section below) Without adequate governmental supervision, content-filtering software could enable private companies to censor as they please. (See Religious or political censorship, below). Government utilisation or encouragement of content-control software is a component of Internet Censorship (not to be confused with Internet Surveillance, in which content is monitored and not necessarily restricted). The governments of countries such as the People's Republic of China, and Cuba are current examples of countries in which this ethically controversial activity is alleged to have taken place. Legal actions In 1998, a United States federal district court in Virginia ruled (Loudoun v. Board of Trustees of the Loudoun County Library) that the imposition of mandatory filtering in a public library violates the First Amendment. 
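The overblocking failure described above (the Scunthorpe problem) is easy to reproduce with a toy filter. The Python sketch below uses an invented one-word blocklist and two invented page titles purely for illustration, and contrasts naive substring matching with whole-word matching.

import re

BLOCKED_WORDS = {"sex"}  # toy blocklist; real products use far larger lists

def naive_filter(title):
    """Plain substring matching: the approach behind the Scunthorpe problem."""
    lowered = title.lower()
    return any(word in lowered for word in BLOCKED_WORDS)

def word_boundary_filter(title):
    """Match whole words only, which avoids this particular overblock."""
    return any(re.search(rf"\b{re.escape(word)}\b", title, re.IGNORECASE)
               for word in BLOCKED_WORDS)

pages = ["Sussex county health clinic", "Middlesex University admissions"]
for page in pages:
    print(f"{page!r}: naive blocks it: {naive_filter(page)}, "
          f"word-boundary blocks it: {word_boundary_filter(page)}")
# The naive filter blocks both harmless pages (overblocking); the word-boundary
# version passes them, though it can in turn underblock obfuscated spellings.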
In 1996 the US Congress passed the Communications Decency Act, banning indecency on the Internet. Civil liberties groups challenged the law under the First Amendment, and in 1997 the Supreme Court ruled in their favor. Part of the civil liberties argument, especially from groups like the Electronic Frontier Foundation, was that parents who wanted to block sites could use their own content-filtering software, making government involvement unnecessary. In the late 1990s, groups such as the Censorware Project began reverse-engineering the content-control software and decrypting the blacklists to determine what kind of sites the software blocked. This led to legal action alleging violation of the "Cyber Patrol" license agreement. They discovered that such tools routinely blocked unobjectionable sites while also failing to block intended targets. Some content-control software companies responded by claiming that their filtering criteria were backed by intensive manual checking. The companies' opponents argued, on the other hand, that performing the necessary checking would require resources greater than the companies possessed and that therefore their claims were not valid. The Motion Picture Association successfully obtained a UK ruling enforcing ISPs to use content-control software to prevent copyright infringement by their subscribers. Religious, anti-religious, and political censorship Many types of content-control software have been shown to block sites based on the religious and political leanings of the company owners. Examples include blocking several religious sites (including the Web site of the Vatican), many political sites, and homosexuality-related sites. X-Stop was shown to block sites such as the Quaker web site, the National Journal of Sexual Orientation Law, The Heritage Foundation, and parts of The Ethical Spectacle. CYBERsitter blocks out sites like National Organization for Women. Nancy Willard, an academic researcher and attorney, pointed out that many U.S. public schools and libraries use the same filtering software that many Christian organizations use. Cyber Patrol, a product developed by The Anti-Defamation League and Mattel's The Learning Company, has been found to block not only political sites it deems to be engaging in 'hate speech' but also human rights web sites, such as Amnesty International's web page about Israel and gay-rights web sites, such as glaad.org. Content labeling Content labeling may be considered another form of content-control software. In 1994, the Internet Content Rating Association (ICRA) — now part of the Family Online Safety Institute — developed a content rating system for online content providers. Using an online questionnaire a webmaster describes the nature of their web content. A small file is generated that contains a condensed, computer readable digest of this description that can then be used by content filtering software to block or allow that site. ICRA labels come in a variety of formats. These include the World Wide Web Consortium's Resource Description Framework (RDF) as well as Platform for Internet Content Selection (PICS) labels used by Microsoft's Internet Explorer Content Advisor. ICRA labels are an example of self-labeling. Similarly, in 2006 the Association of Sites Advocating Child Protection (ASACP) initiated the Restricted to Adults self-labeling initiative. 
ASACP members were concerned that various forms of legislation being proposed in the United States were going to have the effect of forcing adult companies to label their content. The RTA label, unlike ICRA labels, does not require a webmaster to fill out a questionnaire or sign up to use. Like ICRA the RTA label is free. Both labels are recognized by a wide variety of content-control software. The Voluntary Content Rating (VCR) system was devised by Solid Oak Software for their CYBERsitter filtering software, as an alternative to the PICS system, which some critics deemed too complex. It employs HTML metadata tags embedded within web page documents to specify the type of content contained in the document. Only two levels are specified, mature and adult, making the specification extremely simple. Use in public libraries United States The use of Internet filters or content-control software varies widely in public libraries in the United States, since Internet use policies are established by the local library board. Many libraries adopted Internet filters after Congress conditioned the receipt of universal service discounts on the use of Internet filters through the Children's Internet Protection Act (CIPA). Other libraries do not install content control software, believing that acceptable use policies and educational efforts address the issue of children accessing age-inappropriate content while preserving adult users' right to freely access information. Some libraries use Internet filters on computers used by children only. Some libraries that employ content-control software allow the software to be deactivated on a case-by-case basis on application to a librarian; libraries that are subject to CIPA are required to have a policy that allows adults to request that the filter be disabled without having to explain the reason for their request. Many legal scholars believe that a number of legal cases, in particular Reno v. American Civil Liberties Union, established that the use of content-control software in libraries is a violation of the First Amendment. The Children's Internet Protection Act [CIPA] and the June 2003 case United States v. American Library Association found CIPA constitutional as a condition placed on the receipt of federal funding, stating that First Amendment concerns were dispelled by the law's provision that allowed adult library users to have the filtering software disabled, without having to explain the reasons for their request. The plurality decision left open a future "as-applied" Constitutional challenge, however. In November 2006, a lawsuit was filed against the North Central Regional Library District (NCRL) in Washington State for its policy of refusing to disable restrictions upon requests of adult patrons, but CIPA was not challenged in that matter. In May 2010, the Washington State Supreme Court provided an opinion after it was asked to certify a question referred by the United States District Court for the Eastern District of Washington: "Whether a public library, consistent with Article I, § 5 of the Washington Constitution, may filter Internet access for all patrons without disabling Web sites containing constitutionally-protected speech upon the request of an adult library patron." The Washington State Supreme Court ruled that NCRL's internet filtering policy did not violate Article I, Section 5 of the Washington State Constitution. 
The Court said: "It appears to us that NCRL's filtering policy is reasonable and accords with its mission and these policies and is viewpoint neutral. It appears that no article I, section 5 content-based violation exists in this case. NCRL's essential mission is to promote reading and lifelong learning. As NCRL maintains,
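Returning to the self-labeling schemes described earlier (ICRA, RTA and the VCR meta tags), a filter can act on such labels by inspecting a page's meta elements. The Python sketch below is illustrative only: the RTA string is the commonly published "Restricted to Adults" label value, while the "voluntary-rating" tag name and the level names are stand-ins for a VCR-style tag rather than the exact syntax used by any particular product.

from html.parser import HTMLParser

# Illustrative label values: the RTA string is the commonly published label;
# "voluntary-rating" and its levels are stand-ins, not a real product's syntax.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

class LabelScanner(HTMLParser):
    """Collect rating-related <meta> tags from a page."""
    def __init__(self):
        super().__init__()
        self.labels = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            name = (a.get("name") or "").lower()
            if name in ("rating", "voluntary-rating"):
                self.labels[name] = (a.get("content") or "").strip()

def should_block(html, blocked_levels=("mature", "adult")):
    """Block a page that self-labels as adult via an RTA or VCR-style tag."""
    scanner = LabelScanner()
    scanner.feed(html)
    if scanner.labels.get("rating", "") == RTA_LABEL:
        return True
    return scanner.labels.get("voluntary-rating", "").lower() in blocked_levels

page = '<html><head><meta name="RATING" content="RTA-5042-1996-1400-1577-RTA"></head></html>'
print(should_block(page))  # True: the page labels itself as restricted to adults

Self-labeling only works for sites that choose to label themselves, which is why such checks are normally combined with the list-based filtering techniques discussed elsewhere in this section.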
to this type of service is subject to restrictions. These types of filters can be used to implement government, regulatory or parental control over subscribers. Network-based filtering This type of filter is implemented at the transport layer as a transparent proxy, or at the application layer as a web proxy. Filtering software may include data loss prevention functionality to filter outbound as well as inbound information. All users are subject to the access policy defined by the institution. The filtering can be customized, so a school district's high school library can have a different filtering profile than the district's junior high school library. DNS-based filtering This type of filtering is implemented at the DNS layer and attempts to prevent lookups for domains that do not fit within a set of policies (either parental control or company rules). Multiple free public DNS services offer filtering options as part of their services. DNS sinkholes such as Pi-hole can also be used for this purpose, though client-side only. Search-engine filters Many search engines, such as Google and Bing, offer users the option of turning on a safety filter. When this safety filter is activated, it filters out the inappropriate links from all of the search results. If users know the actual URL of a website that features explicit or adult content, they have the ability to access that content without using a search engine. Some providers offer child-oriented versions of their engines that permit only child-friendly websites. Reasons for filtering The Internet does not intrinsically provide content blocking, and therefore there is much content on the Internet that is considered unsuitable for children, given that much content is given certifications as suitable for adults only, e.g. 18-rated games and movies. Internet service providers (ISPs) that block material containing pornography, or controversial religious, political, or news-related content en route are often utilized by parents who do not permit their children to access content not conforming to their personal beliefs. Content filtering software can, however, also be used to block malware and other content that is or contains hostile, intrusive, or annoying material including adware, spam, computer viruses, worms, trojan horses, and spyware. Most content control software is marketed to organizations or parents. It is, however, also marketed on occasion to facilitate self-censorship, for example by people struggling with addictions to online pornography, gambling, chat rooms, etc. Self-censorship software may also be utilised by some in order to avoid viewing content they consider immoral, inappropriate, or simply distracting. A number of accountability software products are marketed as self-censorship or accountability software. These are often promoted by religious media and at religious gatherings. Criticism Filtering errors Overblocking Utilizing a filter that is overly zealous at filtering content, or mislabels content not intended to be censored, can result in over blocking, or over-censoring. Over blocking can filter out material that should be acceptable under the filtering policy in effect, for example health related information may unintentionally be filtered along with porn-related material because of the Scunthorpe problem. Filter administrators may prefer to err on the side of caution by accepting over blocking to prevent any risk of access to sites that they determine to be undesirable.
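A DNS-layer filter of the kind described above can be sketched in a few lines. The Python example below is a toy model in the spirit of a DNS sinkhole such as Pi-hole; the blocklist, sinkhole address and upstream mapping are invented, and a real deployment would sit in front of, or replace, the network's recursive resolver.

# Names on a blocklist (and their subdomains) resolve to an unroutable address
# instead of their real one; everything else is passed to the upstream resolver.
BLOCKLIST = {"ads.example.com", "tracker.example.net"}
SINKHOLE_ADDRESS = "0.0.0.0"

def is_blocked(domain):
    """True if the domain itself or any parent domain is on the blocklist."""
    labels = domain.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))

def resolve(domain, upstream):
    """Answer blocked names with the sinkhole address, else ask upstream."""
    if is_blocked(domain):
        return SINKHOLE_ADDRESS
    return upstream(domain)

fake_upstream = {"www.example.org": "93.184.216.34"}.get
print(resolve("banner.ads.example.com", fake_upstream))  # 0.0.0.0 (sinkholed)
print(resolve("www.example.org", fake_upstream))         # 93.184.216.34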
Shetland The Shetland or Zetland group are relatively small passage graves that are round or heel-shaped in outline. The whole chamber is cross or trefoil-shaped and there are no smaller individual compartments. An example is to be found on the uninhabited island of Vementry on the north side of the West Mainland, where it appears that the cairn may have originally been circular and its distinctive heel shape added as a secondary development, a process repeated elsewhere in Shetland. This probably served to make the cairn more distinctive and the forecourt area more defined. Hebridean Like the Shetland cairn the Hebridean group appear relatively late in the Neolithic. They are largely found in the Outer Hebrides, although a mixture of cairn types are found here. These passage graves are usually larger than the Shetland type and are round or have funnel-shaped forecourts, although a few are long cairns – perhaps originally circular but with later tails added. They often have a polygonal chamber and a short passage to one end of the cairn. The Rubha an Dùnain peninsula on the island of Skye provides an example from the 2nd or 3rd millennium BC. Barpa Langass on North Uist is the best preserved chambered cairn in the Hebrides. Bargrennan Bargrennan chambered cairns are a class of passage graves found only in south-west Scotland, in western Dumfries and Galloway and southern Ayrshire. As well as being structurally different from the nearby Clyde cairns, Bargrennan cairns are distinguished by their siting and distribution; they are found in upland, inland areas of Galloway and Ayrshire. Bronze Age In addition to the increasing prominence of individual burials, during the Bronze Age regional differences in architecture in Scotland became more pronounced. The Clava cairns date from this period, with about 50 cairns of this type in the Inverness area. Corrimony chambered cairn near Drumnadrochit is an example dated to 2000 BC or older. The only surviving evidence of burial was a stain indicating the presence of a single body. The cairn is surrounded by a circle of 11 standing stones. The cairns at Balnuaran of Clava are of a similar date. The largest of the three is the north-east cairn, which was partially reconstructed in the 19th century; the central cairn may have been used as a funeral pyre ("A Visitors’ Guide to Balnuaran of Clava: A prehistoric cemetery", Historic Scotland, 2012; "The Cairns of Clava, Scottish Highlands", The Heritage Trail, retrieved 19 July 2012). Glebe cairn in Kilmartin Glen in Argyll dates from 1700 BC and has two stone cists, inside one of which a jet necklace was found during 19th-century excavations ("Kilmartin Glebe", Canmore, retrieved 4 August 2012). There are numerous prehistoric sites in the vicinity including Nether Largie North cairn, which was entirely removed and rebuilt during excavations in 1930. Wales
east and stone-chambered cairns in the west. During the later Neolithic (3300–2500 BC) massive circular enclosures and the use of grooved ware and Unstan ware pottery emerge. Scotland has a particularly large number of chambered cairns; they are found in various different types described below. Along with the excavations of settlements such as Skara Brae, Links of Noltland, Barnhouse, Rinyo and Balfarg and the complex site at Ness of Brodgar these cairns provide important clues to the character of civilization in Scotland in the Neolithic. However the increasing use of cropmarks to identify Neolithic sites in lowland areas has tended to diminish the relative prominence of these cairns. In the early phases bones of numerous bodies are often found together and it has been argued that this suggests that in death at least, the status of individuals was played down. During the late Neolithic henge sites were constructed and single burials began to become more commonplace; by the Bronze Age it is possible that even where chambered cairns were still being built they had become the burial places of prominent individuals rather than of communities as a whole. Clyde-Carlingford court cairns The Clyde or Clyde-Carlingford type are principally found in northern and western Ireland and southwestern Scotland. They first were identified as a separate group in the Firth of Clyde region, hence the name. Over 100 have been identified in Scotland alone. Lacking a significant passage, they are a form of gallery grave. The burial chamber is normally located at one end of a rectangular or trapezoidal cairn, while a roofless, semi-circular forecourt at the entrance provided access from the outside (although the entrance itself was often blocked), and gives this type of chambered cairn its alternate name of court tomb or court cairn. These forecourts are typically fronted by large stones and it is thought the area in front of the cairn was used for public rituals of some kind. The chambers were created from large stones set on end, roofed with large flat stones and often sub-divided by slabs into small compartments. They are generally considered to be the earliest in Scotland. Examples include Cairn Holy I and Cairn Holy II near Newton Stewart, a cairn at Port Charlotte, Islay, which dates to 3900–4000 BC, and Monamore, or Meallach's Grave, Arran, which may date from the early fifth millennium BC. Excavations at the Mid Gleniron cairns near Cairnholy revealed a multi-period construction which shed light on the development of this class of chambered cairn. Orkney-Cromarty The Orkney-Cromarty group is by far the largest and most diverse. It has been subdivided into Yarrows, Camster and Cromarty subtypes but the differences are extremely subtle. The design is of dividing slabs at either side of a rectangular chamber, separating it into compartments or stalls. The number of these compartments ranges from 4 in the earliest examples to over 24 in an extreme example on Orkney. The actual shape of the cairn varies from simple circular designs to elaborate 'forecourts' protruding from each end, creating what look like small amphitheatres. It is likely that these are the result of cultural influences from mainland Europe, as they are similar to designs found in France and Spain. Examples include Midhowe on Rousay and Unstan Chambered Cairn from the Orkney Mainland, both of which date from the mid 4th millennium BC and were probably in use over long periods of time. 
When the latter was excavated in 1884, grave goods were found that gave their name to Unstan ware pottery. Blackhammer cairn on Rousay is another example dating
distillery in 1864, which he would eventually purchase in 1883. Meanwhile, Americans Hiram Walker and J.P. Wiser moved to Canada: Walker to Windsor in 1858 to open a flour mill and distillery and Wiser to Prescott in 1857 to work at his uncle's distillery where he introduced a rye whisky and was successful enough to buy the distillery five years later. The disruption of American Civil War created an export opportunity for Canadian-made whiskies and their quality, particularly those from Walker and Wiser who had already begun the practice of aging their whiskies, sustained that market even after post-war tariffs were introduced. In the 1880s, Canada's National Policy placed high tariffs on foreign alcoholic products as whisky began to be sold in bottles and the federal government instituted a bottled in bond program that provided certification of the time a whisky spent aging and allowed deferral of taxes for that period, which encouraged aging. In 1890 Canada became the first country to enact an aging law for whiskies, requiring them to be aged at least two years. The growing temperance movement culminated in prohibition in 1916 and distilleries had to either specialize in the export market or switch to alternative products, like industrial alcohols which were in demand in support of the war effort. With the deferred revenue and storage costs of the Aging Law acting as a barrier to new entrants and the reduced market due to prohibition, consolidation of Canadian whisky had begun. Henry Corby Jr. modernized and expanded upon his father's distillery and sold it, in 1905, to businessman Mortimer Davis who also purchased the Wiser distillery, in 1918, from the heirs of J.P. Wiser. Davis's salesman Harry Hatch spent time promoting the Corby and Wiser brands and developing a distribution network in the United States which held together as Canadian prohibition ended and American prohibition began. After Hatch's falling out with Davis, Hatch purchased the struggling Gooderham and Worts in 1923 and switched out Davis's whisky for his. Hatch was successful enough to be able to also purchase the Walker distillery, and the popular Canadian Club brand, from Hiram's grandsons in 1926. While American prohibition created risk and instability in the Canadian whisky industry, some benefited from purchasing unused American distillation equipment and from sales to exporters (nominally to foreign countries like Saint Pierre and Miquelon, though actually to bootleggers to the United States). Along with Hatch, the Bronfman family was able to profit from making whisky destined for United States during prohibition, though mostly in Western Canada and were able to open a distillery in LaSalle, Quebec and merge their company, in 1928, with Seagram's which had struggled with transitioning to the prohibition marketplace. Samuel Bronfman became president of the company and, with his dominant personality, began a strategy of increasing their capacity and aging whiskies in anticipation of the end of prohibition. When that did occur, in 1933, Seagram's was in a position to quickly expand; they purchased The British Columbia Distilling Company from the Riefel family in 1935, as well as several American distilleries and introduced new brands, one of them being Crown Royal, in 1939, which would eventually become one of the best-selling Canadian whiskies. 
While some capacity was switched to producing industrial alcohols in support of the country's World War II efforts, the industry expanded again after the war until the 1980s. In 1945, Schenley Industries purchased one of those industrial alcohol distilleries in Valleyfield, Quebec, and repurposed several defunct American whiskey brands, like Golden Wedding, Old Fine Copper, and starting in 1972, Gibson's Finest. Seeking to secure their supply of Canadian whisky, Barton Brands also built a new distillery in Collingwood, Ontario, in 1967, where they would produce Canadian Mist, though they sold the distillery and brand only four years later to Brown–Forman. As proximity to the shipping routes (by rail and boat) to the US became less important, large distilleries were established in Alberta and Manitoba. Five years after starting to experiment with whiskies in their Toronto gin distillery, W. & A. Gilbey Ltd. created the Black Velvet blend in 1951 which was so successful a new distillery in Lethbridge, Alberta was constructed in 1973 to produce it. Also in the west, a Calgary-based business group recruited the Riefels from British Columbia to oversee their Alberta Distillers operations in 1948. The company became an innovator in the practice of bulk shipping whiskies to the United States for bottling and the success of their Windsor Canadian brand (produced in Alberta but bottled in the United States) led National Distillers Limited to purchase Alberta Distillers, in 1964, to secure their supply chain. More Alberta investors founded the Highwood Distillery in 1974 in High River, Alberta, which specialized in wheat-based whiskies. Seagram's opened a large, new plant in Gimli, Manitoba, in 1969, which would eventually replace their Waterloo and LaSalle distilleries. In British Columbia, Ernie Potter who had been producing fruit liqueurs from alcohols distilled at Alberta Distillers built his own whisky distillery in Langley in 1958 and produced the Potter's and Century brands of whisky. Hiram Walker's built the Okanagan Distillery in Winfield, British Columbia, in 1970 with the intention of producing Canadian Club but was redirected to fulfill contracts to produce whiskies for Suntory before being closed in 1995. After decades of expansion, a shift in consumer preferences towards white spirits (such as vodka) in the American market resulted in an excess supply of Canadian whiskies. While this allowed the whiskies to be aged longer, the unexpected storage costs and deferred revenue strained individual companies. With the distillers seeking investors and multinational corporations seeking value brands, a series of acquisitions and mergers occurred. Alberta Distillers was bought in 1987 by Fortune Brands which would go on to become part of Beam Suntory. Hiram Walker was sold in 1987 to Allied Lyons which Pernod Ricard took over in 2006, with Fortune Brands acquiring the Canadian Club brand. Grand Metropolitan had purchased Black Velvet in 1972 but sold the brand in 1999 to Constellation Brands who in turn sold it to Heaven Hill in 2019. Schenley was acquired in 1990 by United Distillers which would go on to become part of Diageo, though Gibson's Finest was sold
by volume" and "may contain caramel and flavouring". Within these parameters Canadian whiskies can vary considerably, especially with the allowance of "flavouring"—though the additional requirement that they "possess the aroma, taste and character generally attributed to Canadian whisky" can act as a limiting factor. Canadian whiskies are most typically blends of whiskies made from a single grain, principally corn and rye, but also sometimes wheat or barley. Mash bills of multiple grains may also be used for some flavouring whiskies. The availability of inexpensive American corn, with its higher proportion of usable starches relative to other cereal grains, has led it to be most typically used to create base whiskies to which flavouring whiskies are blended in. Exceptions to this include the Highwood Distillery which specializes in using wheat and the Alberta Distillers which developed its own proprietary yeast strain that specializes in distilling rye. The flavouring whiskies are most typically rye whiskies, blended into the product to add most of its flavour and aroma. While Canadian whisky may be labelled as a "rye whisky" this blending technique only necessitates a small percentage (such as 10%) of rye to create the flavour, whereas much more rye would be required if it were added to a mash bill alongside the more readily distilled corn. The base whiskies are distilled to between 180 and 190 proof which results in few congener by-products (such as fusel alcohol, aldehydes, esters, etc.) and creates a lighter taste. By comparison, an American whisky distilled any higher than 160 proof is labelled as "light whiskey". The flavouring whiskies are distilled to a lower proof so that they retain more of the grain's flavour. The relative lightness created by the use of base whiskies makes Canadian whisky useful for mixing into cocktails and highballs. The minimum three year aging in small wood barrels applies to all whiskies used in the blend. As the regulations do not limit the specific type of wood that must be used, a variety of flavours can be achieved by blending whiskies aged in different types of barrels. In addition to new wood barrels, charred or uncharred, flavour can be added by aging whiskies in previously used bourbon or fortified wine barrels for different lengths of time. History In the 18th and early 19th centuries, gristmills distilled surplus grains to avoid spoilage. Most of these early whiskies would have been rough, mostly unaged wheat whiskey. Distilling methods and technologies were brought to Canada by American and European immigrants with experience in distilling wheat and rye. This early whisky from improvised stills, often with the grains closest to spoilage, was produced with various, uncontrolled proofs and was consumed, unaged, by the local market. While most distilling capacity was taken up producing rum, a result of Atlantic Canada's position in the British sugar trade, the first commercial scale production of whisky in Canada began in 1801 when John Molson purchased a copper pot still, previously used to produce rum, in Montreal. With his son Thomas Molson, and eventually partner James Morton, the Molsons operated a distillery in Montreal and Kingston and were the first in Canada to export whisky, benefiting from Napoleonic Wars' disruption in supplying French wine and brandies to England. 
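The blending arithmetic described above is straightforward to illustrate. The component names, volumes and distillation strengths in the following Python sketch are invented for the example; it simply shows how a roughly 10% share of lower-proof rye flavouring whisky sits alongside a high-proof corn base whisky, using the US convention that proof is twice the percentage of alcohol by volume.

# Illustrative arithmetic for the blending described above. The component
# names, volumes and distillation strengths are invented for the example;
# "proof" here is US proof, i.e. twice the percentage of alcohol by volume.
components = [
    # (name, litres of spirit in the blend, distillation proof)
    ("corn base whisky", 90.0, 185.0),
    ("rye flavouring whisky", 10.0, 130.0),
]

total_volume = sum(vol for _, vol, _ in components)
for name, vol, proof in components:
    print(f"{name}: {vol / total_volume:.0%} of the blend, "
          f"distilled at {proof:.0f} proof ({proof / 2:.1f}% ABV)")

# Simple volume-weighted approximation of the blended spirit's strength,
# ignoring mixing effects; water is added later to reach bottling strength.
blend_abv = sum(vol * proof / 2 for _, vol, proof in components) / total_volume
print(f"blend before dilution: about {blend_abv:.1f}% ABV")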
Gooderham and Worts began producing whisky in 1837 in Toronto as a side business to their wheat milling but surpassed Molson's production by the 1850s as it expanded their operations with a new distillery in what would become the Distillery District. Henry Corby started distilling whisky as a side business from his gristmill in 1859 in what became known as Corbyville and Joseph Seagram began working in his father-in-law's Waterloo flour mill and distillery in 1864, which he would eventually purchase in 1883. Meanwhile, Americans Hiram Walker and J.P. Wiser moved to Canada: Walker to Windsor in 1858 to open a flour mill and distillery and Wiser to Prescott in 1857 to work at his uncle's distillery where he introduced a rye whisky and was successful enough to buy the distillery five years later. The disruption of American Civil War created an export opportunity for Canadian-made whiskies and their quality, particularly those from Walker and Wiser who had already begun the practice of aging their whiskies, sustained that market even after post-war tariffs were introduced. In the 1880s, Canada's National Policy placed high tariffs on foreign alcoholic products as whisky began to be sold in bottles and the federal government instituted a bottled in bond program that provided certification of the time a whisky spent aging and allowed deferral of taxes for that period, which encouraged aging. In 1890 Canada became the first country to enact an aging law for whiskies, requiring them to be aged at least two years. The growing temperance movement culminated in prohibition in 1916 and
using "terms of venery" or "nouns of assembly," collective nouns that are specific to certain kinds of animals, stems from an English hunting tradition of the Late Middle Ages. The fashion of a consciously developed hunting language came to England from France. It was marked by an extensive proliferation of specialist vocabulary, applying different names to the same feature in different animals. The elements can be shown to have already been part of French and English hunting terminology by the beginning of the 14th century. In the course of the 14th century, it became a courtly fashion to extend the vocabulary, and by the 15th century, the tendency had reached exaggerated and even satirical proportions. The Treatise, written by Walter of Bibbesworth in the mid-1200s, is the earliest source for collective nouns of animals in any European vernacular (and also the earliest source for animal noises). The Venerie of Twiti (early 14th century) distinguished three types of droppings of animals, and three different terms for herds of animals. Gaston Phoebus (14th century) had five terms for droppings of animals, which were extended to seven in the Master of the Game (early 15th century). The focus on collective terms for groups of animals emerged in the later 15th century. Thus, a list of collective nouns in Egerton MS 1995, dated to c. 1452 under the heading of "termis of venery &c.", extends to 70 items, and the list in the Book of Saint Albans (1486) runs to 164 items, many of which, even though introduced by "the compaynys of beestys and fowlys", relate not to venery but to human groups and professions and are clearly humorous, such as "a Doctryne of doctoris", "a Sentence of Juges", "a Fightyng of beggers", "an uncredibilite of Cocoldis", "a Melody of harpers", "a Gagle of women", "a Disworship of Scottis", etc. The Book of Saint Albans became very popular during the 16th century and was reprinted frequently. Gervase Markham edited and commented on the list in his The Gentleman's Academic, in 1595. The book's popularity had the effect of perpetuating many of these terms as part of the Standard English lexicon even if they were originally meant to be humorous and have long ceased to have any practical application. Even in their original context of medieval venery, the terms were of the nature of kennings, intended as a mark of erudition of the gentlemen able to use them correctly rather than for practical communication. The popularity of the terms in the modern period has resulted in the addition of numerous lighthearted, humorous or facetious collective nouns. See also Linguistics concepts Grammatical number Mass noun Measure words Plural Plurale tantum Synesis Lists List of animal names, including names for groups Interdisciplinary Social unit Further reading Hodgkin, John. "Proper Terms: An attempt at a rational explanation of the meanings of the Collection of Phrases in 'The Book of St Albans', 1486, entitled 'The Compaynys of besties and fowls and similar lists", Transactions of the Philological Society 1907–1910 Part III, pp. 1–187, Kegan, Paul, Trench & Trübner & Co, Ltd, London, 1909. Shulman, Alon. A Mess of Iguanas... A Whoop of Gorillas: An Amazement of Animal Facts. Penguin. (First published Penguin 2009.) . Lipton, James. An Exaltation of Larks, or The "Veneral" Game. Penguin. (First published Grossman Publishers 1968.) 
(Penguin first reprint 1977); in 1993 it was republished by Penguin with The Ultimate Edition as part of the title, in paperback and hardcover editions. PatrickGeorge. A filth of starlings. PatrickGeorge. (First
nouns (e.g., "The team have finished the project."). Conversely, in the English language as a whole, singular verb forms can often be used with nouns ending in "-s" that were once considered plural (e.g., "Physics is my favorite academic subject"). This apparent "number mismatch" is a natural and logical feature of human language, and its mechanism is a subtle metonymic shift in the concepts underlying the words. In British English, it is generally accepted that collective nouns can take either singular or plural verb forms depending on the context and the metonymic shift that it implies. For example, "the team is in the dressing room" (formal agreement) refers to the team as an ensemble, while "the team are fighting among themselves" (notional agreement) refers to the team as individuals. That is also the British English practice with names of countries and cities in sports contexts (e.g., "Newcastle have won the competition."). In American English, collective nouns almost always take singular verb forms (formal agreement). In cases that a metonymic shift would be revealed nearby, the whole sentence should be recast to avoid the metonymy. (For example, "The team are fighting among themselves" may become "the team members are fighting among themselves" or simply "The team is infighting.") Collective proper nouns are usually taken as singular ("Apple is expected to release a new phone this year"), unless the plural is explicit in the proper noun itself, in which case it is taken as plural ("The Green Bay Packers are scheduled to play the Minnesota Vikings this weekend"). More explicit examples of collective proper nouns include "General Motors is once again the world's largest producer of vehicles," and "Texas Instruments is a large producer of electronics here," and "British Airways is an airline company in Europe." Furthermore, "American Telephone & Telegraph is a telecommunications company in North America." Such phrases might look plural, but they are not. Examples of metonymic shift A good example of such a metonymic shift in the singular-to-plural direction (which exclusively takes place in British English) is the following sentence: "The team have finished the project." In that sentence, the underlying thought is of the individual members of the team working together to finish the project. Their accomplishment is collective, and the emphasis is not on their individual identities, but they are still discrete individuals; the word choice "team have" manages to convey both their collective and discrete identities simultaneously. Collective nouns that have a singular form but take a plural verb form are called collective plurals. A good example of such a metonymic shift in the plural-to-singular direction is the following sentence: "Mathematics is my favorite academic subject." The word "mathematics" may have originally been plural in concept, referring to mathematic endeavors, but metonymic shift (the shift in concept from "the endeavors" to "the whole set of endeavors") produced the usage of "mathematics" as a singular entity taking singular verb forms. (A true mass-noun sense of "mathematics" followed naturally.) Nominally singular pronouns can be collective nouns taking plural verb forms, according to the same rules that apply to other collective nouns. For example, it is correct usage in both British English and American English usage to say: "None are so fallible as those who are sure they're right." 
In that case, the plural verb is used because the context for "none" suggests more than one thing or person. This also applies to the use of an adjective as a collective noun: "The British are coming!"; "The poor will always be with you." Other examples include: "Creedence Clearwater Revival was founded in El Cerrito, California" (but in British English, "Creedence Clearwater Revival were founded ...") "Arsenal have won the match" (but in American English, "Arsenal has won the game") "Nintendo is a video game company headquartered in Japan". This does not, however, affect the tense later in the sentence: "Cream is a psychedelic rock band who were primarily popular in the 1960s. Abbreviations provide other "exceptions" in American usage concerning plurals: "Runs Batted In" becomes "RBIs". "Smith had 10 RBIs in the last three games." "Revised Statutes Annotated" or RSAs. "The RSAs contain our laws." When only the name is plural but not the object, place, or person: "The bends is a deadly disease mostly affecting SCUBA divers." "Hot Rocks is a greatest hits compilation by The Rolling Stones" Terms of venery The tradition of using "terms of venery" or "nouns of assembly," collective nouns that are specific to certain kinds of animals, stems from an English hunting tradition of the Late Middle Ages. The fashion of a consciously developed hunting language came to England from France. It was marked by an extensive proliferation of specialist vocabulary, applying different names to the same feature in different animals. The elements can be shown to have already been part of French and English hunting terminology by the beginning of the 14th century. In the course of the 14th century, it became a courtly fashion to extend the vocabulary, and by the 15th century, the tendency had reached exaggerated and even satirical proportions. The Treatise, written by Walter of Bibbesworth in the mid-1200s, is the earliest source for collective nouns of animals in any European vernacular
measure jewelry, because it was believed that there was little variance in their mass distribution. This belief was inaccurate, however: their mass varies about as much as that of the seeds of other species. In the past, each country had its own carat. It was often used for weighing gold. Beginning in the 1570s, it was used to measure weights of diamonds. Standardization An 'international carat' of 205 milligrams was proposed in 1871 by the Syndical Chamber of Jewellers, etc., in Paris, and accepted in 1877 by the Syndical Chamber of Diamond Merchants in Paris. A metric carat of 200 milligrams – exactly one-fifth of a gram – had often been suggested in various countries, and was finally proposed by the International Committee of Weights and Measures, and unanimously accepted at the fourth sexennial General Conference of the Metric Convention held in Paris in October 1907. It was soon made compulsory by law in France, but uptake of the new carat was slower in England, where its use was allowed by the Weights and Measures (Metric System) Act of 1897. Historical definitions UK Board of Trade In the United Kingdom the original Board of Trade carat was exactly grains (~3.170 grains = ~205 mg); in 1888, the Board of Trade carat was changed to exactly grains (~3.168 grains = ~205 mg). Despite its being a non-metric unit, a number of metric countries have used this unit for its limited range of application. The Board of Trade carat was divisible into four diamond grains, but measurements were
15 grains troy each. Likewise, the ounce troy was divisible into 24 ounce carats of 20 grains troy each; the ounce carat was divisible into four ounce grains of 5 grains troy each; and the ounce grain was divisible into four ounce quarters of grains troy each. Greco-Roman The solidus was also a Roman weight unit. There is literary evidence that the weight of 72 coins of the type called solidus was exactly 1 Roman pound, and that the weight of 1 solidus was 24 siliquae. The weight of a Roman pound is generally believed to have been 327.45 g or possibly up to 5 g less. Therefore, the metric equivalent of 1 siliqua was approximately 189 mg. The Greeks had a similar unit of the same value. Gold fineness in carats comes from carats and grains of gold in a solidus of coin. The conversion rates 1 solidus = 24 carats, 1 carat = 4 grains still stand. Woolhouse's Measures, Weights and Moneys of all Nations gives gold fineness in carats of 4 grains, and silver in troy pounds of 12 troy ounces of 20
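The conversions quoted above are simple arithmetic, and a short illustrative sketch (in Python; the helper names are ours and not from any standard library) makes the figures easy to check: the siliqua of roughly 189 mg follows from dividing the stated Roman pound by 72 solidi and then by 24 siliquae, and gold fineness follows from the 24-carat, 4-grain subdivision of the solidus.

# Illustrative arithmetic for the carat figures quoted above; the names are ours
# and the Roman-pound value (327.45 g) is taken directly from the text.
ROMAN_POUND_G = 327.45          # grams, per the text
SOLIDUS_PER_POUND = 72          # 72 solidi weighed one Roman pound
SILIQUAE_PER_SOLIDUS = 24       # 1 solidus = 24 siliquae

solidus_g = ROMAN_POUND_G / SOLIDUS_PER_POUND          # about 4.55 g
siliqua_mg = solidus_g / SILIQUAE_PER_SOLIDUS * 1000   # about 189 mg, matching the text

METRIC_CARAT_MG = 200           # the 1907 metric carat: exactly one-fifth of a gram

def fineness_fraction(carats, grains=0):
    # Gold fineness from carats (of 24) and grains (of 4 per carat), as a fraction of pure gold.
    return (carats * 4 + grains) / (24 * 4)

print(round(siliqua_mg), fineness_fraction(18))   # 189, 0.75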
des télécommunications. CEPT was responsible for the creation of the European Telecommunications Standards Institute (ETSI) in 1988. CEPT is organised into three main components: Electronic Communications Committee (ECC) - responsible for radiocommunications and telecommunications matters and formed by the merger of ECTRA (European Committee for Telecommunications Regulatory Affairs) and ERC (European Radiocommunications Committee) in September 2001 The permanent secretariat of the ECC is the European Communications Office (ECO) European Committee for Postal Regulation (CERP, after the French "Comité européen des régulateurs postaux") - responsible for postal matters The Committee for ITU Policy (Com-ITU) is responsible for organising the co-ordination of CEPT actions for the preparation for and during the course of the ITU activities meetings of the Council, Plenipotentiary Conferences, World Telecommunication Development Conferences, World Telecommunication Standardisation Assemblies Member countries As of March 2019: 48 countries. Albania, Andorra, Austria, Azerbaijan, Belarus, Belgium, Bosnia and Herzegovina, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia,
Liechtenstein, Lithuania, Luxembourg, Malta, Moldova, Monaco, Montenegro, Netherlands, North Macedonia, Norway, Poland, Portugal, Romania, Russian Federation, San Marino, Serbia, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, Ukraine, United Kingdom, Vatican City. See also Europa postage stamp CEPT Recommendation T/CD 06-01 (standard for videotex) E-carrier (standard for multiplexed telephone circuits) International Telecommunication Union LPD433 PMR446 SRD860 Universal Postal Union WiMAX African Telecommunications Union (ATU) Asia-Pacific Telecommunity (APT) Caribbean Postal Union (CPU) Caribbean Telecommunications Union (CTU) Inter-American Telecommunication Commission (CITEL) Postal Union of the Americas, Spain and Portugal Notes External links ECC
Former lines reused Tramlink makes use of a number of National Rail lines, running parallel to franchised services, or, in some cases, runs on previously abandoned railway corridors. Between Birkbeck and Beckenham Junction, Tramlink uses the Crystal Palace line, running on a single track alongside the track carrying Southern rail services. The National Rail track had been singled some years earlier. From Elmers End to Woodside, Tramlink follows the former Addiscombe Line. At Woodside, the old station buildings stand disused, and the original platforms have been replaced by accessible low platforms. Tramlink then follows the former Woodside and South Croydon Railway (W&SCR) to reach the current Addiscombe tram stop, adjacent to the site of the demolished Bingham Road railway station. It continues along the former railway route to near Sandilands, where Tramlink curves sharply towards Sandilands tram stop. Another route from Sandilands tram stop curves sharply on to the W&SCR before passing through Park Hill (or Sandilands) tunnels and to the site of Coombe Road station, after which it curves away across Lloyd Park. Between Wimbledon station and Wandle Park, Tramlink follows the former West Croydon to Wimbledon Line, which was first opened in 1855 and closed on 31 May 1997 to allow for conversion into Tramlink. Within this section, from near Phipps Bridge to near Reeves Corner, Tramlink follows the Surrey Iron Railway, giving Tramlink a claim to one of the world's oldest railway alignments. Beyond Wandle Park, a Victorian footbridge beside Waddon New Road was dismantled to make way for the flyover over the West Croydon to Sutton railway line. The footbridge has been re-erected at Corfe Castle station on the Swanage Railway (although some evidence suggests that this was a similar footbridge removed from the site of Merton Park railway station). Feeder buses Bus routes T31, T32 and T33 used to connect with Tramlink at the New Addington, Fieldway and Addington Village stops. T31 and T32 no longer run, and T33 has been renumbered as 433. Rolling stock Current fleet Tramlink currently uses 35 trams. In summary: Bombardier CR4000 The original fleet comprised 24 articulated low-floor Bombardier Flexity Swift CR4000 trams, built in Vienna and numbered from 2530, continuing from the highest-numbered tram (2529) on London's former tram network, which closed in 1952. The original livery was red and white. One (2550) was painted in FirstGroup white, blue and pink livery. In 2006, the CR4000 fleet was refreshed, with the bus-style destination roller blinds being replaced with a digital dot-matrix display. In 2008/09 the fleet was repainted externally in the new green livery and the interiors were refurbished with new flooring, seat covers retrimmed in a new moquette and stanchions repainted from yellow to green. One (2551) has not returned to service after the fatal accident on 9 November 2016. In 2007 tram 2535 was named after Steven Parascandolo, a well-known tram enthusiast. Croydon Variobahn In January 2011, Tramtrack Croydon invited tenders for the supply of new or second-hand trams, and on 18 August 2011, TfL announced that Stadler Rail had won a $19.75 million contract to supply six Variobahn trams similar to those used by Bybanen in Bergen, Norway. They entered service in 2012. In August 2013, TfL ordered an additional four Variobahn trams for delivery in 2015, an order which was later increased to six. This brought the total Variobahn fleet up to ten in 2015, and 12 in 2016 when the final two trams were delivered.
Ancillary vehicles Engineers' vehicles used in Tramlink construction were hired for that purpose. In November 2006 Tramlink purchased five second-hand engineering vehicles from Deutsche Bahn. These were two engineers' trams (numbered 058 and 059 in Tramlink service), and three 4-wheel wagons (numbered 060, 061, and 062). Service tram 058 and trailer 061 were both sold to the National Tramway Museum in 2010. Fares and ticketing TfL Bus & Tram Passes are valid on Tramlink, as are Travelcards that include any of zones 3, 4, 5 and 6. Pay-as-you-go Oyster Card fares are the same as on London Buses, although special fares may apply when using Tramlink feeder buses. When using Oyster cards, passengers must touch in on the platform before boarding the tram. Special arrangements apply at Wimbledon station, where the Tramlink stop is within the National Rail and London Underground station. Tramlink passengers must therefore touch in at the station entry barriers then again at the Tramlink platform to inform the system that no mainline/LUL rail journey has been made. EMV contactless payment cards can also be used to pay for fares in the same manner as Oyster cards. Ticket machines were withdrawn on 16 July 2018. Services Onboard announcements The onboard announcements are by BBC News reader (and tram enthusiast) Nicholas Owen. The announcement pattern is as follows: e.g. This tram is for Wimbledon; the next stop will be Merton Park. Corporate affairs Ownership and structure The service was created as a result of the Croydon Tramlink Act 1994 that received Royal Assent on 21 July 1994, a Private Bill jointly promoted by London Regional Transport (the predecessor of Transport for London (TfL)) and Croydon London Borough Council. Following a competitive tender, a consortium company Tramtrack Croydon Limited (incorporated in 1995) was awarded a 99-year concession to build and run the system. Since 28 June 2008, the company has been a subsidiary of TfL. Tramlink is currently operated by Tram Operations Ltd (TOL), a subsidiary of FirstGroup, who have a contract to operate the service until 2030. TOL provides the drivers and management to operate the service; the infrastructure and trams are owned and maintained by a TfL subsidiary. Business trends The key available trends in recent years for Tramlink are (years ending 31 March): Activities in the financial year 2020/21 were severely reduced by the impact of the coronavirus pandemic. Passenger numbers Detailed passenger journeys since Tramlink commenced operations in May 2000 were: Future developments Sutton Link As of 2020, the only extension actively being pursued by the Mayor of London and TfL is a new line to Sutton from Wimbledon or Colliers Wood, known as the Sutton Link. In July 2013, then Mayor Boris Johnson had affirmed that there was a reasonable business case for Tramlink to cover the Wimbledon – Sutton corridor, which might also include a loop via St Helier Hospital and an extension to The Royal Marsden Hospital. In 2014, a proposed £320m scheme for a new line to connect Wimbledon to Sutton via Morden was made and brought to consultation jointly by the London Boroughs of Merton and Sutton. Although £100m from TfL was initially secured in the draft 2016/17 budget, this was subsequently reallocated. In 2018, TfL opened a consultation on proposals for a connection to Sutton, with three route options: from South Wimbledon, from Colliers Wood (both having an option of a bus rapid transit route or a tram line) or from Wimbledon (only as a tram line). 
In February 2020, following the consultation, TfL announced their preference for a north–south tramway between Colliers Wood and Sutton town centre, with a projected cost of £425m, on the condition of securing additional funding. Work on the project stopped in July 2020, as Transport for London could not find sufficient funding for it to continue. Previous proposals Numerous extensions to the network have been discussed or proposed over the years, involving varying degrees of support and investigative effort. In 2002, as part of The Mayor's Transport Strategy for London, a number of proposed extensions were identified, including to Sutton from Wimbledon or Mitcham; to Crystal Palace; to Colliers Wood/Tooting; and along the A23. The Strategy said that "extensions to the network could, in principle, be developed at relatively modest cost where there is potential demand..." and sought initial views on the viability of a number of extensions by summer 2002. In 2006, in a TfL consultation on an extension to Crystal Palace, three options were presented: on-street, off-street and a mixture of the two. After the consultation, the off-street option was favoured, to include Crystal Palace Station and Crystal Palace Parade. TfL stated in 2008 that due to lack of funding the plans for this extension would not be taken forward. They were revived shortly after Boris Johnson's re-election as Mayor in May 2012, but six months later they were cancelled again. In November 2014, a 15-year plan, Trams 2030, called for upgrades to increase capacity on the network in line with an expected increase in ridership to 60 million passengers by 2031 (although the passenger numbers at the time (2013/14: 31.2 million) have not been exceeded since (as at 2019)). The upgrades were to improve reliability, support regeneration in the Croydon metropolitan centre, and future-proof the network for Crossrail 2, a potential Bakerloo line extension, and extensions to the tram network itself to a wide variety of destinations. The plans involve dual-tracking across the network and introducing diverting loops on either side of Croydon, allowing for a higher frequency of trams on all four branches without increasing congestion in central Croydon. The £737m investment was to be funded by the Croydon Growth Zone, TfL Business Plan, housing levies, and the respective boroughs, and by the affected developers. All the various developments, if implemented, could theoretically require an increase in the fleet from 30 to up to 80 trams (depending on whether longer trams or coupled trams are used). As such, an increase in depot and stabling capacity would also be required; enlargement of the current Therapia Lane site, as well as sites near the Elmers End and Harrington Road tram stops, were shortlisted. Accidents and incidents On 7 September 2008, a bus on route 468 travelled through a red traffic signal and collided with tram 2534 in George Street, Croydon, causing a fatality. The driver of the bus was convicted of causing death by dangerous driving a year later in December 2009 and was sentenced to four years in prison. On 13 September 2008, tram 2530 collided with a cyclist at Morden Hall Park footpath crossing between the Morden Road and Phipps Bridge tram stops. The cyclist sustained serious injuries and later died.
The first tram was delivered in October 1998 to the new Depot at Therapia Lane and testing on the sections of the Wimbledon line began shortly afterwards. Opening The official opening of Tramlink took place on 10 May 2000 when route 3 from Croydon to New Addington opened to the public. Route 2 from Croydon to Beckenham Junction followed on 23 May 2000, and route 1 from Elmers End to Wimbledon opened a week later on 30 May 2000. Buyout by Transport for London In March 2008, TfL announced that it had reached agreement to buy TC for £98 million. The purchase was finalised on 28 June 2008. The background to this purchase relates to the requirement that TfL (who took over from London Regional Transport in 2000) compensates TC for the consequences of any changes to the fares and ticketing policy introduced since 1996. In 2007 that payment was £4m, with an annual increase in rate. FirstGroup continues to operate the service. In October 2008 TfL introduced a new livery, using the blue, white and green of the routes on TfL maps, to distinguish the trams from buses operating in the area. The colour of the cars was changed to green, and the brand name was changed from Croydon Tramlink to simply Tramlink. These refurbishments were completed in early 2009. Additional stop and trams Centrale tram stop, in Tamworth Road on the one-way central loop, opened on 10 December 2005, increasing journey times slightly. As turnround times were already quite tight, this raised the issue of buying an extra tram to maintain punctuality. Partly for this reason but also to take into account the planned restructuring of services (subsequently introduced in July 2006), TfL issued tenders for a new tram. However, nothing resulted from this. In January 2011, Tramtrack Croydon opened a tender for the supply of 10 new or second-hand trams from the end of summer 2011, for use between Therapia Lane and Elmers End. On 18 August 2011, TfL announced that Stadler Rail had won a $19.75 million contract to supply six Variobahn trams similar to those used by Bybanen in Bergen, Norway. They entered service in 2012. In August 2013, TfL ordered an additional four Variobahns for delivery in 2015, for use on the Wimbledon to Croydon link, an order later increased to six. This brought the total Variobahn fleet up to ten in 2015, and twelve in 2016 when the final two trams were delivered. Current network Stops There are 39 stops, with 38 opened in the initial phase, and Centrale tram stop added on 10 December 2005. Most stops are long. They are virtually level with the doors and are all wider than . This allows wheelchairs, prams, pushchairs and the elderly to board the tram easily with no steps. In street sections, the stop is integrated with the pavement. The tram stops have low platforms, above rail level. They are unstaffed and had automated ticket machines that are no longer in use due to TfL making trams cashless. In general, access between the platforms involves crossing the tracks by pedestrian level crossing. Tramlink uses some former main-line stations on the Wimbledon–West Croydon and Elmers End–Coombe Lane stretches of line. The railway platforms have been demolished and rebuilt to Tramlink specifications, except at Elmers End and Wimbledon where the track level was raised to meet the higher main-line platforms to enable cross-platform interchange. 
All stops have disabled access, raised paving, CCTV, a Passenger Help Point, a Passenger Information Display (PID), litter bins, a ticket machine, a noticeboard and lamp-posts, and most also have seats and a shelter. The PIDs display the destinations and expected arrival times of the next two trams. They can also display other messages from the control room, such as information on delays or safety warnings against placing rubbish or other objects on the track. Routes Tramlink has been shown on the principal tube map since 1 June 2016, having previously appeared only on the "London Connections" map. When Tramlink first opened it had three routes: Line 1 (yellow) from Wimbledon to Elmers End, Line 2 (red) from Croydon to Beckenham Junction, and Line 3 (green) from Croydon to New Addington. On 23 July 2006 the network was restructured, with Route 1 from Elmers End to Croydon, Route 2 from Beckenham Junction to Croydon and Route 3 from New Addington to Wimbledon. On 25 June 2012 Route 4 from Therapia Lane to Elmers End was introduced. On Monday 4 April 2016, Route 4 was extended from Therapia Lane to Wimbledon. On 25 February 2018, the network and timetables were restructured again for more even and reliable services. As part of this change, trams no longer display route numbers on their dot-matrix destination screens. This resulted in three routes: New Addington to West Croydon, returning to New Addington, every 7–8 minutes (every 10 minutes during Sunday shopping hours and every 15 minutes in the late evening). Wimbledon to Beckenham Junction every 10 minutes (every 15 minutes on Sundays and in the late evening). Wimbledon to Elmers End every 10 minutes (every 15 minutes on Sundays; in the late evening the service runs every 15 minutes and terminates at Croydon). Additionally, the first two trams of the day from New Addington run through to Wimbledon. Overall, this meant a decrease of 2 trams per hour (tph) leaving Elmers End, a 25% decrease in capacity there, and a 14% decrease in the Addiscombe area. However, this also evened out waiting times in this area and on the Wimbledon branch to every 5 minutes, from the previous 2–7 minutes.
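The 2018 frequency figures above amount to straightforward headway arithmetic. The short sketch below (Python; the "before" figure of 8 tph towards Elmers End is an assumption inferred from the stated 25% reduction, not taken from any TfL source) shows how a 10-minute headway corresponds to 6 trams per hour and how a cut of 2 tph yields the quoted percentage.

# Rough sketch of the headway arithmetic behind the 2018 service-change figures above.
def trams_per_hour(headway_minutes):
    return 60 / headway_minutes

before_elmers_end = 8                       # assumed pre-2018 trams per hour towards Elmers End
after_elmers_end = before_elmers_end - 2    # the stated reduction of 2 tph

capacity_change = (after_elmers_end - before_elmers_end) / before_elmers_end
print(trams_per_hour(10), f"{capacity_change:.0%}")   # 6.0 trams per hour, -25%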
Relation to other curves When a parabola is rolled along a straight line, the roulette curve traced by its focus is a catenary. The envelope of the directrix of the parabola is also a catenary. The involute from the vertex, that is the roulette traced by a point starting at the vertex when a line is rolled on a catenary, is the tractrix. Another roulette, formed by rolling a line on a catenary, is another line. This implies that square wheels can roll perfectly smoothly on a road made of a series of bumps in the shape of an inverted catenary curve. The wheels can be any regular polygon except a triangle, but the catenary must have parameters corresponding to the shape and dimensions of the wheels. Geometrical properties Over any horizontal interval, the ratio of the area under the catenary to its length equals , independent of the interval selected. The catenary is the only plane curve other than a horizontal line with this property. Also, the geometric centroid of the area under a stretch of catenary is the midpoint of the perpendicular segment connecting the centroid of the curve itself and the x-axis. Science A moving charge in a uniform electric field travels along a catenary (which tends to a parabola if the charge velocity is much less than the speed of light c). The surface of revolution with fixed radii at either end that has minimum surface area is a catenary revolved about the x-axis. Analysis Model of chains and arches In the mathematical model the chain (or cord, cable, rope, string, etc.) is idealized by assuming that it is so thin that it can be regarded as a curve and that it is so flexible that any force of tension exerted by the chain is parallel to the chain. The analysis of the curve for an optimal arch is similar except that the forces of tension become forces of compression and everything is inverted. An underlying principle is that the chain may be considered a rigid body once it has attained equilibrium. Equations which define the shape of the curve and the tension of the chain at each point may be derived by a careful inspection of the various forces acting on a segment using the fact that these forces must be in balance if the chain is in static equilibrium. Let the path followed by the chain be given parametrically by where represents arc length and is the position vector. This is the natural parameterization and has the property that where is a unit tangent vector. A differential equation for the curve may be derived as follows. Let be the lowest point on the chain, called the vertex of the catenary. The slope of the curve is zero at C since it is a minimum point. Assume is to the right of since the other case is implied by symmetry. The forces acting on the section of the chain from to are the tension of the chain at , the tension of the chain at , and the weight of the chain. The tension at is tangent to the curve at and is therefore horizontal without any vertical component and it pulls the section to the left so it may be written where is the magnitude of the force. The tension at is parallel to the curve at and pulls the section to the right. The tension at can be split into two components so it may be written , where is the magnitude of the force and is the angle between the curve at and the -axis (see tangential angle). Finally, the weight of the chain is represented by where is the mass per unit length, is the acceleration of gravity and is the length of the segment of chain between and . The chain is in equilibrium so the sum of the three forces is zero, therefore and and dividing these gives It is convenient to write which is the length of chain whose weight is equal in magnitude to the tension at . Then is an equation defining the curve.
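The inline formulas in the force-balance argument above were lost in extraction. The following is a reconstruction, in standard notation, of the relations the text describes (horizontal tension T0 at the vertex, tension T at angle phi at the point considered, mass per unit length lambda, gravitational acceleration g, arc length s measured from the vertex); it is offered as the conventional form of the derivation rather than a verbatim restoration of the original display equations.

% Reconstructed force-balance relations for the hanging chain (standard notation).
T\cos\varphi = T_0, \qquad T\sin\varphi = \lambda g s,
\qquad\text{so that}\qquad \tan\varphi = \frac{s}{a}, \qquad a = \frac{T_0}{\lambda g}.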
The horizontal component of the tension, is constant and the vertical component of the tension, is proportional to the length of chain between and the vertex. Derivation of equations for the curve The differential equation given above can be solved to produce equations for the curve. From the formula for arc length gives Then and The second of these equations can be integrated to give and by shifting the position of the -axis, can be taken to be 0. Then The -axis thus chosen is called the directrix of the catenary. It follows that the magnitude of the tension at a point is , which is proportional to the distance between the point and the directrix. This tension may also be expressed as . The integral of the expression for can be found using standard techniques, giving and, again, by shifting the position of the -axis, can be taken to be 0. Then The -axis thus chosen passes through the vertex and is called the axis of the catenary. These results can be used to eliminate giving Alternative derivation The differential equation can be solved using a different approach. From it follows that and Integrating gives, and As before, the and -axes can be shifted so and can be taken to be 0. Then and taking the reciprocal of both sides Adding and subtracting the last two equations then gives the solution and Determining parameters In general the parameter is the position of the axis. The equation can be determined in this case as follows: Relabel if necessary so that is to the left of and let be the horizontal and be the vertical distance from to . Translate the axes so that the vertex of the catenary lies on the -axis and its height is adjusted so the catenary satisfies the standard equation of the curve and let the coordinates of and be and respectively. The curve passes through these points, so the difference of height is and the length of the curve from to is When is expanded using these expressions the result is so This is a transcendental equation in and must be solved numerically. It can be shown with the methods of calculus that there is at most one solution with and so there is at most one position of equilibrium. However, if both ends of the curve ( and ) are at the same level (), it can be shown that where L is the total length of the curve between and and is the sag (vertical distance between , and the vertex of the curve). It can also be shown that and where H is the horizontal distance between and which are located at the same level (). The horizontal traction force at and is , where is the mass per unit length of the chain or cable. Variational formulation Consider a chain of length suspended from two points of equal height and at distance . The curve has to minimize its potential energy and is subject to the constraint . The modified Lagrangian is therefore where is the Lagrange multiplier to be determined. As the independent variable does not appear in the Lagrangian, we can use the Beltrami identity where is an integration constant, in order to obtain a first integral This is an ordinary first order differential equation that can be solved by the method of separation of variables. Its solution is the usual hyperbolic cosine where the parameters are obtained from the constraints. Generalizations with vertical force Nonuniform chains If the density of the chain is variable then the analysis above can be adapted to produce equations for the curve given the density, or given the curve to find the density. 
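The "Determining parameters" step above ends in a transcendental equation that must be solved numerically. A minimal sketch of that step is given below in Python, under the usual formulation: supports separated by h horizontally and v vertically, joined by a chain of length L (with L greater than the straight-line distance), and the unknown parameter a satisfying sqrt(L^2 - v^2) = 2a sinh(h/(2a)). The function names are ours, and plain bisection is used purely for simplicity.

import math

# Solve sqrt(L**2 - v**2) = 2*a*sinh(h/(2*a)) for the catenary parameter a.
# Requires L > sqrt(h**2 + v**2), i.e. the chain is longer than the straight line
# between the supports.
def catenary_parameter(h, v, L):
    target = math.sqrt(L * L - v * v)
    g = lambda a: 2.0 * a * math.sinh(h / (2.0 * a))   # decreases from +inf towards h as a grows
    hi = max(h, 1.0) * 1e6                             # very large a: nearly straight chain
    lo = hi
    while g(lo) <= target:                             # bracket the root from below
        lo *= 0.5
    for _ in range(200):                               # plain bisection on the bracket
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > target else (lo, mid)
    return 0.5 * (lo + hi)

# Example: level supports 10 m apart joined by 12 m of chain (about a = 4.69, sag = 2.92).
a = catenary_parameter(10.0, 0.0, 12.0)
sag = a * (math.cosh(5.0 / a) - 1.0)
print(round(a, 3), round(sag, 3))

For level supports, the value of a obtained this way can be cross-checked against the sag relation mentioned in the text.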
Let denote the weight per unit length of the chain, then the weight of the chain has magnitude where the limits of integration are and . Balancing forces as in the uniform chain produces and and therefore Differentiation then gives In terms of and the radius of curvature this becomes Suspension bridge curve A similar analysis can be done to find the curve followed by the cable supporting a suspension bridge with a horizontal roadway. If the weight of the roadway per unit length is and the weight of the cable and the wire supporting the bridge is negligible in comparison, then the weight on the cable (see the figure in Catenary#Model of chains and arches) from to is where is the horizontal distance between and . Proceeding as before gives the differential equation This is solved by simple integration to get and so the cable follows a parabola. If the weight of the cable and supporting wires is not negligible then the analysis is more complex. Catenary of equal strength In a catenary of equal strength, the cable is strengthened according to the magnitude of the tension at each point, so its resistance to breaking is constant along its length. Assuming that the strength of the cable is proportional to its density per unit length, the weight, , per unit length of the chain can be written , where is constant, and the analysis for nonuniform chains can be applied. In this case the equations for tension are Combining gives and by differentiation where is the radius of curvature. The solution to this is In this case, the curve has vertical asymptotes and this limits the span to . Other relations are The curve was studied 1826 by Davies Gilbert and, apparently independently, by Gaspard-Gustave Coriolis in 1836. Recently, it was shown that this type of catenary could act as a building block of electromagnetic metasurface and was known as "catenary of equal phase gradient". Elastic catenary In an elastic catenary, the chain is replaced by a spring which can stretch in response to tension. The spring is assumed to stretch in accordance with Hooke's Law. Specifically, if is the natural length of a section of spring, then the length of the spring with tension applied has length where is a constant equal to , where is the stiffness of the spring. In the catenary the value of is variable, but ratio remains valid at a
1744 that the catenary is the curve which, when rotated about the -axis, gives the surface of minimum surface area (the catenoid) for the given bounding circles. Nicolas Fuss gave equations describing the equilibrium of a chain under any force in 1796. Inverted catenary arch Catenary arches are often used in the construction of kilns. To create the desired curve, the shape of a hanging chain of the desired dimensions is transferred to a form which is then used as a guide for the placement of bricks or other building material. The Gateway Arch in St. Louis, Missouri, United States is sometimes said to be an (inverted) catenary, but this is incorrect. It is close to a more general curve called a flattened catenary, with equation , which is a catenary if . While a catenary is the ideal shape for a freestanding arch of constant thickness, the Gateway Arch is narrower near the top. According to the U.S. National Historic Landmark nomination for the arch, it is a "weighted catenary" instead. Its shape corresponds to the shape that a weighted chain, having lighter links in the middle, would form. The logo for McDonald's, the Golden Arches, while intended to be two joined parabolas, is also based on the catenary. Catenary bridges In free-hanging chains, the force exerted is uniform with respect to length of the chain, and so the chain follows the catenary curve. The same is true of a simple suspension bridge or "catenary bridge," where the roadway follows the cable. A stressed ribbon bridge is a more sophisticated structure with the same catenary shape. However, in a suspension bridge with a suspended roadway, the chains or cables support the weight of the bridge, and so do not hang freely. In most cases the roadway is flat, so when the weight of the cable is negligible compared with the weight being supported, the force exerted is uniform with respect to horizontal distance, and the result is a parabola, as discussed below (although the term "catenary" is often still used, in an informal sense). If the cable is heavy then the resulting curve is between a catenary and a parabola. Anchoring of marine objects The catenary produced by gravity provides an advantage to heavy anchor rodes. An anchor rode (or anchor line) usually consists of chain or cable or both. Anchor rodes are used by ships, oil rigs, docks, floating wind turbines, and other marine equipment which must be anchored to the seabed. When the rope is slack, the catenary curve presents a lower angle of pull on the anchor or mooring device than would be the case if it were nearly straight. This enhances the performance of the anchor and raises the level of force it will resist before dragging. To maintain the catenary shape in the presence of wind, a heavy chain is needed, so that only larger ships in deeper water can rely on this effect. Smaller boats also rely on catenary to maintain maximum holding power. Mathematical description Equation The equation of a catenary in Cartesian coordinates has the form where is the hyperbolic cosine function, and where is measured from the lowest point. All catenary curves are similar to each other, since changing the parameter is equivalent to a uniform scaling of the curve. The Whewell equation for the catenary is Differentiating gives and eliminating gives the Cesàro equation The radius of curvature is then which is the length of the line normal to the curve between it and the -axis. 
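The displayed equations referred to in the paragraph above did not survive extraction. In the usual notation, with parameter a, arc length s, tangential angle phi, curvature kappa and radius of curvature rho, the standard forms are as follows; they are supplied from the well-known results rather than recovered from the original text.

% Standard catenary relations: Cartesian equation, Whewell equation, Cesaro equation,
% and radius of curvature.
y = a\cosh\!\left(\frac{x}{a}\right), \qquad
\tan\varphi = \frac{s}{a} \quad\text{(Whewell)}, \qquad
\kappa = \frac{a}{s^{2}+a^{2}} \quad\text{(Ces\`aro)}, \qquad
\rho = a\cosh^{2}\!\left(\frac{x}{a}\right) = \frac{y^{2}}{a}.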
Sun over the course of the day is mainly a result of the scattering of sunlight and is not due to changes in black-body radiation. Rayleigh scattering of sunlight by Earth's atmosphere causes the blue color of the sky, which tends to scatter blue light more than red light. Some daylight in the early morning and late afternoon (the golden hours) has a lower ("warmer") color temperature due to increased scattering of shorter-wavelength sunlight by atmospheric particles – an optical phenomenon called the Tyndall effect. Daylight has a spectrum similar to that of a black body with a correlated color temperature of 6500 K (D65 viewing standard) or 5500 K (daylight-balanced photographic film standard). For colors based on black-body theory, blue occurs at higher temperatures, whereas red occurs at lower temperatures. This is the opposite of the cultural associations attributed to colors, in which "red" is "hot", and "blue" is "cold". Applications Lighting For lighting building interiors, it is often important to take into account the color temperature of illumination. A warmer (i.e., a lower color temperature) light is often used in public areas to promote relaxation, while a cooler (higher color temperature) light is used to enhance concentration, for example in schools and offices. CCT dimming for LED technology is regarded as a difficult task, since binning, age and temperature drift effects of LEDs change the actual color value output. Here feedback loop systems are used, for example with color sensors, to actively monitor and control the color output of multiple color mixing LEDs. Aquaculture In fishkeeping, color temperature has different functions and foci in the various branches. In freshwater aquaria, color temperature is generally of concern only for producing a more attractive display. Lights tend to be designed to produce an attractive spectrum, sometimes with secondary attention paid to keeping the plants in the aquaria alive. In a saltwater/reef aquarium, color temperature is an essential part of tank health. Within about 400 to 3000 nanometers, light of shorter wavelength can penetrate deeper into water than longer wavelengths, providing essential energy sources to the algae hosted in (and sustaining) coral. This is equivalent to an increase of color temperature with water depth in this spectral range. Because coral typically live in shallow water and receive intense, direct tropical sunlight, the focus was once on simulating this situation with 6500 K lights. In the meantime, higher temperature light sources have become more popular, first with 10000 K and more recently 16000 K and 20000 K. Actinic lighting at the violet end of the visible range (420–460 nm) is used to allow night viewing without increasing algae bloom or enhancing photosynthesis, and to make the somewhat fluorescent colors of many corals and fish "pop", creating brighter display tanks. Digital photography In digital photography, the term color temperature sometimes refers to remapping of color values to simulate variations in ambient color temperature. Most digital cameras and raw image software provide presets simulating specific ambient values (e.g., sunny, cloudy, tungsten, etc.) while others allow explicit entry of white balance values in kelvins. These settings vary color values along the blue–yellow axis, while some software includes additional controls (sometimes labeled "tint") adding the magenta–green axis, and are to some extent arbitrary and a matter of artistic interpretation. 
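The blue–yellow remapping described above ultimately comes down to per-channel gains. The toy sketch below (Python; the function names and the sample values are illustrative, not any camera's API) shows the simplest version: gains derived from a patch that should be neutral, normalised to the green channel, then applied to each pixel.

# Minimal white-balance sketch: derive channel gains from a patch that should be grey,
# then apply them. Values and names are illustrative only.
def white_balance_gains(neutral_rgb):
    r, g, b = neutral_rgb
    return (g / r, 1.0, g / b)          # scale red and blue so the patch becomes neutral

def apply_gains(pixel, gains):
    return tuple(min(255, round(c * k)) for c, k in zip(pixel, gains))

gains = white_balance_gains((180, 200, 255))   # bluish cast, e.g. open shade
print(apply_gains((180, 200, 255), gains))     # -> (200, 200, 200)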
Photographic film Photographic emulsion film does not respond to lighting color identically to the human retina or visual perception. An object that appears to the observer to be white may turn out to be very blue or orange in a photograph. The color balance may need to be corrected during printing to achieve a neutral color print. The extent of this correction is limited since color film normally has three layers sensitive to different colors and when used under the "wrong" light source, every layer may not respond proportionally, giving odd color casts in the shadows, although the mid-tones may have been correctly white-balanced under the enlarger. Light sources with discontinuous spectra, such as fluorescent tubes, cannot be fully corrected in printing either, since one of the layers may barely have recorded an image at all. Photographic film is made for specific light sources (most commonly daylight film and tungsten film), and, used properly, will create a neutral color print. Matching the sensitivity of the film to the color temperature of the light source is one way to balance color. If tungsten film is used indoors with incandescent lamps, the yellowish-orange light of the tungsten incandescent lamps will appear as white (3200 K) in the photograph. Color negative film is almost always daylight-balanced, since it is assumed that color can be adjusted in printing (with limitations, see above). Color transparency film, being the final artefact in the process, has to be matched to the light source or filters must be used to correct color. Filters on a camera lens, or color gels over the light source(s) may be used to correct color balance. When shooting with a bluish light (high color temperature) source such as on an overcast day, in the shade, in window light, or if using tungsten film with white or blue light, a yellowish-orange filter will correct this. For shooting with daylight film (calibrated to 5600 K) under warmer (low color temperature) light sources such as sunsets, candlelight or tungsten lighting, a bluish (e.g. #80A) filter may be used. More-subtle filters are needed to correct for the difference between, say 3200 K and 3400 K tungsten lamps or to correct for the slightly blue cast of some flash tubes, which may be 6000 K. If there is more than one light source with varied color temperatures, one way to balance the color is to use daylight film and place color-correcting gel filters over each light source. Photographers sometimes use color temperature meters. These are usually designed to read only two regions along the visible spectrum (red and blue); more expensive ones read three regions (red, green, and blue). However, they are ineffective with sources such as fluorescent or discharge lamps, whose light varies in color and may be harder to correct for. Because this light is often greenish, a magenta filter may correct it. More sophisticated colorimetry tools can be used if such meters are lacking. Desktop publishing In the desktop publishing industry, it is important to know a monitor's color temperature. Color matching software, such as Apple's ColorSync for Mac OS, measures a monitor's color temperature and then adjusts its settings accordingly. This enables on-screen color to more closely match printed color. 
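The filter rules for film described above reduce to comparing the light source's color temperature with the film's balance point. A toy restatement in Python follows; the balance values 5600 K and 3200 K are taken from the text, while the function name and output strings are illustrative only.

# Toy restatement of the film-filter rules above: warm sources on daylight film call for
# a bluish (80-series) filter, bluish sources on tungsten film for a yellowish-orange one.
FILM_BALANCE_K = {"daylight": 5600, "tungsten": 3200}

def suggested_filter(film, source_k):
    balance = FILM_BALANCE_K[film]
    if source_k < balance:
        return "bluish filter (e.g. an 80-series) to cool the warm source"
    if source_k > balance:
        return "yellowish-orange filter to warm the bluish source"
    return "no correction needed"

print(suggested_filter("daylight", 3200))   # tungsten lamps on daylight film
print(suggested_filter("tungsten", 6000))   # flash tubes on tungsten film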
Common monitor color temperatures, along with matching standard illuminants in parentheses, are as follows: 5000 K (CIE D50) 5500 K (CIE D55) 6500 K (D65) 7500 K (CIE D75) 9300 K D50 is scientific shorthand for a standard illuminant: the daylight spectrum at a correlated color temperature of 5000 K. Similar definitions exist for D55, D65 and D75. Designations such as D50 are used to help classify color temperatures of light tables and viewing booths. When viewing a color slide at a light table, it is important that the light be balanced properly so that the colors are not shifted towards the red or blue. Digital cameras, web graphics, DVDs, etc., are normally designed for a 6500 K color temperature. The sRGB standard commonly used for images on the Internet stipulates (among other things) a 6500 K display white point. TV, video, and digital still cameras The NTSC and PAL TV norms call for a compliant TV screen to display an electrically black and white signal (minimal color saturation) at a color temperature of 6500 K. On many consumer-grade televisions, there is a very noticeable deviation from this requirement. However, higher-end consumer-grade televisions can have their color temperatures adjusted to 6500 K by using a preprogrammed setting or a custom calibration. Current versions of ATSC explicitly call for the color temperature data to be included in the data stream, but old versions of ATSC allowed this data to be omitted. In this case, current versions of ATSC cite default colorimetry standards depending on the format. Both of the cited standards specify a 6500 K color temperature. Most video and digital still cameras can adjust for color temperature by zooming into a white or neutral colored object and setting the manual "white balance" (telling the camera that "this object is white"); the camera then shows true white as white and adjusts all the other colors accordingly. White-balancing is necessary especially when indoors under fluorescent lighting and when moving the camera from one lighting situation to another. Most cameras also have an automatic white balance function that attempts to determine the color of the light and correct accordingly. While these settings were once unreliable, they are much improved in today's digital cameras and produce an accurate white balance in a wide variety of lighting situations. Artistic application via control of color temperature Video camera operators can white-balance objects that are not white, downplaying the color of the object used for white-balancing. For instance, they can bring more warmth into a picture by white-balancing off something that is light blue, such as faded blue denim; in this way white-balancing can replace a filter or lighting gel when those are not available. Cinematographers do not "white balance" in the same way as video camera operators; they use techniques such as filters, choice of film stock, pre-flashing, and, after shooting, color grading, both by exposure at the labs and also digitally. Cinematographers also work closely with set designers and lighting crews to achieve the desired color effects. For artists, most pigments and papers have a cool or warm cast, as the human eye can detect even a minute amount of saturation. Gray mixed with yellow, orange, or red is a "warm gray". Green, blue, or purple create "cool grays". Note that this sense of temperature is the reverse of that of real temperature; bluer is described as "cooler" even though it corresponds to a higher-temperature black body. 
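The D-series illuminants listed above lie on the CIE daylight locus, whose chromaticity can be approximated directly from the correlated color temperature. The sketch below (Python) uses the published CIE daylight-locus polynomials; it is included as an illustration rather than as a colorimetric reference implementation.

# CIE daylight-locus approximation: xy chromaticity of the D illuminant at CCT T (kelvins).
def daylight_chromaticity(T):
    if 4000 <= T <= 7000:
        x = 0.244063 + 0.09911e3 / T + 2.9678e6 / T**2 - 4.6070e9 / T**3
    elif 7000 < T <= 25000:
        x = 0.237040 + 0.24748e3 / T + 1.9018e6 / T**2 - 2.0064e9 / T**3
    else:
        raise ValueError("outside the 4000-25000 K range of the approximation")
    y = -3.000 * x * x + 2.870 * x - 0.275
    return x, y

# D65 is defined at about 6504 K under the current value of the radiation constant c2:
print(tuple(round(c, 4) for c in daylight_chromaticity(6504)))   # approx (0.3127, 0.3291)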
Lighting designers sometimes select filters by color temperature, commonly to match light that is theoretically white. Since fixtures using discharge type lamps produce a light of a considerably higher color temperature than do tungsten lamps, using the two in conjunction could potentially produce a stark contrast, so sometimes fixtures with HID lamps, commonly producing light of 6000–7000 K, are fitted with 3200 K filters to emulate tungsten light. Fixtures with color mixing features or with multiple colors (if including 3200 K), are also capable of producing tungsten-like light. Color temperature may also be a factor when selecting lamps, since
each is likely to have a different color temperature. Correlated color temperature Motivation Black-body radiators are the reference by which the whiteness of light sources is judged. A black body can be described by its temperature and produces light of a particular hue, as depicted above. This set of colors is called color temperature. By analogy, nearly Planckian light sources such as certain fluorescent or high-intensity discharge lamps can be judged by their correlated color temperature (CCT), the temperature of the Planckian radiator whose color best approximates them.
For light source spectra that are not Planckian, matching them to that of a black body is necessarily inexact, so the correlated color temperature is only an approximate, one-dimensional summary of such a source.
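The manual white-balance procedure described earlier, in which the camera is aimed at a white or neutral object and then rescales all other colors accordingly, can be sketched as a simple per-channel gain. The helper name white_balance, the array shapes, and the sample "gray card" value below are assumptions made for illustration; real cameras work on raw sensor data and use more elaborate chromatic-adaptation models.

```python
# Minimal sketch of manual white balance with NumPy: compute per-channel gains
# that turn a user-designated neutral patch into gray, then apply those same
# gains to the whole image. The sample values are illustrative assumptions.
import numpy as np

def white_balance(image: np.ndarray, neutral_rgb: np.ndarray) -> np.ndarray:
    """image: float array of shape (H, W, 3) in [0, 1]; neutral_rgb: mean RGB of the reference patch."""
    gains = neutral_rgb.mean() / neutral_rgb  # gains that equalize the reference patch's channels
    return np.clip(image * gains, 0.0, 1.0)   # the same correction is applied to every pixel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((4, 4, 3))                # stand-in for a photograph
    warm_patch = np.array([0.80, 0.62, 0.45])  # a "white" card photographed under warm (low CCT) light
    balanced = white_balance(img, warm_patch)
    # After correction, the reference color itself maps to a neutral gray:
    print(np.round(warm_patch * (warm_patch.mean() / warm_patch), 3))  # [0.623 0.623 0.623]
```

Automatic white balance does something similar without a reference patch, for example by assuming the scene averages to gray and deriving the gains from the image as a whole, which is why it can fail under strongly colored lighting.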
Creators of comic strips, as well as comic books and graphic novels, are usually referred to as "cartoonists". Although humor is the most prevalent subject matter, adventure and drama are also represented in this medium. Some noteworthy cartoonists of humorous comic strips are Scott Adams, Charles Schulz, E. C. Segar, Mort Walker and Bill Watterson. Political Political cartoons are illustrated editorials that serve as visual commentary on political events. They offer subtle criticism, couched in humour and satire so that the person criticized is not embittered. The pictorial satire of William Hogarth is regarded as a precursor to the development of political cartoons in 18th-century England. George Townshend produced some of the first overtly political cartoons and caricatures in the 1750s. The medium began to develop in the latter part of the 18th century under the direction of its great exponents, James Gillray and Thomas Rowlandson, both from London. Gillray explored the use of the medium for lampooning and caricature, and has been referred to as the father of the political cartoon. He called the king, prime ministers and generals to account for their behaviour: many of his satires were directed against George III, depicting him as a pretentious buffoon, while the bulk of his work was dedicated to ridiculing the ambitions of revolutionary France and Napoleon. George Cruikshank became the leading cartoonist in the period following Gillray, from 1815 until the 1840s. He was renowned for his social caricatures of English life, produced for popular publications. By the mid-19th century, major political newspapers in many other countries featured cartoons commenting on the politics of the day. Thomas Nast, in New York City, showed how realistic German drawing techniques could redefine American cartooning. His 160 cartoons relentlessly pursued the criminal character of the Tweed machine in New York City, and helped bring it down; indeed, Tweed was arrested in Spain when police identified him from Nast's cartoons. In Britain, Sir John Tenniel was the toast of London. In France under the July Monarchy, Honoré Daumier took up the new genre of political and social caricature, most famously lampooning the rotund King Louis Philippe. Political cartoons can be humorous or satirical, sometimes with piercing effect. The target of the humor may complain, but can seldom fight back. Lawsuits have been very rare; the first successful lawsuit against a cartoonist in over a century in Britain came in 1921, when J. H. Thomas, the leader of the National Union of Railwaymen (NUR), initiated libel proceedings against the magazine of the British Communist Party. Thomas claimed defamation in the form of cartoons and words depicting the events of "Black Friday", when he allegedly betrayed the locked-out Miners' Federation. To Thomas, the framing of his image by the far left threatened to grievously degrade his character in the popular imagination. Soviet-inspired communism was a new element in European politics, and cartoonists unrestrained by tradition tested the boundaries of libel law. Thomas won the lawsuit and restored his reputation. Scientific Cartoons such as xkcd have also found their place in the world of science, mathematics, and technology. For example, the cartoon Wonderlab looked at daily life in the chemistry lab. In the U.S., one well-known cartoonist for these fields is Sidney Harris. Many of Gary Larson's cartoons have a scientific flavor.
Comic books Books with cartoons are usually magazine-format "comic books," or occasionally reprints of newspaper cartoons. In Britain in the 1930s adventure magazines became quite popular, especially those published by DC Thomson; the publisher sent observers around the country to talk to boys and learn what they wanted to read about. The story line in magazines, comic books and cinema that most appealed to boys was the glamorous heroism of British soldiers fighting wars that were exciting and just. D.C. Thomson issued the first The Dandy Comic in December 1937. It had a revolutionary design that broke away from the usual children's comics that were published broadsheet in size and not very colourful. Thomson capitalized on its success with a similar product The Beano in 1938. On some occasions, new gag cartoons have been created for book publication, as was the case with Think Small, a 1967 promotional book distributed as a giveaway by Volkswagen dealers. Bill Hoest and other cartoonists of that decade drew cartoons showing Volkswagens, and these were published along with humorous automotive essays by such humorists as H. Allen Smith, Roger Price and Jean Shepherd. The book's design juxtaposed each cartoon alongside a photograph of the cartoon's creator. Animation Because of the stylistic similarities between comic strips and early animated films, cartoon came to refer to animation, and the word cartoon is currently used in reference to both animated cartoons and gag cartoons. While animation designates any style of illustrated images seen in rapid succession to give the impression of
A cartoon is either an image or series of images intended for satire, caricature, or humor, or a motion picture that relies on a sequence of illustrations for its animation. Someone who creates cartoons in the first sense is called a cartoonist, and in the second sense they are usually called an animator. The concept originated in the Middle Ages, and first described a preparatory drawing for a piece of art, such as a painting, fresco, tapestry, or stained glass window. In the 19th century, beginning in Punch magazine in 1843, cartoon came to refer – ironically at first – to humorous illustrations in magazines and newspapers. Then it also was used for political cartoons and comic strips. When the medium developed, in the early 20th century, it began to refer to animated films which resembled print cartoons. Fine art A cartoon (from the Italian cartone and the Dutch karton, words describing strong, heavy paper or pasteboard) is a full-size drawing made on sturdy paper as a design or modello for a painting, stained glass, or tapestry. Cartoons were typically used in the production of frescoes, to accurately link the component parts of the composition when painted on damp plaster over a series of days (giornate). In media such as tapestry or stained glass, the cartoon was handed over by the artist to the skilled craftsmen who produced the final work. Such cartoons often have pinpricks along the outlines of the design so that a bag of soot patted or "pounced" over a cartoon, held against the wall, would leave black dots on the plaster ("pouncing"). Cartoons by painters, such as the Raphael Cartoons in London, and examples by Leonardo da Vinci, are highly prized in their own right. Tapestry cartoons, usually colored, were followed with the eye by the weavers on the loom. Mass media In print media, a cartoon is an illustration or series of illustrations, usually humorous in intent. This usage dates from 1843, when Punch magazine applied the term to satirical drawings in its pages, particularly sketches by John Leech. The first of these parodied the preparatory cartoons for grand historical frescoes in the then-new Palace of Westminster. The original title for these drawings was Mr Punch's face is the letter Q and the new title "cartoon" was intended to be ironic, a reference to the self-aggrandizing posturing of Westminster politicians. Cartoons can be divided into gag cartoons, which include editorial cartoons, and comic strips. Modern single-panel gag cartoons, found in magazines, generally consist of a single drawing with a typeset caption positioned beneath or, less often, a speech balloon. Newspaper syndicates have also distributed single-panel gag cartoons by Mel Calman, Bill Holman, Gary Larson, George Lichty, Fred Neher and others. Many consider New Yorker cartoonist Peter Arno the father of the modern gag cartoon (as did Arno himself). The roster of magazine gag cartoonists includes Charles Addams, Charles Barsotti, and Chon Day. Bill Hoest, Jerry Marcus, and Virgil Partch began as magazine gag cartoonists and moved to syndicated comic strips. Richard Thompson illustrated numerous feature articles in The Washington Post before creating his Cul de Sac comic strip. The sports section of newspapers usually featured cartoons, sometimes including syndicated features such as Chester "Chet" Brown's All in Sport. Editorial cartoons
Following the August 2016 election, the chief minister is Michael Gunner of the Labor Party. He is the first chief minister to have been born in the Northern Territory. History The Country Liberal Party won the first Northern Territory election on 19 October 1974 and elected Goff Letts Majority Leader. He headed an Executive that carried out most of the functions of a ministry at the state level. At the 1977 election Letts lost his seat and the party leadership. He was succeeded on 13 August 1977 by Paul Everingham (CLP) as Majority Leader. When the Territory attained self-government on 1 July 1978, Everingham became Chief Minister with greatly expanded powers. In 2001, Clare Martin became the first Labor and first female chief minister of the Northern Territory. Until 2004 the conduct of elections and the drawing of electoral boundaries were performed by the Northern Territory Electoral Office, a unit of the Department of the Chief Minister. In March 2004 the independent Northern Territory Electoral Commission was established. In 2013, Mills was replaced as Chief Minister and CLP leader by Adam Giles at the 2013 CLP leadership ballot on 13 March; Giles thereby became the first Indigenous Australian to lead a state or territory government in Australia. Following the landslide outcome of the 2016 election, Labor's Michael Gunner became Chief Minister. List of chief ministers of the Northern Territory From the foundation of the Northern Territory Legislative Assembly in 1974 until the granting of self-government in 1978, the head of government was known as the majority leader: From 1978, the position was known as the chief minister: See also List of chief ministers of the
neoplasia after successful chemotherapy or radiotherapy treatment can occur. The most common secondary neoplasm is secondary acute myeloid leukemia, which develops primarily after treatment with alkylating agents or topoisomerase inhibitors. Survivors of childhood cancer are more than 13 times as likely to get a secondary neoplasm during the 30 years after treatment than the general population. Not all of this increase can be attributed to chemotherapy. Infertility Some types of chemotherapy are gonadotoxic and may cause infertility. Chemotherapies with high risk include procarbazine and other alkylating drugs such as cyclophosphamide, ifosfamide, busulfan, melphalan, chlorambucil, and chlormethine. Drugs with medium risk include doxorubicin and platinum analogs such as cisplatin and carboplatin. On the other hand, therapies with low risk of gonadotoxicity include plant derivatives such as vincristine and vinblastine, antibiotics such as bleomycin and dactinomycin, and antimetabolites such as methotrexate, mercaptopurine, and 5-fluorouracil. Female infertility by chemotherapy appears to be secondary to premature ovarian failure by loss of primordial follicles. This loss is not necessarily a direct effect of the chemotherapeutic agents, but could be due to an increased rate of growth initiation to replace damaged developing follicles. People may choose between several methods of fertility preservation prior to chemotherapy, including cryopreservation of semen, ovarian tissue, oocytes, or embryos. As more than half of cancer patients are elderly, this adverse effect is only relevant for a minority of patients. A study in France between 1999 and 2011 came to the result that embryo freezing before administration of gonadotoxic agents to females caused a delay of treatment in 34% of cases, and a live birth in 27% of surviving cases who wanted to become pregnant, with the follow-up time varying between 1 and 13 years. Potential protective or attenuating agents include GnRH analogs, where several studies have shown a protective effect in vivo in humans, but some studies show no such effect. Sphingosine-1-phosphate (S1P) has shown similar effect, but its mechanism of inhibiting the sphingomyelin apoptotic pathway may also interfere with the apoptosis action of chemotherapy drugs. In chemotherapy as a conditioning regimen in hematopoietic stem cell transplantation, a study of people conditioned with cyclophosphamide alone for severe aplastic anemia came to the result that ovarian recovery occurred in all women younger than 26 years at time of transplantation, but only in five of 16 women older than 26 years. Teratogenicity Chemotherapy is teratogenic during pregnancy, especially during the first trimester, to the extent that abortion usually is recommended if pregnancy in this period is found during chemotherapy. Second- and third-trimester exposure does not usually increase the teratogenic risk and adverse effects on cognitive development, but it may increase the risk of various complications of pregnancy and fetal myelosuppression. In males previously having undergone chemotherapy or radiotherapy, there appears to be no increase in genetic defects or congenital malformations in their children conceived after therapy. The use of assisted reproductive technologies and micromanipulation techniques might increase this risk. In females previously having undergone chemotherapy, miscarriage and congenital malformations are not increased in subsequent conceptions. 
However, when in vitro fertilization and embryo cryopreservation is practised between or shortly after treatment, possible genetic risks to the growing oocytes exist, and hence it has been recommended that the babies be screened. Peripheral neuropathy Between 30 and 40 percent of people undergoing chemotherapy experience chemotherapy-induced peripheral neuropathy (CIPN), a progressive, enduring, and often irreversible condition, causing pain, tingling, numbness and sensitivity to cold, beginning in the hands and feet and sometimes progressing to the arms and legs. Chemotherapy drugs associated with CIPN include thalidomide, epothilones, vinca alkaloids, taxanes, proteasome inhibitors, and the platinum-based drugs. Whether CIPN arises, and to what degree, is determined by the choice of drug, duration of use, the total amount consumed and whether the person already has peripheral neuropathy. Though the symptoms are mainly sensory, in some cases motor nerves and the autonomic nervous system are affected. CIPN often follows the first chemotherapy dose and increases in severity as treatment continues, but this progression usually levels off at completion of treatment. The platinum-based drugs are the exception; with these drugs, sensation may continue to deteriorate for several months after the end of treatment. Some CIPN appears to be irreversible. Pain can often be managed with drug or other treatment but the numbness is usually resistant to treatment. Cognitive impairment Some people receiving chemotherapy report fatigue or non-specific neurocognitive problems, such as an inability to concentrate; this is sometimes called post-chemotherapy cognitive impairment, referred to as "chemo brain" in popular and social media. Tumor lysis syndrome In particularly large tumors and cancers with high white cell counts, such as lymphomas, teratomas, and some leukemias, some people develop tumor lysis syndrome. The rapid breakdown of cancer cells causes the release of chemicals from the inside of the cells. Following this, high levels of uric acid, potassium and phosphate are found in the blood. High levels of phosphate induce secondary hypoparathyroidism, resulting in low levels of calcium in the blood. This causes kidney damage and the high levels of potassium can cause cardiac arrhythmia. Although prophylaxis is available and is often initiated in people with large tumors, this is a dangerous side-effect that can lead to death if left untreated. Organ damage Cardiotoxicity (heart damage) is especially prominent with the use of anthracycline drugs (doxorubicin, epirubicin, idarubicin, and liposomal doxorubicin). The cause of this is most likely due to the production of free radicals in the cell and subsequent DNA damage. Other chemotherapeutic agents that cause cardiotoxicity, but at a lower incidence, are cyclophosphamide, docetaxel and clofarabine. Hepatotoxicity (liver damage) can be caused by many cytotoxic drugs. The susceptibility of an individual to liver damage can be altered by other factors such as the cancer itself, viral hepatitis, immunosuppression and nutritional deficiency. The liver damage can consist of damage to liver cells, hepatic sinusoidal syndrome (obstruction of the veins in the liver), cholestasis (where bile does not flow from the liver to the intestine) and liver fibrosis. Nephrotoxicity (kidney damage) can be caused by tumor lysis syndrome and also due direct effects of drug clearance by the kidneys. 
Different drugs will affect different parts of the kidney and the toxicity may be asymptomatic (only seen on blood or urine tests) or may cause acute kidney injury. Ototoxicity (damage to the inner ear) is a common side effect of platinum based drugs that can produce symptoms such as dizziness and vertigo. Children treated with platinum analogues have been found to be at risk for developing hearing loss. Other side-effects Less common side-effects include red skin (erythema), dry skin, damaged fingernails, a dry mouth (xerostomia), water retention, and sexual impotence. Some medications can trigger allergic or pseudoallergic reactions. Specific chemotherapeutic agents are associated with organ-specific toxicities, including cardiovascular disease (e.g., doxorubicin), interstitial lung disease (e.g., bleomycin) and occasionally secondary neoplasm (e.g., MOPP therapy for Hodgkin's disease). Hand-foot syndrome is another side effect to cytotoxic chemotherapy. Nutritional problems are also frequently seen in cancer patients at diagnosis and through chemotherapy treatment. Research suggests that in children and young people undergoing cancer treatment, parenteral nutrition may help with this leading to weight gain and increased calorie and protein intake, when compared to enteral nutrition. Limitations Chemotherapy does not always work, and even when it is useful, it may not completely destroy the cancer. People frequently fail to understand its limitations. In one study of people who had been newly diagnosed with incurable, stage 4 cancer, more than two-thirds of people with lung cancer and more than four-fifths of people with colorectal cancer still believed that chemotherapy was likely to cure their cancer. The blood–brain barrier poses an obstacle to delivery of chemotherapy to the brain. This is because the brain has an extensive system in place to protect it from harmful chemicals. Drug transporters can pump out drugs from the brain and brain's blood vessel cells into the cerebrospinal fluid and blood circulation. These transporters pump out most chemotherapy drugs, which reduces their efficacy for treatment of brain tumors. Only small lipophilic alkylating agents such as lomustine or temozolomide are able to cross this blood–brain barrier. Blood vessels in tumors are very different from those seen in normal tissues. As a tumor grows, tumor cells furthest away from the blood vessels become low in oxygen (hypoxic). To counteract this they then signal for new blood vessels to grow. The newly formed tumor vasculature is poorly formed and does not deliver an adequate blood supply to all areas of the tumor. This leads to issues with drug delivery because many drugs will be delivered to the tumor by the circulatory system. Resistance Resistance is a major cause of treatment failure in chemotherapeutic drugs. There are a few possible causes of resistance in cancer, one of which is the presence of small pumps on the surface of cancer cells that actively move chemotherapy from inside the cell to the outside. Cancer cells produce high amounts of these pumps, known as p-glycoprotein, in order to protect themselves from chemotherapeutics. Research on p-glycoprotein and other such chemotherapy efflux pumps is currently ongoing. Medications to inhibit the function of p-glycoprotein are undergoing investigation, but due to toxicities and interactions with anti-cancer drugs their development has been difficult. 
Another mechanism of resistance is gene amplification, a process in which multiple copies of a gene are produced by cancer cells. This overcomes the effect of drugs that reduce the expression of genes involved in replication. With more copies of the gene, the drug can not prevent all expression of the gene and therefore the cell can restore its proliferative ability. Cancer cells can also cause defects in the cellular pathways of apoptosis (programmed cell death). As most chemotherapy drugs kill cancer cells in this manner, defective apoptosis allows survival of these cells, making them resistant. Many chemotherapy drugs also cause DNA damage, which can be repaired by enzymes in the cell that carry out DNA repair. Upregulation of these genes can overcome the DNA damage and prevent the induction of apoptosis. Mutations in genes that produce drug target proteins, such as tubulin, can occur which prevent the drugs from binding to the protein, leading to resistance to these types of drugs. Drugs used in chemotherapy can induce cell stress, which can kill a cancer cell; however, under certain conditions, cells stress can induce changes in gene expression that enables resistance to several types of drugs. In lung cancer, the transcription factor NFκB is thought to play a role in resistance to chemotherapy, via inflammatory pathways. Cytotoxics and targeted therapies Targeted therapies are a relatively new class of cancer drugs that can overcome many of the issues seen with the use of cytotoxics. They are divided into two groups: small molecule and antibodies. The massive toxicity seen with the use of cytotoxics is due to the lack of cell specificity of the drugs. They will kill any rapidly dividing cell, tumor or normal. Targeted therapies are designed to affect cellular proteins or processes that are utilised by the cancer cells. This allows a high dose to cancer tissues with a relatively low dose to other tissues. Although the side effects are often less severe than that seen of cytotoxic chemotherapeutics, life-threatening effects can occur. Initially, the targeted therapeutics were supposed to be solely selective for one protein. Now it is clear that there is often a range of protein targets that the drug can bind. An example target for targeted therapy is the BCR-ABL1 protein produced from the Philadelphia chromosome, a genetic lesion found commonly in chronic myelogenous leukemia and in some patients with acute lymphoblastic leukemia. This fusion protein has enzyme activity that can be inhibited by imatinib, a small molecule drug. Mechanism of action Cancer is the uncontrolled growth of cells coupled with malignant behaviour: invasion and metastasis (among other features). It is caused by the interaction between genetic susceptibility and environmental factors. These factors lead to accumulations of genetic mutations in oncogenes (genes that control the growth rate of cells) and tumor suppressor genes (genes that help to prevent cancer), which gives cancer cells their malignant characteristics, such as uncontrolled growth. In the broad sense, most chemotherapeutic drugs work by impairing mitosis (cell division), effectively targeting fast-dividing cells. As these drugs cause damage to cells, they are termed cytotoxic. They prevent mitosis by various mechanisms including damaging DNA and inhibition of the cellular machinery involved in cell division. One theory as to why these drugs kill cancer cells is that they induce a programmed form of cell death known as apoptosis. 
As chemotherapy affects cell division, tumors with high growth rates (such as acute myelogenous leukemia and the aggressive lymphomas, including Hodgkin's disease) are more sensitive to chemotherapy, as a larger proportion of the targeted cells are undergoing cell division at any time. Malignancies with slower growth rates, such as indolent lymphomas, tend to respond to chemotherapy much more modestly. Heterogeneic tumours may also display varying sensitivities to chemotherapy agents, depending on the subclonal populations within the tumor. Cells from the immune system also make crucial contributions to the antitumor effects of chemotherapy. For example, the chemotherapeutic drugs oxaliplatin and cyclophosphamide can cause tumor cells to die in a way that is detectable by the immune system (called immunogenic cell death), which mobilizes immune cells with antitumor functions. Chemotherapeutic drugs that cause cancer immunogenic tumor cell death can make unresponsive tumors sensitive to immune checkpoint therapy. Other uses Some chemotherapy drugs are used in diseases other than cancer, such as in autoimmune disorders, and noncancerous plasma cell dyscrasia. In some cases they are often used at lower doses, which means that the side effects are minimized, while in other cases doses similar to ones used to treat cancer are used. Methotrexate is used in the treatment of rheumatoid arthritis (RA), psoriasis, ankylosing spondylitis and multiple sclerosis. The anti-inflammatory response seen in RA is thought to be due to increases in adenosine, which causes immunosuppression; effects on immuno-regulatory cyclooxygenase-2 enzyme pathways; reduction in pro-inflammatory cytokines; and anti-proliferative properties. Although methotrexate is used to treat both multiple sclerosis and ankylosing spondylitis, its efficacy in these diseases is still uncertain. Cyclophosphamide is sometimes used to treat lupus nephritis, a common symptom of systemic lupus erythematosus. Dexamethasone along with either bortezomib or melphalan is commonly used as a treatment for AL amyloidosis. Recently, bortezomid in combination with cyclophosphamide and dexamethasone has also shown promise as a treatment for AL amyloidosis. Other drugs used to treat myeloma such as lenalidomide have shown promise in treating AL amyloidosis. Chemotherapy drugs are also used in conditioning regimens prior to bone marrow transplant (hematopoietic stem cell transplant). Conditioning regimens are used to suppress the recipient's immune system in order to allow a transplant to engraft. Cyclophosphamide is a common cytotoxic drug used in this manner and is often used in conjunction with total body irradiation. Chemotherapeutic drugs may be used at high doses to permanently remove the recipient's bone marrow cells (myeloablative conditioning) or at lower doses that will prevent permanent bone marrow loss (non-myeloablative and reduced intensity conditioning). When used in non-cancer setting, the treatment is still called "chemotherapy", and is often done in the same treatment centers used for people with cancer. Occupational exposure and safe handling In the 1970s, antineoplastic (chemotherapy) drugs were identified as hazardous, and the American Society of Health-System Pharmacists (ASHP) has since then introduced the concept of hazardous drugs after publishing a recommendation in 1983 regarding handling hazardous drugs. The adaptation of federal regulations came when the U.S. 
Occupational Safety and Health Administration (OSHA) first released its guidelines in 1986 and then updated them in 1996, 1999, and, most recently, 2006. The National Institute for Occupational Safety and Health (NIOSH) has been conducting an assessment in the workplace since then regarding these drugs. Occupational exposure to antineoplastic drugs has been linked to multiple health effects, including infertility and possible carcinogenic effects. A few cases have been reported by the NIOSH alert report, such as one in which a female pharmacist was diagnosed with papillary transitional cell carcinoma. Twelve years before the pharmacist was diagnosed with the condition, she had worked for 20 months in a hospital where she was responsible for preparing multiple antineoplastic drugs. The pharmacist didn't have any other risk factor for cancer, and therefore, her cancer was attributed to the exposure to the antineoplastic drugs, although a cause-and-effect relationship has not been established in the literature. Another case happened when a malfunction in biosafety cabinetry is believed to have exposed nursing personnel to antineoplastic drugs. Investigations revealed evidence of genotoxic biomarkers two and nine months after that exposure. Routes of exposure Antineoplastic drugs are usually given through intravenous, intramuscular, intrathecal, or subcutaneous administration. In most cases, before the medication is administered to the patient, it needs to be prepared and handled by several workers. Any worker who is involved in handling, preparing, or administering the drugs, or with cleaning objects that have come into contact with antineoplastic drugs, is potentially exposed to hazardous drugs. Health care workers are exposed to drugs in different circumstances, such as when pharmacists and pharmacy technicians prepare and handle antineoplastic drugs and when nurses and physicians administer the drugs to patients. Additionally, those who are responsible for disposing antineoplastic drugs in health care facilities are also at risk of exposure. Dermal exposure is thought to be the main route of exposure due to the fact that significant amounts of the antineoplastic agents have been found in the gloves worn by healthcare workers who prepare, handle, and administer the agents. Another noteworthy route of exposure is inhalation of the drugs' vapors. Multiple studies have investigated inhalation as a route of exposure, and although air sampling has not shown any dangerous levels, it is still a potential route of exposure. Ingestion by hand to mouth is a route of exposure that is less likely compared to others because of the enforced hygienic standard in the health institutions. However, it is still a potential route, especially in the workplace, outside of a health institute. One can also be exposed to these hazardous drugs through injection by needle sticks. Research conducted in this area has established that occupational exposure occurs by examining evidence in multiple urine samples from health care workers. Hazards Hazardous drugs expose health care workers to serious health risks. Many studies show that antineoplastic drugs could have many side effects on the reproductive system, such as fetal loss, congenital malformation, and infertility. Health care workers who are exposed to antineoplastic drugs on many occasions have adverse reproductive outcomes such as spontaneous abortions, stillbirths, and congenital malformations. 
Moreover, studies have shown that exposure to these drugs leads to menstrual cycle irregularities. Antineoplastic drugs may also increase the risk of learning disabilities among children of health care workers who are exposed to these hazardous substances. Moreover, these drugs have carcinogenic effects. In the past five decades, multiple studies have shown the carcinogenic effects of exposure to antineoplastic drugs. Similarly, there have been research studies that linked alkylating agents with humans developing leukemias. Studies have reported elevated risk of breast cancer, nonmelanoma skin cancer, and cancer of the rectum among nurses who are exposed to these drugs. Other investigations revealed that there is a potential genotoxic effect from anti-neoplastic drugs to workers in health care settings. Safe handling in health care settings As of 2018, there were no occupational exposure limits set for antineoplastic drugs, i.e., OSHA or the American Conference of Governmental Industrial Hygienists (ACGIH) have not set workplace safety guidelines. Preparation NIOSH recommends using a ventilated cabinet that is designed to decrease worker exposure. Additionally, it recommends training of all staff, the use of cabinets, implementing an initial evaluation of the technique of the safety program, and wearing protective gloves and gowns when opening drug packaging, handling vials, or labeling. When wearing personal protective equipment, one should inspect gloves for physical defects before use and always wear double gloves and protective gowns. Health care workers are also required to wash their hands with water and soap before and after working with antineoplastic drugs, change gloves every 30 minutes or whenever punctured, and discard them immediately in a chemotherapy waste container. The gowns used should be disposable gowns made of polyethylene-coated polypropylene. When wearing gowns, individuals should make sure that the gowns are closed and have long sleeves. When preparation is done, the final product should be completely sealed in a plastic bag. The health care worker should also wipe all waste containers inside the ventilated cabinet before removing them from the cabinet. Finally, workers should remove all protective wear and put them in a bag for their disposal inside the ventilated cabinet. Administration Drugs should only be administered using protective medical devices such as needle lists and closed systems and techniques such as priming of IV tubing by pharmacy personnel inside a ventilated cabinet. Workers should always wear personal protective equipment such as double gloves, goggles, and protective gowns when opening the outer bag and assembling the delivery system to deliver the drug to the patient, and when disposing of all material used in the administration of the drugs. Hospital workers should never remove tubing from an IV bag that contains an antineoplastic drug, and when disconnecting the tubing in the system, they should make sure the tubing has been thoroughly flushed. After removing the IV bag, the workers should place it together with other disposable items directly in the yellow chemotherapy waste container with the lid closed. Protective equipment should be removed and put into a disposable chemotherapy waste container. After this has been done, one should double bag the chemotherapy waste before or after removing one's inner gloves. Moreover, one must always wash one's hands with soap and water before leaving the drug administration site. 
Employee training All employees whose jobs in health care facilities expose them to hazardous drugs must receive training. Training should include shipping and receiving personnel, housekeepers, pharmacists, assistants, and all individuals involved in the transportation and storage of antineoplastic drugs. These individuals should receive information and training to inform them of the hazards of the drugs present in their areas of work. They should be informed and trained on operations and procedures in their work areas where they can encounter hazards, different methods used to detect the presence of hazardous drugs and how the hazards are released, and the physical and health hazards of the drugs, including their reproductive and carcinogenic hazard potential. Additionally, they should be informed and trained on the measures they should take to avoid and protect themselves from these hazards. This information ought to be provided when health care workers come into contact with the drugs, that is, perform the initial assignment in a work area with hazardous drugs. Moreover, training should also be provided when new hazards emerge as well as when new drugs, procedures, or equipment are introduced. Housekeeping and waste disposal When performing cleaning and decontaminating the work area where antineoplastic drugs are used, one should make sure that there is sufficient ventilation to prevent the buildup of airborne drug concentrations. When cleaning the work surface, hospital workers should use deactivation and cleaning agents before and after each activity as well as at the end of their shifts. Cleaning should always be done using double protective gloves and disposable gowns. After employees finish up cleaning, they should dispose of the items used in the activity in a yellow chemotherapy waste container while still wearing protective gloves. After removing the gloves, they should thoroughly wash their hands with soap and water. Anything that comes into contact or has a trace of the antineoplastic drugs, such as needles, empty vials, syringes, gowns, and gloves, should be put in the chemotherapy waste container. Spill control A written policy needs to be in place in case of a spill of antineoplastic products. The policy should address the possibility of various sizes of spills as well as the procedure and personal protective equipment required for each size. A trained worker should handle a large spill and always dispose of all cleanup materials in the chemical waste container according to EPA regulations, not in a yellow chemotherapy waste container. Occupational monitoring A medical surveillance program must be established. In case of exposure, occupational health professionals need to ask for a detailed history and do a thorough physical exam. They should test the urine of the potentially exposed worker by doing a urine dipstick or microscopic examination, mainly looking for blood, as several antineoplastic drugs are known to cause bladder damage. Urinary mutagenicity is a marker of exposure to antineoplastic drugs that was first used by Falck and colleagues in 1979 and uses bacterial mutagenicity assays. Apart from being nonspecific, the test can be influenced by extraneous factors such as dietary intake and smoking and is, therefore, used sparingly. 
However, the test played a significant role in changing the use of horizontal flow cabinets to vertical flow biological safety cabinets during the preparation of antineoplastic drugs because the former exposed health care workers to high levels of drugs. This changed the handling of drugs and effectively reduced workers' exposure to antineoplastic drugs. Biomarkers of exposure to antineoplastic drugs commonly include urinary platinum, methotrexate, urinary cyclophosphamide and ifosfamide, and urinary metabolite of 5-fluorouracil. In addition to this, there are other drugs used to measure the drugs directly in the urine, although they are rarely used. A measurement of these drugs directly in one's urine is a sign of high exposure levels and that an uptake of the drugs is happening either through inhalation or dermally. Available agents There is an extensive list of antineoplastic agents. Several classification schemes have been used to subdivide
Chemotherapy drugs that kill rapidly dividing cells or blood cells can reduce the number of platelets in the blood, which can result in bruises and bleeding. Extremely low platelet counts may be temporarily boosted through platelet transfusions, and new drugs to increase platelet counts during chemotherapy are being developed. Sometimes, chemotherapy treatments are postponed to allow platelet counts to recover. Fatigue may be a consequence of the cancer or its treatment, and can last for months to years after treatment. One physiological cause of fatigue is anemia, which can be caused by chemotherapy, surgery, radiotherapy, primary and metastatic disease or nutritional depletion. Aerobic exercise has been found to be beneficial in reducing fatigue in people with solid tumours. Nausea and vomiting Nausea and vomiting are two of the most feared cancer treatment-related side-effects for people with cancer and their families. In 1983, Coates et al. found that people receiving chemotherapy ranked nausea and vomiting as the first and second most severe side-effects, respectively. Up to 20% of people receiving highly emetogenic agents in this era postponed, or even refused, potentially curative treatments. Chemotherapy-induced nausea and vomiting (CINV) are common with many treatments and some forms of cancer. Since the 1990s, several novel classes of antiemetics have been developed and commercialized, becoming a nearly universal standard in chemotherapy regimens and helping to successfully manage these symptoms in many people. Effective management of these unpleasant and sometimes crippling symptoms results in increased quality of life for the recipient and more efficient treatment cycles, with fewer interruptions of treatment owing to better tolerance and better overall health. Hair loss Hair loss (alopecia) can be caused by chemotherapy that kills rapidly dividing cells; other medications may cause hair to thin. These are most often temporary effects: hair usually starts to regrow a few weeks after the last treatment, but sometimes with a change in color, texture, thickness or style. Sometimes hair has a tendency to curl after regrowth, resulting in "chemo curls." Severe hair loss occurs most often with drugs such as doxorubicin, daunorubicin, paclitaxel, docetaxel, cyclophosphamide, ifosfamide and etoposide. Permanent thinning or hair loss can result from some standard chemotherapy regimens. Chemotherapy-induced hair loss occurs by a non-androgenic mechanism, and can manifest as alopecia totalis, telogen effluvium, or, less often, alopecia areata. It is usually associated with systemic treatment due to the high mitotic rate of hair follicles, and is more reversible than androgenic hair loss, although permanent cases can occur. Chemotherapy induces hair loss in women more often than in men. Scalp cooling offers a means of preventing both permanent and temporary hair loss; however, concerns about this method have been raised.
Chemotherapies with high risk include procarbazine and other alkylating drugs such as cyclophosphamide, ifosfamide, busulfan, melphalan, chlorambucil, and chlormethine. Drugs with medium risk include doxorubicin and platinum analogs such as cisplatin and carboplatin. On the other hand, therapies with low risk of gonadotoxicity include plant derivatives such as vincristine and vinblastine, antibiotics such as bleomycin and dactinomycin, and antimetabolites such as methotrexate, mercaptopurine, and 5-fluorouracil. Female infertility by chemotherapy appears to be secondary to premature ovarian failure by loss of primordial follicles. This loss is not necessarily a direct effect of the chemotherapeutic agents, but could be due to an increased rate of growth initiation to replace damaged developing follicles. People may choose between several methods of fertility preservation prior to chemotherapy, including cryopreservation of semen, ovarian tissue, oocytes, or embryos. As more than half of cancer patients are elderly, this adverse effect is only relevant for a minority of patients. A study in France between 1999 and 2011 came to the result that embryo freezing before administration of gonadotoxic agents to females caused a delay of treatment in 34% of cases, and a live birth in 27% of surviving cases who wanted to become pregnant, with the follow-up time varying between 1 and 13 years. Potential protective or attenuating agents include GnRH analogs, where several studies have shown a protective effect in vivo in humans, but some studies show no such effect. Sphingosine-1-phosphate (S1P) has shown similar effect, but its mechanism of inhibiting the sphingomyelin apoptotic pathway may also interfere with the apoptosis action of chemotherapy drugs. In chemotherapy as a conditioning regimen in hematopoietic stem cell transplantation, a study of people conditioned with cyclophosphamide alone for severe aplastic anemia came to the result that ovarian recovery occurred in all women younger than 26 years at time of transplantation, but only in five of 16 women older than 26 years. Teratogenicity Chemotherapy is teratogenic during pregnancy, especially during the first trimester, to the extent that abortion usually is recommended if pregnancy in this period is found during chemotherapy. Second- and third-trimester exposure does not usually increase the teratogenic risk and adverse effects on cognitive development, but it may increase the risk of various complications of pregnancy and fetal myelosuppression. In males previously having undergone chemotherapy or radiotherapy, there appears to be no increase in genetic defects or congenital malformations in their children conceived after therapy. The use of assisted reproductive technologies and micromanipulation techniques might increase this risk. In females previously having undergone chemotherapy, miscarriage and congenital malformations are not increased in subsequent conceptions. However, when in vitro fertilization and embryo cryopreservation is practised between or shortly after treatment, possible genetic risks to the growing oocytes exist, and hence it has been recommended that the babies be screened. Peripheral neuropathy Between 30 and 40 percent of people undergoing chemotherapy experience chemotherapy-induced peripheral neuropathy (CIPN), a progressive, enduring, and often irreversible condition, causing pain, tingling, numbness and sensitivity to cold, beginning in the hands and feet and sometimes progressing to the arms and legs. 
Chemotherapy drugs associated with CIPN include thalidomide, epothilones, vinca alkaloids, taxanes, proteasome inhibitors, and the platinum-based drugs. Whether CIPN arises, and to what degree, is determined by the choice of drug, duration of use, the total amount consumed and whether the person already has peripheral neuropathy. Though the symptoms are mainly sensory, in some cases motor nerves and the autonomic nervous system are affected. CIPN often follows the first chemotherapy dose and increases in severity as treatment continues, but this progression usually levels off at completion of treatment. The platinum-based drugs are the exception; with these drugs, sensation may continue to deteriorate for several months after the end of treatment. Some CIPN appears to be irreversible. Pain can often be managed with drug or other treatment but the numbness is usually resistant to treatment. Cognitive impairment Some people receiving chemotherapy report fatigue or non-specific neurocognitive problems, such as an inability to concentrate; this is sometimes called post-chemotherapy cognitive impairment, referred to as "chemo brain" in popular and social media. Tumor lysis syndrome In particularly large tumors and cancers with high white cell counts, such as lymphomas, teratomas, and some leukemias, some people develop tumor lysis syndrome. The rapid breakdown of cancer cells causes the release of chemicals from the inside of the cells. Following this, high levels of uric acid, potassium and phosphate are found in the blood. High levels of phosphate induce secondary hypoparathyroidism, resulting in low levels of calcium in the blood. This causes kidney damage and the high levels of potassium can cause cardiac arrhythmia. Although prophylaxis is available and is often initiated in people with large tumors, this is a dangerous side-effect that can lead to death if left untreated. Organ damage Cardiotoxicity (heart damage) is especially prominent with the use of anthracycline drugs (doxorubicin, epirubicin, idarubicin, and liposomal doxorubicin). The cause of this is most likely due to the production of free radicals in the cell and subsequent DNA damage. Other chemotherapeutic agents that cause cardiotoxicity, but at a lower incidence, are cyclophosphamide, docetaxel and clofarabine. Hepatotoxicity (liver damage) can be caused by many cytotoxic drugs. The susceptibility of an individual to liver damage can be altered by other factors such as the cancer itself, viral hepatitis, immunosuppression and nutritional deficiency. The liver damage can consist of damage to liver cells, hepatic sinusoidal syndrome (obstruction of the veins in the liver), cholestasis (where bile does not flow from the liver to the intestine) and liver fibrosis. Nephrotoxicity (kidney damage) can be caused by tumor lysis syndrome and also due direct effects of drug clearance by the kidneys. Different drugs will affect different parts of the kidney and the toxicity may be asymptomatic (only seen on blood or urine tests) or may cause acute kidney injury. Ototoxicity (damage to the inner ear) is a common side effect of platinum based drugs that can produce symptoms such as dizziness and vertigo. Children treated with platinum analogues have been found to be at risk for developing hearing loss. Other side-effects Less common side-effects include red skin (erythema), dry skin, damaged fingernails, a dry mouth (xerostomia), water retention, and sexual impotence. Some medications can trigger allergic or pseudoallergic reactions. 
Specific chemotherapeutic agents are associated with organ-specific toxicities, including cardiovascular disease (e.g., doxorubicin), interstitial lung disease (e.g., bleomycin) and occasionally secondary neoplasm (e.g., MOPP therapy for Hodgkin's disease). Hand-foot syndrome is another side effect to cytotoxic chemotherapy. Nutritional problems are also frequently seen in cancer patients at diagnosis and through chemotherapy treatment. Research suggests that in children and young people undergoing cancer treatment, parenteral nutrition may help with this leading to weight gain and increased calorie and protein intake, when compared to enteral nutrition. Limitations Chemotherapy does not always work, and even when it is useful, it may not completely destroy the cancer. People frequently fail to understand its limitations. In one study of people who had been newly diagnosed with incurable, stage 4 cancer, more than two-thirds of people with lung cancer and more than four-fifths of people with colorectal cancer still believed that chemotherapy was likely to cure their cancer. The blood–brain barrier poses an obstacle to delivery of chemotherapy to the brain. This is because the brain has an extensive system in place to protect it from harmful chemicals. Drug transporters can pump out drugs from the brain and brain's blood vessel cells into the cerebrospinal fluid and blood circulation. These transporters pump out most chemotherapy drugs, which reduces their efficacy for treatment of brain tumors. Only small lipophilic alkylating agents such as lomustine or temozolomide are able to cross this blood–brain barrier. Blood vessels in tumors are very different from those seen in normal tissues. As a tumor grows, tumor cells furthest away from the blood vessels become low in oxygen (hypoxic). To counteract this they then signal for new blood vessels to grow. The newly formed tumor vasculature is poorly formed and does not deliver an adequate blood supply to all areas of the tumor. This leads to issues with drug delivery because many drugs will be delivered to the tumor by the circulatory system. Resistance Resistance is a major cause of treatment failure in chemotherapeutic drugs. There are a few possible causes of resistance in cancer, one of which is the presence of small pumps on the surface of cancer cells that actively move chemotherapy from inside the cell to the outside. Cancer cells produce high amounts of these pumps, known as p-glycoprotein, in order to protect themselves from chemotherapeutics. Research on p-glycoprotein and other such chemotherapy efflux pumps is currently ongoing. Medications to inhibit the function of p-glycoprotein are undergoing investigation, but due to toxicities and interactions with anti-cancer drugs their development has been difficult. Another mechanism of resistance is gene amplification, a process in which multiple copies of a gene are produced by cancer cells. This overcomes the effect of drugs that reduce the expression of genes involved in replication. With more copies of the gene, the drug can not prevent all expression of the gene and therefore the cell can restore its proliferative ability. Cancer cells can also cause defects in the cellular pathways of apoptosis (programmed cell death). As most chemotherapy drugs kill cancer cells in this manner, defective apoptosis allows survival of these cells, making them resistant. Many chemotherapy drugs also cause DNA damage, which can be repaired by enzymes in the cell that carry out DNA repair. 
Upregulation of these genes can overcome the DNA damage and prevent the induction of apoptosis. Mutations in genes that produce drug target proteins, such as tubulin, can occur which prevent the drugs from binding to the protein, leading to resistance to these types of drugs. Drugs used in chemotherapy can induce cell stress, which can kill a cancer cell; however, under certain conditions, cells stress can induce changes in gene expression that enables resistance to several types of drugs. In lung cancer, the transcription factor NFκB is thought to play a role in resistance to chemotherapy, via inflammatory pathways. Cytotoxics and targeted therapies Targeted therapies are a relatively new class of cancer drugs that can overcome many of the issues seen with the use of cytotoxics. They are divided into two groups: small molecule and antibodies. The massive toxicity seen with the use of cytotoxics is due to the lack of cell specificity of the drugs. They will kill any rapidly dividing cell, tumor or normal. Targeted therapies are designed to affect cellular proteins or processes that are utilised by the cancer cells. This allows a high dose to cancer tissues with a relatively low dose to other tissues. Although the side effects are often less severe than that seen of cytotoxic chemotherapeutics, life-threatening effects can occur. Initially, the targeted therapeutics were supposed to be solely selective for one protein. Now it is clear that there is often a range of protein targets that the drug can bind. An example target for targeted therapy is the BCR-ABL1 protein produced from the Philadelphia chromosome, a genetic lesion found commonly in chronic myelogenous leukemia and in some patients with acute lymphoblastic leukemia. This fusion protein has enzyme activity that can be inhibited by imatinib, a small molecule drug. Mechanism of action Cancer is the uncontrolled growth of cells coupled with malignant behaviour: invasion and metastasis (among other features). It is caused by the interaction between genetic susceptibility and environmental factors. These factors lead to accumulations of genetic mutations in oncogenes (genes that control the growth rate of cells) and tumor suppressor genes (genes that help to prevent cancer), which gives cancer cells their malignant characteristics, such as uncontrolled growth. In the broad sense, most chemotherapeutic drugs work by impairing mitosis (cell division), effectively targeting fast-dividing cells. As these drugs cause damage to cells, they are termed cytotoxic. They prevent mitosis by various mechanisms including damaging DNA and inhibition of the cellular machinery involved in cell division. One theory as to why these drugs kill cancer cells is that they induce a programmed form of cell death known as apoptosis. As chemotherapy affects cell division, tumors with high growth rates (such as acute myelogenous leukemia and the aggressive lymphomas, including Hodgkin's disease) are more sensitive to chemotherapy, as a larger proportion of the targeted cells are undergoing cell division at any time. Malignancies with slower growth rates, such as indolent lymphomas, tend to respond to chemotherapy much more modestly. Heterogeneic tumours may also display varying sensitivities to chemotherapy agents, depending on the subclonal populations within the tumor. Cells from the immune system also make crucial contributions to the antitumor effects of chemotherapy. 
For example, the chemotherapeutic drugs oxaliplatin and cyclophosphamide can cause tumor cells to die in a way that is detectable by the immune system (called immunogenic cell death), which mobilizes immune cells with antitumor functions. Chemotherapeutic drugs that cause immunogenic tumor cell death can make unresponsive tumors sensitive to immune checkpoint therapy. Other uses Some chemotherapy drugs are used in diseases other than cancer, such as autoimmune disorders and noncancerous plasma cell dyscrasias. In some cases they are used at lower doses, which means that the side effects are minimized, while in other cases doses similar to those used to treat cancer are used. Methotrexate is used in the treatment of rheumatoid arthritis (RA), psoriasis, ankylosing spondylitis and multiple sclerosis. The anti-inflammatory response seen in RA is thought to be due to increases in adenosine, which causes immunosuppression; effects on immuno-regulatory cyclooxygenase-2 enzyme pathways; reduction in pro-inflammatory cytokines; and anti-proliferative properties. Although methotrexate is used to treat both multiple sclerosis and ankylosing spondylitis, its efficacy in these diseases is still uncertain. Cyclophosphamide is sometimes used to treat lupus nephritis, a common symptom of systemic lupus erythematosus. Dexamethasone along with either bortezomib or melphalan is commonly used as a treatment for AL amyloidosis. Recently, bortezomib in combination with cyclophosphamide and dexamethasone has also shown promise as a treatment for AL amyloidosis. Other drugs used to treat myeloma such as lenalidomide have shown promise in treating AL amyloidosis. Chemotherapy drugs are also used in conditioning regimens prior to bone marrow transplant (hematopoietic stem cell transplant). Conditioning regimens are used to suppress the recipient's immune system in order to allow a transplant to engraft. Cyclophosphamide is a common cytotoxic drug used in this manner and is often used in conjunction with total body irradiation. Chemotherapeutic drugs may be used at high doses to permanently remove the recipient's bone marrow cells (myeloablative conditioning) or at lower doses that will prevent permanent bone marrow loss (non-myeloablative and reduced intensity conditioning). When used in a non-cancer setting, the treatment is still called "chemotherapy", and is often done in the same treatment centers used for people with cancer. Occupational exposure and safe handling In the 1970s, antineoplastic (chemotherapy) drugs were identified as hazardous, and the American Society of Health-System Pharmacists (ASHP) subsequently introduced the concept of hazardous drugs, publishing a recommendation on their handling in 1983. Federal guidance followed when the U.S. Occupational Safety and Health Administration (OSHA) first released its guidelines in 1986 and then updated them in 1996, 1999, and, most recently, 2006. The National Institute for Occupational Safety and Health (NIOSH) has since been assessing workplace exposure to these drugs. Occupational exposure to antineoplastic drugs has been linked to multiple health effects, including infertility and possible carcinogenic effects. A few cases have been reported in the NIOSH alert report, such as one in which a female pharmacist was diagnosed with papillary transitional cell carcinoma. 
Twelve years before the pharmacist was diagnosed with the condition, she had worked for 20 months in a hospital where she was responsible for preparing multiple antineoplastic drugs. The pharmacist did not have any other risk factors for cancer, and therefore her cancer was attributed to exposure to the antineoplastic drugs, although a cause-and-effect relationship has not been established in the literature. In another case, a malfunction in biosafety cabinetry is believed to have exposed nursing personnel to antineoplastic drugs. Investigations revealed evidence of genotoxic biomarkers two and nine months after that exposure. Routes of exposure Antineoplastic drugs are usually given through intravenous, intramuscular, intrathecal, or subcutaneous administration. In most cases, before the medication is administered to the patient, it needs to be prepared and handled by several workers. Any worker who is involved in handling, preparing, or administering the drugs, or with cleaning objects that have come into contact with antineoplastic drugs, is potentially exposed to hazardous drugs. Health care workers are exposed to drugs in different circumstances, such as when pharmacists and pharmacy technicians prepare and handle antineoplastic drugs and when nurses and physicians administer the drugs to patients. Additionally, those who are responsible for disposing of antineoplastic drugs in health care facilities are also at risk of exposure. Dermal exposure is thought to be the main route of exposure because significant amounts of the antineoplastic agents have been found on the gloves worn by healthcare workers who prepare, handle, and administer the agents. Another noteworthy route of exposure is inhalation of the drugs' vapors. Multiple studies have investigated inhalation as a route of exposure, and although air sampling has not shown any dangerous levels, it is still a potential route of exposure. Hand-to-mouth ingestion is a less likely route of exposure than the others because of the hygiene standards enforced in health institutions. However, it remains a potential route, especially in workplaces outside of health care institutions. Workers can also be exposed to these hazardous drugs through needle-stick injuries. Research in this area, based on evidence from multiple urine samples taken from health care workers, has established that occupational exposure does occur. Hazards Hazardous drugs expose health care workers to serious health risks. Many studies show that antineoplastic drugs can have many side effects on the reproductive system, such as fetal loss, congenital malformation, and infertility. Health care workers who are exposed to antineoplastic drugs on many occasions have experienced adverse reproductive outcomes such as spontaneous abortions, stillbirths, and congenital malformations. Moreover, studies have shown that exposure to these drugs leads to menstrual cycle irregularities. Antineoplastic drugs may also increase the risk of learning disabilities among children of health care workers who are exposed to these hazardous substances. Moreover, these drugs have carcinogenic effects. In the past five decades, multiple studies have shown the carcinogenic effects of exposure to antineoplastic drugs. Similarly, research studies have linked alkylating agents with the development of leukemias in humans. 
Studies have reported an elevated risk of breast cancer, nonmelanoma skin cancer, and cancer of the rectum among nurses who are exposed to these drugs. Other investigations have revealed a potential genotoxic effect of antineoplastic drugs on workers in health care settings. Safe handling in health care settings As of 2018, there were no occupational exposure limits set for antineoplastic drugs; that is, neither OSHA nor the American Conference of Governmental Industrial Hygienists (ACGIH) had set workplace safety guidelines. Preparation NIOSH recommends using a ventilated cabinet that is designed to decrease worker exposure. Additionally, it recommends training of all staff, the use of cabinets, carrying out an initial evaluation of the safety program's techniques, and wearing protective gloves and gowns when opening drug packaging, handling vials, or labeling. When wearing personal protective equipment, one should inspect gloves for physical defects before use and always wear double gloves and protective gowns. Health care workers are also required to wash their hands with water and soap before and after working with antineoplastic drugs, change gloves every 30 minutes or whenever punctured, and discard them immediately in a chemotherapy waste container. The gowns used should be disposable gowns made of polyethylene-coated polypropylene. When wearing gowns, individuals should make sure that the gowns are closed and have long sleeves. When preparation is done, the final product should be completely sealed in a plastic bag. The health care worker should also wipe all waste containers inside the ventilated cabinet before removing them from the cabinet. Finally, workers should remove all protective wear and place it in a bag for disposal inside the ventilated cabinet. Administration Drugs should only be administered using protective medical devices such as needleless and closed systems and techniques such as priming of IV tubing by pharmacy personnel inside a ventilated cabinet. Workers should always wear personal protective equipment such as double gloves, goggles, and protective gowns when opening the outer bag and assembling the delivery system to deliver the drug to the patient, and when disposing of all material used in the administration of the drugs. Hospital workers should never remove tubing from an IV bag that contains an antineoplastic drug, and when disconnecting the tubing
whose founder is able to rectify many of society's problems and begin the cycle anew. Over time, many people felt a full correction was not possible, and that the golden age of Yao and Shun could not be attained. This teleological theory implies that there can be only one rightful sovereign under heaven at a time. Thus, despite the fact that Chinese history has had many lengthy and contentious periods of disunity, a great effort was made by official historians to establish a legitimate precursor whose fall allowed a new dynasty to acquire its mandate. Similarly, regardless of the particular merits of individual emperors, founders would be portrayed in more laudatory terms, and the last ruler of a dynasty would always be castigated as depraved and unworthy – even when that was not the case. Such a narrative was employed after the fall of the empire by those compiling the history of the Qing, and by those who justified the attempted restorations of the imperial system by Yuan Shikai and Zhang Xun. Multi-ethnic history As early as the 1930s, the American scholar Owen Lattimore argued that China was the product of the interaction of farming and pastoral societies, rather than simply the expansion of the Han people. Lattimore did not accept the more extreme Sino-Babylonian theories that the essential elements of early Chinese technology and religion had come from Western Asia, but he was among the scholars to argue against the assumption they had all been indigenous. Both the Republic of China and the People's Republic of China hold the view that Chinese history should include all the ethnic groups of the lands held by the Qing dynasty during its territorial peak, with these ethnicities forming part of the Zhonghua minzu (Chinese nation). This view is in contrast with Han chauvinism promoted by the Qing-era Tongmenghui. This expanded view encompasses internal and external tributary lands, as well as conquest dynasties in the history of a China seen as a coherent multi-ethnic nation since time immemorial, incorporating and accepting the contributions and cultures of non-Han ethnicities. The acceptance of this view by ethnic minorities sometimes depends on their views on present-day issues. The 14th Dalai Lama, long insistent on Tibet's history being separate from that of China, conceded in 2005 that Tibet "is a part of" China's "5,000-year history" as part of a new proposal for Tibetan autonomy. Korean nationalists have virulently reacted against China's application to UNESCO for recognition of the Goguryeo tombs in Chinese territory. The absolute independence of Goguryeo is a central aspect of Korean identity, because, according to Korean legend, Goguryeo was independent of China and Japan, compared to subordinate states such as the Joseon dynasty and the Korean Empire. The legacy of Genghis Khan has been contested between China, Mongolia, and Russia, all three states having significant numbers of ethnic Mongols within their borders and holding territory that was conquered by the Khan. The Jin dynasty tradition of a new dynasty composing the official history for its preceding dynasty/dynasties has been seen to foster an ethnically inclusive interpretation of Chinese history. The compilation of official histories usually involved monumental intellectual labor. The Yuan and Qing dynasties, ruled by the Mongols and Manchus, faithfully carried out this practice, composing the official Chinese-language histories of the Han-ruled Song and Ming dynasties, respectively. 
Recent Western scholars have reacted against the ethnically inclusive narrative in Communist-sponsored history by writing revisionist histories of China such as the New Qing History that feature, according to James A. Millward, "a degree of 'partisanship' for the indigenous underdogs of frontier history". Scholarly interest in writing about Chinese minorities from non-Chinese perspectives is growing. Marxism Most Chinese history that is published in the People's Republic of China is based on a Marxist interpretation of history. These theories were first applied in the 1920s by Chinese scholars such as Guo Moruo, and became orthodoxy in academic study after 1949. The Marxist view of history is that history is governed by universal laws and that according to these laws, a society moves through a series of stages, with the transition between stages being driven by class struggle. These stages are: slave society, feudal society, capitalist society, socialist society, and the world communist society. The official historical view within the People's Republic of China associates each of these stages with a particular era in Chinese history: slave society – Xia to Shang; feudal society (decentralized) – Zhou to Sui; feudal society (bureaucratic) – Tang to the First Opium War; feudal society (semi-colonial) – First Opium War to the end of the Qing dynasty; capitalist society – Republican era; socialist society – PRC, 1949 to present. Because of the strength of the Chinese Communist Party and the importance of the Marxist interpretation of history in legitimizing its rule, it was for many years difficult for historians within the PRC to actively argue in favor of non-Marxist and anti-Marxist interpretations of history. However, this political restriction is less confining than it may first appear in that the Marxist historical framework is surprisingly flexible, and it is a rather simple matter to modify an alternative historical theory to use language that at least does not challenge the Marxist interpretation of history. Partly because of the interest of Mao Zedong, historians in the 1950s took a special interest in the role of peasant rebellions in Chinese history and compiled documentary histories to examine them. There are several problems associated with imposing Marx's European-based framework on Chinese history. First, slavery existed throughout China's history but never as the primary form of labor. While the Zhou and earlier dynasties may be labeled as feudal, later dynasties were much more centralized than the European counterparts Marx analyzed. To account for the discrepancy, Chinese Marxists invented the term "bureaucratic feudalism". The placement of the Tang as the beginning of the bureaucratic phase rests largely on the replacement of patronage networks with the imperial examination. Some world-systems analysts, such as Janet Abu-Lughod, claim that analysis of Kondratiev waves shows that capitalism first arose in Song dynasty China, although widespread trade was subsequently disrupted and then curtailed. The Japanese scholar Tanigawa Michio, writing in the 1970s and 1980s, set out to revise the generally Marxist views of China prevalent in post-war Japan. Tanigawa writes that historians in Japan fell into two schools. 
One held that China followed the set European pattern which Marxists thought to be universal; that is, from ancient slavery to medieval feudalism to modern capitalism; while another group argued that "Chinese society was extraordinarily saturated with stagnancy, as compared to the West" and assumed that China existed in a "qualitatively different historical world from Western society". That is, there is an argument between those who see "unilinear, monistic world history" and those who conceive of a "two-tracked or multi-tracked world history". Tanigawa reviewed the applications of these theories in Japanese writings about Chinese history and then tested them by analyzing the Six Dynasties period (220–589 CE), which Marxist historians saw as feudal. His conclusion was that China did not have feudalism in the sense that Marxists use the term, and that Chinese military governments did not lead to a European-style military aristocracy. The period established social and political patterns which shaped China's history from that point on. There was a gradual relaxation of Marxist interpretation after the death of Mao in 1976, a relaxation that accelerated after the Tian'anmen Square protest and other revolutions in 1989, which damaged Marxism's ideological legitimacy in the eyes of Chinese academics. 
Modernization This view of Chinese history sees Chinese society as a traditional society needing to become modern, usually with the implicit assumption of Western society as the model. Such a view was common amongst European and American historians during the 19th and early 20th centuries, but is now criticized as a Eurocentric viewpoint, since it permits an implicit justification for breaking the society from its static past and bringing it into the modern world under European direction. By the mid-20th century, it was increasingly clear to historians that the notion of "changeless China" was untenable. A new concept, popularized by John Fairbank, was the notion of "change within tradition", which argued that China did change in the pre-modern period but that this change existed within certain cultural traditions. This notion has also been subject to the criticism that to say "China has not changed fundamentally" is tautological, since it requires that one look for things that have not changed and then arbitrarily define those as fundamental. Nonetheless, studies seeing China's interaction with Europe as the driving force behind its recent history are still common. Such studies may consider the First Opium War as the starting point for China's modern period. Examples include the works of H.B. Morse, who wrote chronicles of China's international relations such as Trade and Relations of the Chinese Empire. In the 1950s, several of Fairbank's students argued that Confucianism was incompatible with modernity. Joseph Levenson, Mary C. Wright, and Albert Feuerwerker argued in effect that traditional Chinese values were a barrier to modernity and would have to be abandoned before China could make progress. Wright concluded, "The failure of the T'ung-chih [Tongzhi] Restoration demonstrated with a rare clarity that even in the most favorable circumstances there is no way in which an effective modern state can be grafted onto a Confucian society. Yet in the decades that followed, the political ideas that had been tested and, for all their grandeur, found wanting, were never given a decent burial." In a different view of modernization, the Japanese historian Naito Torajiro argued that China reached modernity during its mid-Imperial period, centuries before Europe. He believed that the reform of the civil service into a meritocratic system and the disappearance of the ancient Chinese nobility from the bureaucracy constituted a modern society. The problem with this approach is the subjective meaning of modernity.
people." The theory legitimized the entry of private business owners and bourgeois elements into the party. Hu Jintao, Jiang Zemin's successor as general secretary, took office in 2002. Unlike Mao, Deng and Jiang Zemin, Hu laid emphasis on collective leadership and opposed one-man dominance of the political system. The insistence on focusing on economic growth led to a wide range of serious social problems. To address these, Hu introduced two main ideological concepts: the Scientific Outlook on Development and Harmonious Socialist Society. Hu resigned from his post as CCP general secretary and Chairman of the CMC at the 18th National Congress held in 2012, and was succeeded in both posts by Xi Jinping. Since taking power, Xi has initiated a wide-reaching anti-corruption campaign, while centralizing powers in the office of CCP general secretary at the expense of the collective leadership of prior decades. Commentators have described the campaign as a defining part of Xi's leadership as well as "the principal reason why he has been able to consolidate his power so quickly and effectively." Foreign commentators have likened him to Mao. Xi's leadership has also overseen an increase in the Party's role in China. Xi has added his ideology, named after himself, into the CCP constitution in 2017. As has been speculated, Xi Jinping may not retire from his top posts after serving for 10 years in 2022. On 21 October 2020, the Subcommittee on International Human Rights (SDIR) of the Canadian House of Commons Standing Committee on Foreign Affairs and International Development condemned the persecution of Uyghurs and other Turkic Muslims in Xinjiang by the Government of China and concluded that the Chinese Communist Party's actions amount to genocide of the Uyghurs per the Genocide Convention. On 1 July 2021, the celebrations of the 100th anniversary of the CCP, one of the Two Centenaries, took place. More than 500 political parties participated in the CPC and World Political Parties Summit. Ideology It has been argued in recent years, mainly by foreign commentators, that the CCP does not have an ideology, and that the party organization is pragmatic and interested only in what works. The party itself, however, argues otherwise. For instance, Hu Jintao stated in 2012 that the Western world is "threatening to divide us" and that "the international culture of the West is strong while we are weak ... Ideological and cultural fields are our main targets". The CCP puts a great deal of effort into the party schools and into crafting its ideological message. Before the "" campaign, the relationship between ideology and decision-making was a deductive one, meaning that policy-making was derived from ideological knowledge. Under Deng this relationship was turned upside down, with decision-making justifying ideology and not the other way around. Lastly, Chinese policy-makers believe that the Soviet Union's state ideology was "rigid, unimaginative, ossified, and disconnected from reality" and that this was one of the reasons for the dissolution of the Soviet Union. They therefore believe that their party ideology must be dynamic to safeguard the party's rule. Main ideologies of the party have corresponded with distinct generations of Chinese leadership. As both the CCP and the People's Liberation Army promote according to seniority, it is possible to discern distinct generations of Chinese leadership. 
In official discourse, each generation of leadership is identified with a distinct extension of the ideology of the party. Historians have studied various periods in the development of the government of the People's Republic of China by reference to these "generations". Formal ideology Marxism–Leninism was the first official ideology of the CCP. According to the CCP, "Marxism–Leninism reveals the universal laws governing the development of history of human society." To the CCP, Marxism–Leninism provides a "vision of the contradictions in capitalist society and of the inevitability of a future socialist and communist societies". According to the People's Daily, Mao Zedong Thought "is Marxism–Leninism applied and developed in China". Mao Zedong Thought was conceived not only by Mao Zedong, but also by leading party officials. While non-Chinese analysts generally agree that the CCP has rejected orthodox Marxism–Leninism and Mao Zedong Thought (or at least basic thoughts within orthodox thinking), the CCP itself disagrees. Certain groups argue that Jiang Zemin ended the CCP's formal commitment to Marxism with the introduction of his ideological theory, the Three Represents. However, party theorist Leng Rong disagrees, claiming that "President Jiang rid the Party of the ideological obstacles to different kinds of ownership [...] He did not give up Marxism or socialism. He strengthened the Party by providing a modern understanding of Marxism and socialism—which is why we talk about a 'socialist market economy' with Chinese characteristics." The attainment of true "communism" is still described as the CCP's and China's "ultimate goal". While the CCP claims that China is in the primary stage of socialism, party theorists argue that the current development stage "looks a lot like capitalism". Alternatively, certain party theorists argue that "capitalism is the early or first stage of communism." Some have dismissed the concept of a primary stage of socialism as intellectual cynicism. According to Robert Lawrence Kuhn, a China analyst, "When I first heard this rationale, I thought it more comic than clever—a wry caricature of hack propagandists leaked by intellectual cynics. But the 100-year horizon comes from serious political theorists". Deng Xiaoping Theory was added to the party constitution at the 14th National Congress. The concepts of "socialism with Chinese characteristics" and "the primary stage of socialism" were credited to the theory. Deng Xiaoping Theory can be defined as a belief that state socialism and state planning are not by definition communist, and that market mechanisms are class neutral. In addition, the party needs to react to the changing situation dynamically; to know whether a certain policy is obsolete or not, the party has to "seek truth from facts" and follow the slogan "practice is the sole criterion for the truth". At the 14th National Congress, Jiang reiterated Deng's mantra that it was unnecessary to ask if something was socialist or capitalist, since the important factor was whether it worked. The "Three Represents", Jiang Zemin's contribution to the party's ideology, was adopted by the party at the 16th National Congress. The Three Represents defines the role of the CCP, and stresses that the Party must always represent the requirements for developing China's advanced productive forces, the orientation of China's advanced culture and the fundamental interests of the overwhelming majority of the Chinese people. 
Certain segments within the CCP criticized the Three Represents as being un-Marxist and a betrayal of basic Marxist values. Supporters viewed it as a further development of socialism with Chinese characteristics. Jiang disagreed; he had concluded that attaining the communist mode of production, as formulated by earlier communists, was more complex than had been realized, and that it was useless to try to force a change in the mode of production, as it had to develop naturally, by following the economic laws of history. The theory is most notable for allowing capitalists, officially referred to as the "new social strata", to join the party on the grounds that they engaged in "honest labor and work" and through their labour contributed "to build[ing] socialism with Chinese characteristics." The 3rd Plenary Session of the 16th Central Committee conceived and formulated the ideology of the Scientific Outlook on Development (SOD). It is considered to be Hu Jintao's contribution to the official ideological discourse. The SOD incorporates scientific socialism, sustainable development, social welfare, a humanistic society, increased democracy, and, ultimately, the creation of a Socialist Harmonious Society. According to official statements by the CCP, the concept integrates "Marxism with the reality of contemporary China and with the underlying features of our times, and it fully embodies the Marxist worldview on and methodology for development." Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era, commonly known as Xi Jinping Thought, was added to the party constitution at the 19th National Congress. Xi himself has described the thought as part of the broad framework created around socialism with Chinese characteristics. In official party documentation and pronouncements by Xi's colleagues, the thought is said to be a continuation of previous party ideologies as part of a series of guiding ideologies that embody "Marxism adapted to Chinese conditions" and contemporary considerations. The party combines elements of both socialist patriotism and Chinese nationalism. Economics Deng did not believe that the fundamental difference between the capitalist mode of production and the socialist mode of production was central planning versus free markets. He said, "A planned economy is not the definition of socialism, because there is planning under capitalism; the market economy happens under socialism, too. Planning and market forces are both ways of controlling economic activity". Jiang Zemin supported Deng's thinking, and stated in a party gathering that it did not matter if a certain mechanism was capitalist or socialist, because the only thing that mattered was whether it worked. It was at this gathering that Jiang Zemin introduced the term socialist market economy, which replaced Chen Yun's "planned socialist market economy". In his report to the 14th National Congress, Jiang Zemin told the delegates that the socialist state would "let market forces play a basic role in resource allocation." At the 15th National Congress, the party line was changed to "make market forces further play their role in resource allocation"; this line continued until the 3rd Plenary Session of the 18th Central Committee, when it was amended to "let market forces play a decisive role in resource allocation." Despite this, the 3rd Plenary Session of the 18th Central Committee upheld the creed "Maintain the dominance of the public sector and strengthen the economic vitality of the State-owned economy." 
The CCP views the world as organized into two opposing camps: socialist and capitalist. The party insists that socialism, on the basis of historical materialism, will eventually triumph over capitalism. In recent years, when the party has been asked to explain the capitalist globalization occurring, the party has returned to the writings of Karl Marx. Despite admitting that globalization developed through the capitalist system, the party's leaders and theorists argue that globalization is not intrinsically capitalist. The reasoning is that if globalization were purely capitalist, it would exclude an alternative socialist form of modernity. Globalization, as with the market economy, therefore does not have one specific class character (neither socialist nor capitalist) according to the party. The insistence that globalization is not fixed in nature comes from Deng's insistence that China can pursue socialist modernization by incorporating elements of capitalism. Because of this, there is considerable optimism within the CCP that despite the current capitalist dominance of globalization, globalization can be turned into a vehicle supporting socialism. Governance Collective leadership Collective leadership, the idea that decisions will be taken through consensus, is the ideal in the CCP. The concept traces its origins to Vladimir Lenin and the Russian Bolshevik Party. At the level of the central party leadership this means that, for instance, all members of the Politburo Standing Committee are of equal standing (each member having only one vote). A member of the Politburo Standing Committee often represents a sector; during Mao's reign, Mao controlled the People's Liberation Army, Kang Sheng the security apparatus, and Zhou Enlai the State Council and the Ministry of Foreign Affairs. This counts as informal power. Paradoxically, members of a body are ranked hierarchically, despite being in theory equal to one another. Informally, the collective leadership is headed by a "leadership core"; that is, the paramount leader, the person who holds the offices of CCP general secretary, CMC chairman and PRC president. Before Jiang Zemin's tenure as paramount leader, the party core and collective leadership were indistinguishable. In practice, the core was not responsible to the collective leadership. However, by the time of Jiang, the party had begun propagating a responsibility system, referring to it in official pronouncements as the "core of the collective leadership". Democratic centralism The CCP's organizational principle is democratic centralism, which is based on two principles: democracy (synonymous in official discourse with "socialist democracy" and "inner-party democracy") and centralism. This has been the guiding organizational principle of the party since the 5th National Congress, held in 1927. In the words of the party constitution, "The Party is an integral body organized under its program and constitution and on the basis of democratic centralism". Mao once quipped that democratic centralism was "at once democratic and centralized, with the two seeming opposites of democracy and centralization united in a definite form." Mao claimed that the superiority of democratic centralism lay in its internal contradictions, between democracy and centralism, and freedom and discipline. Currently, the CCP claims that "democracy is the lifeline of the Party, the lifeline of socialism". 
But for democracy to be implemented and function properly, there needs to be centralization. The goal of democratic centralism was not to obliterate capitalism or its policies but instead to move towards regulating capitalism while involving socialism and democracy. Democracy in any form, the CCP claims, needs centralism, since without centralism there will be no order. According to Mao, democratic centralism "is centralized on the basis of democracy and democratic under centralized guidance. This is the only system that can give full expression to democracy with full powers vested in the people's congresses at all levels and, at the same time, guarantee centralized administration with the governments at each level exercising centralized management of all the affairs entrusted to them by the people's congresses at the corresponding level and safeguarding whatever is essential to the democratic life of the people". Shuanggui Shuanggui is an intra-party disciplinary process conducted by the Central Commission for Discipline Inspection (CCDI). This formally independent internal control institution conducts shuanggui on members accused of "disciplinary violations", a charge which generally refers to political corruption. The process, which literally translates to "double regulation", aims to extract confessions from members accused of violating party rules. According to the Dui Hua Foundation, tactics such as cigarette burns, beatings and simulated drowning are among those used to extract confessions. Other reported techniques include the use of induced hallucinations, with one subject of this method reporting that "In the end I was so exhausted, I agreed to all the accusations against me even though they were false." Multi-Party Cooperation System The Multi-Party Cooperation and Political Consultation System is led by the CCP in cooperation and consultation with the eight parties which make up the United Front. Consultation takes place under the leadership of the CCP, with mass organizations, the United Front parties, and "representatives from all walks of life". These consultations contribute, at least in theory, to the formation of the country's basic policy in the fields of political, economic, cultural and social affairs. The CCP's relationship with other parties is based on the principle of "long-term coexistence and mutual supervision, treating each other with full sincerity and sharing weal or woe." This process is institutionalized in the Chinese People's Political Consultative Conference (CPPCC). All the parties in the United Front support China's road to socialism, and hold steadfast to the leadership of the CCP. Despite all this, the CPPCC is a body without any real power. While discussions do take place, they are all supervised by the CCP. Organization Central organization The National Congress is the party's highest body, and, since the 9th National Congress in 1969, has been convened every five years (prior to the 9th Congress they were convened on an irregular basis). According to the party's constitution, a congress may not be postponed except "under extraordinary circumstances." The party constitution gives the National Congress six responsibilities: electing the Central Committee; electing the Central Commission for Discipline Inspection (CCDI); examining the report of the outgoing Central Committee; examining the report of the outgoing CCDI; discussing and enacting party policies; and revising the party's constitution. 
In practice, the delegates rarely discuss issues at length at the National Congresses. Most substantive discussion takes place before the congress, in the preparation period, among a group of top party leaders. In between National Congresses, the Central Committee is the highest decision-making institution. The CCDI
is responsible for supervising the party's internal anti-corruption and ethics system. In between congresses the CCDI is under the authority of the Central Committee. The Central Committee, as the party's highest decision-making institution between national congresses, elects several bodies to carry out its work. The first plenary session of a newly elected central committee elects the general secretary of the Central Committee, the party's leader; the Central Military Commission (CMC); the Politburo; the Politburo Standing Committee (PSC); and since 2013, the Central National Security Commission (CNSC). The first plenum also endorses the composition of the Secretariat and the leadership of the CCDI. According to the party constitution, the general secretary must be a member of the PSC, and is responsible for convening meetings of the PSC and the Politburo, while also presiding over the work of the Secretariat. The Politburo "exercises the functions and powers of the Central Committee when a plenum is not in session". The PSC is the party's highest decision-making institution when the Politburo, the Central Committee and the National Congress are not in session. It convenes at least once a week. It was established at the 8th National Congress, in 1958, to take over the policy-making role formerly assumed by the Secretariat. The Secretariat is the top implementation body of the Central Committee, and can make decisions within the policy framework established by the Politburo; it is also responsible for supervising the work of organizations that report directly to the Central Committee, for example departments, commissions, publications, and so on. 
The CMC is the highest decision-making institution on military affairs within the party, and controls the operations of the People's Liberation Army. The general secretary has, since Jiang Zemin, also served as Chairman of the CMC. Unlike the collective leadership ideal of other party organs, the CMC chairman acts as commander-in-chief with full authority to appoint or dismiss top military officers at will. The CNSC "co-ordinates security strategies across various departments, including intelligence, the military, foreign affairs and the police in order to cope with growing challenges to stability at home and abroad." The general secretary serves as the Chairman of the CNSC. A first plenum of the Central Committee also elects heads of departments, bureaus, central leading groups and other institutions to pursue its work during a term (a "term" being the period elapsing between national congresses, usually five years). The General Office is the party's "nerve centre", in charge of day-to-day administrative work, including communications, protocol, and setting agendas for meetings. The CCP currently has four main central departments: the Organization Department, responsible for overseeing provincial appointments and vetting cadres for future appointments; the Publicity Department (formerly the "Propaganda Department"), which oversees the media and formulates the party line to the media; the International Department, functioning as the party's "foreign affairs ministry" with other parties; and the United Front Work Department, which oversees work with the country's non-communist parties, mass organizations, and influence groups outside of the country. The CC also has direct control over the Central Policy Research Office, which is responsible for researching issues of significant interest to the party leadership; the Central Party School, which provides political training and ideological indoctrination in communist thought for high-ranking and rising cadres; the Party History Research Centre, which sets priorities for scholarly research in state-run universities and the Central Party School; and the Compilation and Translation Bureau, which studies and translates the classical works of Marxism. The party's newspaper, the People's Daily, is under the direct control of the Central Committee and is published with the objectives "to tell good stories about China and the (Party)" and to promote its party leader. The theoretical magazines Seeking Truth from Facts and Study Times are published by the Central Party School. The various offices of the "Central Leading Groups", such as the Hong Kong and Macau Affairs Office, the Taiwan Affairs Office, and the Central Finance Office, also report to the central committee during a plenary session. Lower-level organizations After seizing political power, the CCP extended the dual party-state command system to all government institutions, social organizations, and economic entities. The State Council and the Supreme Court have each had a party core group (党组) since November 1949. Party committees permeate every state administrative organ as well as the People's Consultation Conferences and mass organizations at all levels. Party committees exist inside companies, both private and state-owned. Modeled after the Soviet nomenklatura system, the party committee's organization department at each level has the power to recruit, train, monitor, appoint, and relocate these officials. 
Party committees exist at the level of provinces, cities, counties, and neighborhoods. These committees play a key role in directing local policy by selecting local leaders and assigning critical tasks. The Party secretary at each level is more senior than the corresponding head of government, with the CCP standing committee being the main source of power. Party committee members at each level are selected by the leadership at the level above, with provincial leaders selected by the central Organizational Department, and not removable by the local party secretary. In theory, however, party committees are elected by party congresses at their own level. Local party congresses are supposed to be held every five years, but under extraordinary circumstances they may be held earlier or postponed. However, that decision must be approved by the next higher level of the local party committee. The number of delegates and the procedures for their election are decided by the local party committee, but must also have the approval of the next higher party committee. A local party congress has many of the same duties as the National Congress, and it is responsible for examining the report of the local Party Committee at the corresponding level; examining the report of the local Commission for Discipline Inspection at the corresponding level; discussing and adopting resolutions on major issues in the given area; and electing the local Party Committee and the local Commission for Discipline Inspection at the corresponding level. Party committees of "a province, autonomous region, municipality directly under the central government, city divided into districts, or autonomous prefecture [are] elected for a term of five years", and include full and alternate members. The party committees "of a county (banner), autonomous county, city not divided into districts, or municipal district [are] elected for a term of five years", but full and alternate members "must have a Party standing of three years or more." If a local Party Congress is held before or after the given date, the term of the members of the Party Committee shall be correspondingly shortened or lengthened. Vacancies in a Party Committee shall be filled by alternate members according to the order of precedence, which is decided by the number of votes an alternate member received during his or her election. A Party Committee must convene for at least two plenary meetings a year. During its tenure, a Party Committee shall "carry out the directives of the next higher Party organizations and the resolutions of the Party congresses at the corresponding levels." The local Standing Committee (analogous to the Central Politburo) is elected at the first plenum of the corresponding Party Committee after the local party congress. A Standing Committee is responsible to the Party Committee at the corresponding level and the Party Committee at the next higher level. A Standing Committee exercises the duties and responsibilities of the corresponding Party Committee when it is not in session. Funding The funding of all CCP organizations mainly comes from state fiscal revenue. Data on the proportion of total CCP organizations' expenditures in China's total fiscal revenue are unavailable. However, small local governments in China occasionally release such data. For example, on 10 October 2016, the local government of Mengmao Township, Ruili City, Yunnan Province, released a concise fiscal revenue and expenditure report for the year 2014. 
According to this report, the township's fiscal revenue amounted to RMB 29,498,933.58, and CCP organizations' expenditures amounted to RMB 1,660,115.50; that is, 5.63% of fiscal revenue was used by the CCP for its own operations. This value is similar to the social security and employment expenditure of the whole town (RMB 1,683,064.90). Members To join the CCP, an applicant must go through an approval process. In 2014, only 2 million applications were accepted out of some 22 million applicants. Admitted applicants then spend a year as probationary members. In contrast to the past, when emphasis was placed on the applicants' ideological criteria, the current CCP stresses technical and educational qualifications. To become a probationary member, the applicant must take an admission oath before the party flag. The relevant CCP organization is responsible for observing and educating probationary members. Probationary members have duties similar to those of full members, with the exception that they may not vote in party elections or stand for election. Many join the CCP through the Communist Youth League. Under Jiang Zemin, private entrepreneurs were allowed to become party members. According to the CCP constitution, a member, in short, must follow orders, be disciplined, uphold unity, serve the Party and the people, and promote the socialist way of life. Members enjoy the privilege of attending Party meetings, reading relevant Party documents, receiving Party education, participating in Party discussions through the Party's newspapers and journals, making suggestions and proposals, making "well-grounded criticism of any Party organization or member at Party meetings" (even of the central party leadership), voting and standing for election, and of opposing and criticizing Party resolutions ("provided that they resolutely carry out the resolution or policy while it is in force"); and they have the ability "to put forward any request, appeal, or complaint to higher Party organizations, even up to the Central Committee, and ask the organizations concerned for a responsible reply." No party organization, including the CCP central leadership, can deprive a member of these rights. As of 30 June 2016, individuals who identify as farmers, herdsmen and fishermen make up 26 million members (30%); members identifying as workers totalled 7.2 million. Another group, the "Managing, professional and technical staff in enterprises and public institutions", made up 12.5 million; 9 million identified as administrative staff, and 7.4 million described themselves as party cadres. 22.3 million women (25%) are CCP members. The CCP currently has 95.14 million members, making it the second largest political party in the world after India's Bharatiya Janata Party. Women in China have low participation rates as political leaders. Women's disadvantage is most evident in their severe underrepresentation in the more powerful political positions. At the top level of decision making, no woman has ever been among the nine members of the Standing Committee of the Communist Party's Politburo. Just 3 of 27 government ministers are women, and importantly, since 1997, China has fallen to 53rd place from 16th in the world in terms of female representation at its parliament, the National People's Congress, according to the Inter-Parliamentary Union. Party leaders such as Zhao Ziyang have vigorously opposed the participation of women in the political process. Within the party, women face a glass ceiling. 
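The Mengmao Township share quoted above follows directly from the two reported amounts. As a quick check, here is a minimal Python sketch (the RMB figures are taken from the 2014 report cited above; the variable names are illustrative, not part of any source) that simply divides party-organization expenditure by total fiscal revenue:

```python
# Quick check of the Mengmao Township (2014) figures cited above.
# Amounts in RMB come from the report quoted in the text; names are illustrative.
fiscal_revenue = 29_498_933.58        # total fiscal revenue of the township
party_expenditure = 1_660_115.50      # expenditures of CCP organizations

share = party_expenditure / fiscal_revenue
print(f"CCP organizations' share of fiscal revenue: {share:.2%}")
# Prints approximately 5.63%, matching the percentage given in the text.
```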
Communist Youth League The Communist Youth League (CYL) is the CCP's youth wing, and the largest mass organization for youth in China. According to the CCP's constitution, the CYL is a "mass organization of advanced young people under the leadership of the Communist Party of China; it functions as a party school where a large number of young people learn about socialism with Chinese characteristics and about communism through practice; it is the Party's assistant and reserve force." To join, an applicant has to be between the ages of 14 and 28. It controls and supervises the Young Pioneers, a youth organization for children below the age of 14. The organizational structure of the CYL is an exact copy of the CCP's; the highest body is the National Congress, followed by the Central Committee, the Politburo and the Politburo Standing Committee. However, the Central Committee (and all central organs) of the CYL work under the guidance of the CCP central leadership. This creates the peculiar situation in which CYL bodies are responsible both to higher bodies within the CYL and to the CCP, a distinct organization. As of the 17th National Congress (held in 2013), the CYL had 89 million members. Symbols According to Article 53 of the CCP constitution, "the Party emblem and flag are the symbol and sign of the Communist Party of China." At the beginning of its history, the CCP did not have a single official standard for the flag, but instead allowed individual party committees to copy the flag of the Communist Party of the Soviet Union. On 28 April 1942, the Central Politburo decreed the establishment of a sole official flag. "The flag of the CPC has the length-to-width proportion of 3:2 with a hammer and sickle in the upper-left corner, and with no five-pointed star. The Political Bureau authorizes the General Office to custom-make a number of standard flags and distribute them to all major organs". According to People's Daily, "The standard party flag is 120 centimeters (cm) in length and 80 cm in width. In the center of the upper-left corner (a quarter of the length and width to the border) is a yellow hammer-and-sickle 30 cm in diameter. The flag sleeve (pole hem) is in white and 6.5 cm in width. The dimension of the pole hem is not included in the measure of the flag. The red color symbolizes revolution; the hammer-and-sickle are tools of workers and peasants, meaning that the Communist Party of China represents the interests of the masses and the people; the yellow color signifies brightness." In total the flag is produced in five sizes: "no. 1: 388 cm in length and 192 cm in width; no. 2: 240 cm in length and 160 cm in width; no. 3: 192 cm in length and 128 cm in width; no. 4: 144 cm in length and 96 cm in width; no. 5: 96 cm in length and 64 cm in width." On 21 September 1966, the CCP General Office issued "Regulations on the Production and Use of the CCP Flag and Emblem", which stated that the emblem and flag were the official symbols and signs of the party. Party-to-party relations The International Liaison Department of the CCP is responsible for dialogue with global political parties. Communist parties The CCP continues to have relations with non-ruling communist and workers' parties and attends international communist conferences, most notably the International Meeting of Communist and Workers' Parties. Delegates of foreign communist parties still visit China; in 2013, for instance, the General Secretary of the Portuguese Communist Party (PCP), Jeronimo de Sousa, personally met with Liu Qibao, a member of the Central Politburo.
In another instance, Pierre Laurent, the National Secretary of the French Communist Party (PCF), met with Liu Yunshan, a Politburo Standing Committee member. In 2014, Xi Jinping, the CCP general secretary, personally met with Gennady Zyuganov, the First Secretary of the Communist Party of the Russian Federation (CPRF), to discuss party-to-party relations. While the CCP retains contact with major parties such as the Communist Party of Portugal, the Communist Party of France, the Communist Party of the Russian Federation, the Communist Party of Bohemia and Moravia, the Communist Party of Brazil, the Communist Party of Greece, the Communist Party of Nepal and the Communist Party of Spain, the party also retains relations with minor communist and workers' parties, such as the Communist Party of Australia, the Workers Party of Bangladesh, the Communist Party of Bangladesh (Marxist–Leninist) (Barua), the Communist Party of Sri Lanka, the Workers' Party of Belgium, the Hungarian Workers' Party, the Dominican Workers' Party, the Nepal Workers Peasants Party, and the Party for the Transformation of Honduras, for instance. In recent years, observing the self-reform of the European social democratic movement in the 1980s and 1990s, the CCP "has noted the increased marginalization of West European communist parties." Ruling parties of socialist states According to David Shambaugh, the CCP has retained close relations with the remaining socialist states still espousing communism: Cuba, Laos, and Vietnam and their respective ruling parties, as well as North Korea and its ruling party; North Korea officially removed all mentions of communism from its constitution in 2009. The CCP spends a fair amount of time analyzing the situation in the remaining socialist states, trying to reach conclusions as to why these states survived when so many did not, following the collapse of the Eastern European socialist states in 1989 and the dissolution of the Soviet Union in 1991. In general, the analyses of the remaining socialist states and their chances of survival have been positive, and the CCP believes that the socialist movement will be revitalized sometime in the future. The ruling party in which the CCP is most interested is the Communist Party of Vietnam (CPV). In general, the CPV is considered a model example of socialist development in the post-Soviet era. Chinese analysts of Vietnam believe that the introduction of the Doi Moi reform policy at the 6th CPV National Congress was the key reason for Vietnam's current success. While the CCP is probably the organization with the most access to North Korea, writing about North Korea is tightly circumscribed. The few reports accessible to the general public are those about North Korean economic reforms. While Chinese analysts of North Korea tend to speak positively of North Korea in public, in official discussions around 2008 they showed much disdain for North Korea's economic system, the cult of personality which pervades society, the Kim family, the idea of hereditary succession in a socialist state, the security state, the use of scarce resources on the Korean People's Army and the general impoverishment of the North Korean people. Around 2008, some analysts compared the situation in North Korea with that of China during the Cultural Revolution. Over the years, the CCP has tried to persuade the Workers' Party of Korea (or WPK, North Korea's ruling party) to introduce economic reforms by showing them key economic infrastructure in China.
For instance, in 2006 the CCP invited the WPK general secretary Kim Jong-il to Guangdong to showcase the success economic reforms have brought China. In general, the CCP considers the WPK and North Korea to be negative examples of a communist ruling party and socialist state. There is a considerable degree of interest in Cuba within the CCP. Fidel Castro, the former First Secretary of the Communist Party of Cuba (PCC), is greatly admired, and books have been written focusing on the successes of the Cuban Revolution. Communication between the CCP and the PCC has increased since the 1990s. At the 4th Plenary Session of the 16th Central Committee, which
as Celsius, which measures from the freezing point of water at sea level, or Fahrenheit, which measures from the freezing point of a particular brine solution at sea level. Definitions and distinctions Cryogenics: the branches of engineering that involve the study of very low temperatures, how to produce them, and how materials behave at those temperatures. Cryobiology: the branch of biology involving the study of the effects of low temperatures on organisms (most often for the purpose of achieving cryopreservation). Cryoconservation of animal genetic resources: the conservation of genetic material with the intention of conserving a breed. Cryosurgery: the branch of surgery applying cryogenic temperatures to destroy diseased tissue, e.g. cancer cells. Cryoelectronics: the study of electronic phenomena at cryogenic temperatures; examples include superconductivity and variable-range hopping. Cryonics: the cryopreservation of humans and animals with the intention of future revival. "Cryogenics" is sometimes erroneously used to mean "Cryonics" in popular culture and the press. Etymology The word cryogenics stems from Greek κρύος (cryos) – "cold" + γενής (genis) – "generating". Cryogenic fluids Cryogenic fluids, with their boiling points in kelvins (see the conversion sketch below). Industrial applications Liquefied gases, such as liquid nitrogen and liquid helium, are used in many cryogenic applications. Liquid nitrogen is the most commonly used element in cryogenics and is legally purchasable around the world. Liquid helium is also commonly used and allows for the lowest attainable temperatures to be reached. These liquids may be stored in Dewar flasks, which are double-walled containers with a high vacuum between the walls to reduce heat transfer into the liquid. Typical laboratory Dewar flasks are spherical, made of glass and protected in a metal outer container. Dewar flasks for extremely cold liquids such as liquid helium have another double-walled container filled with liquid nitrogen. Dewar flasks are named after their inventor, James Dewar, the man who first liquefied hydrogen. Thermos bottles are smaller vacuum flasks fitted in a protective casing. Cryogenic barcode labels are used to mark Dewar flasks containing these liquids, and will not frost over at temperatures down to −195 degrees Celsius. Cryogenic transfer pumps are the pumps used on LNG piers to transfer liquefied natural gas from LNG carriers to LNG storage tanks, as are cryogenic valves. Cryogenic processing The field of cryogenics advanced during World War II when scientists found that metals frozen to low temperatures showed more resistance to wear. Based on this theory of cryogenic hardening, the commercial cryogenic processing industry was founded in 1966 by Ed Busch. With a background in the heat treating industry, Busch founded a company in Detroit called CryoTech in 1966, which merged with 300 Below in 1999 to become the world's largest and oldest commercial cryogenic processing company. Busch originally experimented with the possibility of increasing the life of metal tools to anywhere between 200% and 400% of the original life expectancy using cryogenic tempering instead of heat treating. This evolved in the late 1990s into the treatment of other parts. Cryogens, such as liquid nitrogen, are further used for specialty chilling and freezing applications. Some chemical reactions, like those used to produce the active ingredients for the popular statin drugs, must occur at low temperatures of approximately .
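Since the boiling-point table for cryogenic fluids survives above only as a caption, the following minimal sketch illustrates the kelvin scale used there. The listed boiling points are commonly cited approximate values at atmospheric pressure (assumptions for illustration, apart from the roughly 77 K figure for liquid nitrogen mentioned later in this article); the conversion to degrees Celsius is exact.

```python
# Minimal sketch: approximate boiling points of common cryogenic fluids
# (at atmospheric pressure) in kelvins, converted to degrees Celsius.
# The numeric values are commonly cited approximations, not data from this article.

APPROX_BOILING_POINTS_K = {
    "helium": 4.2,
    "hydrogen": 20.3,
    "nitrogen": 77.4,
    "oxygen": 90.2,
    "methane (LNG)": 111.7,
}

def kelvin_to_celsius(t_kelvin: float) -> float:
    """Convert a temperature from kelvins to degrees Celsius."""
    return t_kelvin - 273.15

for fluid, t_k in APPROX_BOILING_POINTS_K.items():
    print(f"{fluid:>14}: {t_k:6.1f} K = {kelvin_to_celsius(t_k):7.1f} °C")
```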
Special cryogenic chemical reactors are used to remove reaction heat and provide a low-temperature environment. The freezing of foods and biotechnology products, like vaccines, requires nitrogen in blast freezing or immersion freezing systems. Certain soft or elastic materials become hard and brittle at very low temperatures, which makes cryogenic milling (cryomilling) an option for some materials that cannot easily be milled at higher temperatures. Cryogenic processing is not a substitute for heat treatment, but rather an extension of the heating–quenching–tempering cycle. Normally, when an item is quenched, the final temperature is ambient. The only reason for this is that most heat treaters do not have cooling equipment. There is nothing metallurgically significant about ambient temperature. The cryogenic process continues this action from ambient temperature down to . In most instances the cryogenic cycle is followed by a heat tempering procedure. Because not all alloys have the same chemical constituents, the tempering procedure varies according to the material's chemical composition, thermal history and/or a tool's particular service application. The entire process takes 3–4 days. Fuels Another use of cryogenics is cryogenic fuels for rockets with liquid hydrogen
at ambient pressure. Cheap metallic superconductors can be used for the coil wiring. So-called high-temperature superconducting compounds can be made to superconduct with the use of liquid nitrogen, which boils at around 77 K. Magnetic resonance imaging (MRI) is a complex application of NMR where the geometry of the resonances is deconvoluted and used to image objects by detecting the relaxation of protons that have been perturbed by a radio-frequency pulse in the strong magnetic field. This is most commonly used in health applications. In large cities, it is difficult to transmit power by overhead cables, so underground cables are used. Underground cables, however, heat up, and the resistance of the wire increases, leading to wasted power. Superconductors could be used to increase power throughput, although they would require cryogenic liquids such as nitrogen or helium to cool special alloy-containing cables. Several feasibility studies have been performed and the field is the subject of an agreement within the International Energy Agency. Cryogenic gases are used in transportation and storage of large masses of frozen food. When very large quantities of food must be transported to regions such as war zones or earthquake-hit areas, the food must be stored for a long time, so cryogenic food freezing is used. Cryogenic food freezing is also helpful for large-scale food-processing industries. Many infrared (forward-looking infrared) cameras require their detectors to be cryogenically cooled. Certain rare blood groups are stored at low temperatures, such as −165 °C, at blood banks. Cryogenic technology using liquid nitrogen and CO2 has been built into nightclub effect systems to create a chilling effect and white fog that can be illuminated with colored lights. Cryogenic cooling is used to cool the tool tip during machining in manufacturing processes, which increases tool life. Oxygen is used to perform several important functions in the steel manufacturing process. Many rockets use cryogenic gases as propellants. These include liquid oxygen, liquid hydrogen, and liquid methane. By freezing automobile or truck tires in liquid nitrogen, the rubber is made brittle and can be crushed into small particles, which can be reused in other products. Experimental research on certain physics phenomena, such as spintronics and magnetotransport properties, requires cryogenic temperatures for the effects to be observed. Certain vaccines must be stored at cryogenic temperatures. For example, the Pfizer–BioNTech COVID-19 vaccine must be stored at temperatures of . (See cold chain.) Production Cryogenic cooling of devices and material is usually achieved via the use of liquid nitrogen, liquid helium, or a mechanical cryocooler (which uses high-pressure helium lines). Gifford-McMahon cryocoolers, pulse tube cryocoolers and Stirling cryocoolers are in wide use with selection based on
television series The Outer Limits. Shortly afterward he received another Golden Satellite Award nomination for his work on the ensemble NBC Television movie Uprising opposite Jon Voight directed by Jon Avnet. Elwes had a recurring role in the final season (from 2001 to 2002) of Chris Carter's hit series The X-Files as FBI Assistant Director Brad Follmer. In 2004, he portrayed serial killer Ted Bundy in the A&E Network film The Riverman, which became one of the highest rated original movies in the network's history and garnered a prestigious BANFF Rockie Award nomination. The following year, Elwes played the young Karol Wojtyła in the CBS television film Pope John Paul II. The TV film was highly successful not only in North America but also in Europe, where it broke box office records in the late Pope's native Poland and became the first film ever to break $1 million (GBP 739,075.00 current) in three days. In 2007, he made a guest appearance on the Law & Order: Special Victims Unit episode "Dependent" as a Mafia lawyer. In 2009, he played the role of Pierre Despereaux, an international art thief, in the fourth-season premiere of Psych. In 2010, he returned to Psych, reprising his role in the second half of the fifth season, again in the show's sixth season, and again in the show's eighth season premiere. In 2014, Elwes played Hugh Ashmeade, Director of the CIA, in the second season of the BYUtv series Granite Flats. In May 2015, Elwes was cast as Arthur Davenport, a shrewd and eccentric world-class collector of illegal art and antiquities in Crackle's first streaming network series drama, The Art of More, which explored the cutthroat world of premium auction houses. The series debuted on 19 November and was picked up for a second season. In April 2018 Elwes portrayed Larry Kline, Mayor of Hawkins, for the third season of the Netflix series Stranger Things, which premiered in July 2019. In May 2019, it was announced that he would be joining the third season of the Amazon series The Marvelous Mrs. Maisel as Gavin Hawk. Voice-over work Elwes's voice-over work includes the narrator in James Patterson's audiobook The Jester, as well as characters in film and television animations such as Quest for Camelot, Pinky and The Brain, Batman Beyond, and the English versions of the Studio Ghibli films, Porco Rosso, Whisper of the Heart and The Cat Returns. For the 2004 video game The Bard's Tale, he served as screenwriter, improviser, and voice actor of the main character The Bard. In 2009, Elwes reunited with Jason Alexander for the Indian film, Delhi Safari. The following year Elwes portrayed the part of Gremlin Gus in Disney's video game, Epic Mickey 2: The Power of Two. In 2014, he appeared in Cosmos: A Spacetime Odyssey as the voice of scientists Edmond Halley and Robert Hooke. Motion capture work In 2009 Elwes joined the cast of Robert Zemeckis's motion capture adaptation of Charles Dickens' A Christmas Carol portraying five roles. That same year he was chosen by Steven Spielberg to appear in his motion capture adaptation of Belgian artist Hergé's popular comic strip The Adventures of Tintin: The Secret of the Unicorn. Theatre In 2003 Elwes portrayed Kerry Max Cook in the off-Broadway play The Exonerated in New York, directed by Bob Balaban (18–23 March 2003). Literature In October 2014 Touchstone (Simon & Schuster) published Elwes's memoir of the making of The Princess Bride, entitled As You Wish: Inconceivable Tales from the Making of The Princess Bride, which Elwes co-wrote with Joe Layden. 
The book featured never-before-told stories, exclusive behind-the-scenes photographs, and interviews with co-stars Robin Wright, Wallace Shawn, Billy Crystal, Christopher Guest, Fred Savage and Mandy Patinkin, as well as author and screenwriter William Goldman, producer Norman Lear, and director Rob Reiner. The book debuted on The New York Times Best Seller list. Other projects In 2014, Elwes co-wrote the screenplay for a film entitled Elvis & Nixon, about the pair's famous meeting at the White House in 1970. The film, which starred Michael Shannon and Kevin Spacey, was bought by Amazon as their first theatrical feature and was released on 22 April 2016. Lawsuit In August 2005, Elwes filed a lawsuit against Evolution Entertainment, his management firm and producer
film producer John Houseman for Tim Robbins in his ensemble film based on Orson Welles's musical, Cradle Will Rock. Following that, he travelled to Luxembourg to work with John Malkovich and Willem Dafoe in Shadow of the Vampire. In 2001, he co-starred in Peter Bogdanovich's ensemble film The Cat's Meow portraying movie mogul Thomas Ince, who died mysteriously while vacationing with William Randolph Hearst on his yacht. In 2004, Elwes starred in the horror–thriller Saw which, at a budget of a little over $1 million, grossed over $100 million worldwide. The same year he appeared in Ella Enchanted, this time as the villain, not the hero. He made an uncredited appearance as Sam Green, the man who introduced Andy Warhol to Edie Sedgwick, in the 2006 film Factory Girl. In 2007, he appeared in Garry Marshall's Georgia Rule opposite Jane Fonda. In 2010, he returned to the Saw franchise in Saw 3D (2010), the seventh film in the series, as Dr. Lawrence Gordon. In 2011, he was selected by Ivan Reitman to star alongside Natalie Portman in No Strings Attached. That same year, Elwes and Garry Marshall teamed up again in the ensemble romantic comedy New Year's Eve opposite Robert De Niro and Halle Berry. In 2012, Elwes starred in the independent drama The Citizen, and the following year he joined Selena Gomez in the comedy ensemble Behaving Badly, directed by Tim Garrick. In 2015, he completed Sugar Mountain, directed by Richard Gray; the drama We Don't Belong Here, opposite Anton Yelchin and Catherine Keener, directed by Peer Pedersen; and Being Charlie, which reunited Elwes with director Rob Reiner after 28 years and premiered at the Toronto International Film Festival. In 2016, Elwes starred opposite Penelope Cruz in Fernando Trueba's Spanish-language period film The Queen of Spain, a sequel to Trueba's 1998 drama The Girl of Your Dreams. This also reunited Elwes with his Princess Bride co-star, Mandy Patinkin. Television Elwes made his first television appearance in 1996 as David Lookner on Seinfeld. Two years later he played astronaut Michael Collins in the Golden Globe Award-winning HBO miniseries From the Earth to the Moon. The following year Elwes was nominated for a Golden Satellite Award for Best Performance by an Actor in a Mini-Series or Motion Picture Made for Television for his portrayal of Colonel James Burton in The Pentagon Wars, directed by Richard Benjamin. In 1999, he guest starred as Dr. John York in an episode of the television series The Outer Limits.
He was nominated for the Academy Award for Best Supporting Actor for his performance as Leon Shermer in Dog Day Afternoon (1975). Early life Christopher Sarandon Jr. was born and raised in Beckley, West Virginia, the son of restaurateurs Chris Sarandon and Cliffie (née Cardullias). His father, whose surname was originally "Sarondonedes", was born in Istanbul, Turkey, of Greek ancestry; his mother is also of Greek descent. Sarandon graduated from Woodrow Wilson High School in Beckley. He earned a degree in speech at West Virginia University. He earned his master's degree in theater from The Catholic University of America (CUA) in Washington, D.C. Career After graduation, he toured with numerous improvisational companies and became much involved with regional theatre, making his professional debut in the play The Rose Tattoo during 1965. In 1968, Sarandon moved to New York City, where he obtained his first television role as Dr. Tom Halverson for the series The Guiding Light (1973–1974). He appeared in the primetime television movies The Satan Murders (1974) and Thursday's Game before obtaining the role in Dog Day Afternoon (1975), a performance which earned him nominations for Best New Male Star of the Year at the Golden Globes and the Academy Award for Best Supporting Actor. Sarandon appeared in the Broadway play The Rothschilds and The Two Gentlemen of Verona, as well making regular appearances at numerous Shakespeare and George Bernard Shaw festivals in the United States and Canada. He also had a series of television roles, some of which (such as A Tale of Two Cities in 1980) corresponded to his affinity for the classics. He also had roles in the thriller movie Lipstick (1976) and as a demon in the movie The Sentinel (1977). To avoid being typecast in villainous roles, Sarandon accepted various roles of other types during the years to come, portraying the title role of Christ in the made-for-television movie The Day Christ Died (1980). He received accolades for his portrayal of Sydney Carton in a TV-movie version of A Tale of Two Cities (1980), co-starred with Dennis Hopper in the 1983 movie The Osterman Weekend, which was based on the Robert Ludlum novel of the same name, and co-starred with Goldie Hawn in the movie Protocol (1984). These were followed by another mainstream success as the vampire-next-door in the horror movie Fright Night (1985). He starred in the 1986 TV movie Liberty, which addressed the making of New York City's Statue of Liberty. He is best known in the film industry for his role as Prince Humperdinck in Rob Reiner's 1987 movie The Princess Bride, though he also has had supporting parts in other successful movies such as the original Child's Play (1988). In 1992, he played Joseph Curwen/Charles Dexter Ward in The Resurrected. He also played Jack Skellington, the main character of Tim Burton's animated Disney movie The Nightmare Before Christmas (1993), and has since reprised the role in other productions, including the Disney/Square video games Kingdom Hearts and Kingdom Hearts II and the Capcom sequel to the original movie, Oogie's Revenge. Sarandon also reprised his role
that make the characters in his films so interesting. He maintains that his intention is not to mock anyone, but to explore insular, perhaps obscure communities through his method of filmmaking. Together, Guest, his frequent writing partner Eugene Levy, and a small band of other actors have formed a loose repertory group, which appear across several films. These include Catherine O'Hara, Michael McKean, Parker Posey, Bob Balaban, Jane Lynch, John Michael Higgins, Harry Shearer, Jennifer Coolidge, Ed Begley, Jr., Jim Piddock, and Fred Willard. Guest and Levy write backgrounds for each of the characters and notecards for each specific scene, outlining the plot, and then leave it up to the actors to improvise the dialogue, which is supposed to result in a much more natural conversation than scripted dialogue would. Typically, everyone who appears in these movies receives the same fee and the same portion of profits. Guest had a guest voice-over role in the animated comedy series SpongeBob SquarePants as SpongeBob's cousin, Stanley. Guest again collaborated with Reiner in A Few Good Men (1992), appearing as Dr. Stone. In the 2000s, Guest appeared in the 2005 biographical musical Mrs Henderson Presents and in the 2009 comedy The Invention of Lying. He is also currently a member of the musical group The Beyman Bros, which he formed with childhood friend David Nichtern and Spinal Tap's current keyboardist C. J. Vanston. Their debut album Memories of Summer as a Child was released on January 20, 2009. In 2010, the United States Census Bureau paid $2.5 million to have a television commercial directed by Guest shown during television coverage of Super Bowl XLIV. Guest holds an honorary doctorate from and is a member of the board of trustees for Berklee College of Music in Boston. In 2013, Guest was the co-writer and producer of the HBO series Family Tree, in collaboration with Jim Piddock, a lighthearted story in the style he made famous in This is Spinal Tap, in which the main character, Tom Chadwick, inherits a box of curios from his great aunt, spurring interest in his ancestry. On August 11, 2015, Netflix announced that Mascots, a film directed by Guest and co-written with Jim Piddock, about the competition for the World Mascot Association championship's Gold Fluffy Award, would debut in 2016. Guest replayed his role as Count Tyrone Rugen in the Princess Bride Reunion on September 13, 2020. Family Guest became the 5th Baron Haden-Guest, of Great Saling, in the County of Essex, when his father died in 1996. He succeeded upon the ineligibility of his older half-brother, Anthony Haden-Guest, who was born prior to the marriage of his parents. According to an article in The Guardian, Guest attended the House of Lords regularly until the House of Lords Act 1999 barred most hereditary peers from their seats. In the article Guest remarked: Personal life Guest married actress Jamie Lee Curtis in 1984 at the home of their mutual friend, Rob Reiner. They have two adopted daughters: Annie (born 1986) and Ruby (born 1996), who is transgender. Because Guest's children are adopted, they cannot inherit the family barony under the terms of the letters patent that created it, though a 2004 Royal Warrant addressing the style of a peer's adopted children states that they can use courtesy titles. The current heir presumptive to the barony is Guest's younger brother, actor Nicholas Guest. As reported by Louis B. Hobson, "On film, Guest is a hilariously droll comedian. In person he is serious and almost dour." 
He quotes Guest as saying, "People want me to be funny all the time. They think I'm being funny no matter what I say or do and that's not the case. I rarely joke unless I'm in front of a camera. It's not what I am in real life. It's what I do for a living." Guest was played by Seth Green in the film A Futile and Stupid Gesture. Recurring cast members Guest has worked multiple times with certain actors, notably with frequent writing partner Eugene Levy, who has appeared in five of his projects. Other repeat collaborators of Guest include Fred Willard (7 projects); Michael McKean, Bob Balaban, and Ed Begley, Jr. (6 projects each); Parker Posey, Jim Piddock, Michael Hitchcock and Harry Shearer (5 projects each); Catherine O'Hara, Larry Miller, John Michael Higgins, Jane Lynch, and Jennifer Coolidge (4 projects each).
as 'the Hon. Christopher Haden-Guest'. This was his official style and name until he inherited the barony in 1996. 1990–present The experience of making This is Spinal Tap directly informed the second phase of his career. Starting in 1996, Guest began writing, directing, and acting in his own series of substantially improvised films. Many of them came to be definitive examples of what came to be known as "mockumentaries"—not a term Guest appreciates in describing his unusual approach to exploring the passions that make the characters in his films so interesting.
at HB Studio and also went to the Professional Children's School, in New York City, and made her professional theatre debut in a 1966 production of The Prime of Miss Jean Brodie, starring Tammy Grimes. Career Television Kane portrayed Simka Dahblitz-Gravas, wife of Latka Gravas (Andy Kaufman), on the American television series Taxi from 1981 to 1983. She received two Emmy Awards for her work in the series. In 1984, Kane appeared in episode 12, season 3 of Cheers as Amanda, an acquaintance of Diane Chambers from her time spent in a mental institution. Kane was a regular on the 1986 series All Is Forgiven, a regular on the 1990–1991 series American Dreamer, guest-starred on a 1994 episode of Seinfeld, a 1996 episode of Ellen and had a supporting role in the short-lived sitcom Pearl. In 1988, Kane appeared in the Cinemax Comedy Experiment Rap Master Ronnie: A Report Card alongside Jon Cryer and the Smothers Brothers. In January 2009, she appeared in the television series Two and a Half Men as the mother of Alan Harper's receptionist. In March 2010, Kane appeared in the television series Ugly Betty as Justin Suarez's acting teacher. In 2014, she had a recurring role in the TV series Gotham as Gertrude Kapelput, Oswald Cobblepot's (Penguin's) mother. In 2015, she was cast as Lillian Kaushtupper, the landlord to the title character of Netflix's series Unbreakable Kimmy Schmidt. She reprised the role in the television movie Kimmy vs the Reverend. In 2020, Kane was part of the ensemble cast of the Amazon show Hunters, which includes Al Pacino and Logan Lerman. Films Kane appeared in Carnal Knowledge (1971), The Last Detail (1973), Hester Street (1975), Dog Day Afternoon (1975), Annie Hall (1977), The World's Greatest Lover (1977), When a Stranger Calls (1979), Norman Loves Rose (1982), Transylvania 6-5000 (1985), The Princess Bride (1987), Scrooged (1988), in which Variety called her "unquestionably [the] pic's comic highlight"; Flashback (1989), with Dennis Hopper; and as a potential love interest for Steve Martin's character in My Blue Heaven (1990). In 1998, she played Mother Duck on the cartoon movie The First Snow of Winter. In 1999, she made a cameo in the Andy Kaufman biopic Man on the Moon as her Taxi character. At the 48th Academy Awards, Kane was nominated for an Academy Award for Best Actress for her role in the film Hester Street. Theatre She starred in the off-Broadway play Love, Loss, and What
Some history: B*-algebras and C*-algebras The term B*-algebra was introduced by C. E. Rickart in 1946 to describe Banach *-algebras that satisfy the condition: ||x*x|| = ||x||² for all x in the given B*-algebra. (B*-condition) This condition automatically implies that the *-involution is isometric, that is, ||x|| = ||x*||. Hence, ||x*x|| = ||x|| ||x*||, and therefore, a B*-algebra is also a C*-algebra. Conversely, the C*-condition implies the B*-condition. This is nontrivial, and can be proved without using the condition ||x|| = ||x*||. For these reasons, the term B*-algebra is rarely used in current terminology, and has been replaced by the term 'C*-algebra'. The term C*-algebra was introduced by I. E. Segal in 1947 to describe norm-closed subalgebras of B(H), namely, the space of bounded operators on some Hilbert space H. 'C' stood for 'closed'. In his paper, Segal defines a C*-algebra as a "uniformly closed, self-adjoint algebra of bounded operators on a Hilbert space". Structure of C*-algebras C*-algebras have a large number of properties that are technically convenient. Some of these properties can be established by using the continuous functional calculus or by reduction to commutative C*-algebras. In the latter case, we can use the fact that the structure of these is completely determined by the Gelfand isomorphism. Self-adjoint elements Self-adjoint elements are those of the form x = x*. The set of elements of a C*-algebra A of the form x*x forms a closed convex cone. This cone is identical to the elements of the form a², where a is self-adjoint. Elements of this cone are called non-negative (or sometimes positive, even though this terminology conflicts with its use for elements of ℝ). The set of self-adjoint elements of a C*-algebra A naturally has the structure of a partially ordered vector space; the ordering is usually denoted ≥. In this ordering, a self-adjoint element x of A satisfies x ≥ 0 if and only if the spectrum of x is non-negative, if and only if x = s*s for some s in A. Two self-adjoint elements a and b of A satisfy a ≥ b if a − b ≥ 0. This partially ordered subspace allows the definition of a positive linear functional on a C*-algebra, which in turn is used to define the states of a C*-algebra, which in turn can be used to construct the spectrum of a C*-algebra using the GNS construction. Quotients and approximate identities Any C*-algebra A has an approximate identity. In fact, there is a directed family {eλ}λ∈I of self-adjoint elements of A such that x eλ → x for every x in A, with 0 ≤ eλ ≤ eμ ≤ 1 whenever λ ≤ μ. In case A is separable, A has a sequential approximate identity. More generally, A will have a sequential approximate identity if and only if A contains a strictly positive element, i.e. a positive element h such that hAh is dense in A. Using approximate identities, one can show that the algebraic quotient of a C*-algebra by a closed proper two-sided ideal, with the natural norm, is a C*-algebra. Similarly, a closed two-sided ideal of a C*-algebra is itself a C*-algebra. Examples Finite-dimensional C*-algebras The algebra M(n, C) of n × n matrices over C becomes a C*-algebra if we consider matrices as operators on the Euclidean space, Cn, and use the operator norm ||·|| on matrices. The involution is given by the conjugate transpose. More generally, one can consider finite direct sums of matrix algebras. In fact, all C*-algebras that are finite dimensional as vector spaces are of this form, up to isomorphism. The self-adjoint requirement means finite-dimensional C*-algebras are semisimple, from which fact one can deduce the following theorem of Artin–Wedderburn type: Theorem.
A finite-dimensional C*-algebra, A, is canonically isomorphic to a finite direct sum A ≅ ⊕e Ae, where the sum runs over min A, the set of minimal nonzero self-adjoint central projections of A. Each C*-algebra, Ae, is isomorphic (in a noncanonical way) to the full matrix algebra M(dim(e), C). The finite family indexed on min A given by {dim(e)}e is called the dimension vector of A. This vector uniquely determines the isomorphism class of a finite-dimensional C*-algebra. In the language of K-theory, this vector is the positive cone of the K0 group of A. A †-algebra (or, more explicitly, a †-closed algebra) is the name occasionally used in physics for a finite-dimensional C*-algebra. The dagger, †, is used in the name because physicists typically use the symbol to denote a Hermitian adjoint, and are often not worried about the subtleties associated with an infinite number of dimensions. (Mathematicians usually use the asterisk, *, to denote the Hermitian adjoint.) †-algebras feature prominently in quantum mechanics, and especially quantum information science. An immediate generalization of finite-dimensional C*-algebras is the class of approximately finite-dimensional C*-algebras. C*-algebras of operators The prototypical example of a C*-algebra is the algebra B(H) of bounded (equivalently continuous) linear operators defined on a complex Hilbert space H; here x* denotes the adjoint operator of the operator x : H → H. In fact, every C*-algebra, A, is *-isomorphic to a norm-closed, adjoint-closed subalgebra of B(H) for a suitable Hilbert space, H; this is the content of the Gelfand–Naimark theorem. C*-algebras of compact operators Let H be a separable infinite-dimensional Hilbert space. The algebra
field of complex numbers, together with a map x ↦ x* for x in A with the following properties: It is an involution: for every x in A, (x*)* = x. For all x, y in A: (x + y)* = x* + y* and (xy)* = y*x*. For every complex number λ in C and every x in A: (λx)* = λ̄x*, where λ̄ is the complex conjugate of λ. For all x in A: ||x*x|| = ||x|| ||x*||. Remark. The first three identities say that A is a *-algebra. The last identity is called the C* identity and is equivalent to: ||x*x|| = ||x||², which is sometimes called the B*-identity. For history behind the names C*- and B*-algebras, see the history section above. The C*-identity is a very strong requirement. For instance, together with the spectral radius formula, it implies that the C*-norm is uniquely determined by the algebraic structure: ||x||² = ||x*x|| = sup{|λ| : x*x − λ1 is not invertible}. A bounded linear map, π : A → B, between C*-algebras A and B is called a *-homomorphism if π(xy) = π(x)π(y) for x and y in A, and π(x*) = π(x)* for x in A. In the case of C*-algebras, any *-homomorphism π between C*-algebras is contractive, i.e. bounded with norm ≤ 1. Furthermore, an injective *-homomorphism between C*-algebras is isometric. These are consequences of the C*-identity. A bijective *-homomorphism π is called a C*-isomorphism, in which case A and B are said to be isomorphic.
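As a short worked illustration of the claim in the history passage above, namely that the B*-condition automatically makes the involution isometric and hence yields the C*-condition, the easy direction of the equivalence can be written out in a few lines. This is a standard argument using only submultiplicativity of the Banach algebra norm, sketched here for convenience rather than quoted from the article:

```latex
% Sketch: the B*-condition \|x^*x\| = \|x\|^2 forces \|x^*\| = \|x\|,
% and hence the C*-condition \|x^*x\| = \|x\|\,\|x^*\|.
\[
\|x\|^2 \;=\; \|x^*x\| \;\le\; \|x^*\|\,\|x\|
\quad\Longrightarrow\quad \|x\| \le \|x^*\| ,
\]
\[
\text{and applying the same estimate to } x^* \text{ gives } \|x^*\| \le \|x^{**}\| = \|x\| ,
\]
\[
\text{so } \|x^*\| = \|x\| \quad\text{and}\quad \|x^*x\| = \|x\|^2 = \|x\|\,\|x^*\| .
\]
```

The converse direction (that the C*-condition implies the B*-condition) is the nontrivial one, as noted in the history passage.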
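To connect the abstract C*-identity with the finite-dimensional example M(n, C) discussed earlier, the following is a minimal numerical sketch (an illustration, not part of the original article) that checks the identity in its equivalent B*-form, ||x*x|| = ||x||², for random complex matrices under the operator norm, with the involution given by the conjugate transpose:

```python
# Minimal sketch: numerically check ||x*x|| = ||x||^2 in M(n, C),
# where ||.|| is the operator (spectral) norm and x* is the conjugate transpose.
import numpy as np

rng = np.random.default_rng(0)

def operator_norm(a: np.ndarray) -> float:
    """Largest singular value, i.e. the operator norm on M(n, C)."""
    return np.linalg.norm(a, ord=2)

for n in (2, 5, 10):
    x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    lhs = operator_norm(x.conj().T @ x)   # ||x*x||
    rhs = operator_norm(x) ** 2           # ||x||^2
    assert np.isclose(lhs, rhs), (lhs, rhs)
    print(f"n={n:2d}: ||x*x|| = {lhs:.6f}, ||x||^2 = {rhs:.6f}")
```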
of London. The borough is now one of London's leading business, financial and cultural centres, and its influence in entertainment and the arts contribute to its status as a major metropolitan centre. Its population is 386,710, making it the second largest London borough and fifteenth largest English district. The borough was formed in 1965 from the merger of the County Borough of Croydon with Coulsdon and Purley Urban District, both of which had been within Surrey. The local authority, Croydon London Borough Council, is now part of London Councils, the local government association for Greater London. The economic strength of Croydon dates back mainly to Croydon Airport which was a major factor in the development of Croydon as a business centre. Once London's main airport for all international flights to and from the capital, it was closed on 30 September 1959 due to the lack of expansion space needed for an airport to serve the growing city. It is now a Grade II listed building and tourist attraction. Croydon Council and its predecessor Croydon Corporation unsuccessfully applied for city status in 1954, 2000, 2002 and 2012. The area is currently going through a large regeneration project called Croydon Vision 2020 which is predicted to attract more businesses and tourists to the area as well as backing Croydon's bid to become "London's Third City" (after the City of London and Westminster). Croydon is mostly urban, though there are large suburban and rural uplands towards the south of the borough. Since 2003, Croydon has been certified as a Fairtrade borough by the Fairtrade Foundation. It was the first London borough to have Fairtrade status which is awarded on certain criteria. The area is one of the hearts of culture in London and the South East of England. Institutions such as the major arts and entertainment centre Fairfield Halls add to the vibrancy of the borough. However, its famous fringe theatre, the Warehouse Theatre, went into administration in 2012 when the council withdrew funding, and the building itself was demolished in 2013. The Croydon Clocktower was opened by Queen Elizabeth II in 1994 as an arts venue featuring a library, the independent David Lean Cinema (closed by the council in 2011 after sixteen years of operating, but now partially reopened on a part-time and volunteer basis) and museum. From 2000 to 2010, Croydon staged an annual summer festival celebrating the area's black and Indian cultural diversity, with audiences reaching over 50,000 people. Premier League football club Crystal Palace F.C. play at Selhurst Park in Selhurst, a stadium they have been based in since 1924. Other landmarks in the borough include Addington Palace, an eighteenth-century mansion which became the official second residence of six Archbishops of Canterbury, Shirley Windmill, one of the few surviving large windmills in Greater London built in the 1850s, and the BRIT School, a creative arts institute run by the BRIT Trust which has produced artists such as Adele, Amy Winehouse and Leona Lewis. History For the history of the original town see History of Croydon The London Borough of Croydon was formed in 1965 from the Coulsdon and Purley Urban District and the County Borough of Croydon. The name Croydon comes from Crogdene or Croindone, named by the Saxons in the 8th century when they settled here, although the area had been inhabited since prehistoric times. 
It is thought to derive from the Anglo-Saxon croeas deanas, meaning "the valley of the crocuses", indicating that, like Saffron Walden in Essex, it was a centre for the collection of saffron. By the time of the Norman invasion Croydon had a church, a mill and around 365 inhabitants, as recorded in the Domesday Book. The Archbishop of Canterbury, Lanfranc, lived at Croydon Palace, which still stands. Visitors included Thomas Becket (another archbishop) and royal figures such as Henry VIII of England and Elizabeth I. The royal charter for Surrey Street Market dates back to 1276. Croydon carried on through the ages as a prosperous market town, producing charcoal, tanning leather and venturing into brewing. Croydon was served by the Surrey Iron Railway, the first public railway (horse drawn) in the world, in 1803, and by the London to Brighton rail link in the mid-19th century, helping it to become the largest town in what was then Surrey. In the 20th century Croydon became known for industries such as metal working, car manufacture and its aerodrome, Croydon Airport. The site started out during World War I as an airfield for protection against Zeppelins; it was combined with an adjacent airfield, and the new aerodrome opened on 29 March 1920. It became the largest in London, and was the main terminal for international air freight into the capital. It developed into one of the great airports of the world during the 1920s and 1930s, and welcomed the world's pioneer aviators in its heyday. British Airways Ltd used the airport for a short period after redirecting from Northolt Aerodrome, and Croydon was the operating base for Imperial Airways. It was partly due to the airport that Croydon suffered heavy bomb damage during World War II. As aviation technology progressed, however, and aircraft became larger and more numerous, it was recognised in 1952 that the airport would be too small to cope with the ever-increasing volume of air traffic. The last scheduled flight departed on 30 September 1959. It was superseded as the main airport by both London Heathrow and London Gatwick Airport (see below). The air terminal, now known as Airport House, has been restored, and has a hotel and museum in it. In the late 1950s and through the 1960s the council commercialised the centre of Croydon with massive development of office blocks and the Whitgift Centre, which was formerly the biggest in-town shopping centre in Europe. The centre was officially opened in October 1970 by the Duchess of Kent. The original Whitgift School there had moved to Haling Park, South Croydon in the 1930s; the replacement school on the site, Whitgift Middle School, now the Trinity School of John Whitgift, moved to Shirley Park in the 1960s, when the buildings were demolished. The borough council unsuccessfully applied for city status in 1965, 2000 and again in 2002. If it had been successful, it would have been the third local authority in Greater London to hold that status, along with the City of London and the City of Westminster. At present the London Borough of Croydon is the second most populous local government district of England without city status, Kirklees being the first. Croydon's applications were refused as it was felt not to have an identity separate from the rest of Greater London. In 1965 it was described as "...now just part of the London conurbation and almost indistinguishable from many of the other Greater London boroughs" and in 2000 as having "no particular identity of its own".
Croydon, in common with many other areas, was hit by extensive rioting in August 2011. Reeves, an historic furniture store established in 1867 which gave its name to a junction and tram stop in the town centre, was destroyed by arson. Croydon is currently going through a vigorous regeneration plan, called Croydon Vision 2020, which will change the urban planning of central Croydon completely. Its main aims are to make Croydon London's Third City and the hub of retail, business, culture and living in south London and South East England. The plan was showcased in a series of events called Croydon Expo, aimed at businesses and residents in the London Borough of Croydon, to demonstrate the £3.5bn of development projects the Council wishes to see in Croydon in the next ten years. There have also been exhibitions for regional districts of Croydon, including Waddon, South Norwood and Woodside, Purley, New Addington and Coulsdon. Examples of upcoming architecture featured in the expo can easily be found in the centre of the borough, in the form of the Croydon Gateway site and the Cherry Orchard Road Towers. Governance Politics of Croydon Council Croydon London Borough Council has seventy councillors elected in 24 wards. Croydon is a cabinet-style council, and the Leader heads a ten-person cabinet, its members responsible for areas such as education or planning. There is a Shadow Cabinet drawn from the sole opposition party. A backbench cross-party scrutiny and overview committee is in place to hold the executive cabinet to account. From the borough's creation in 1965 until 1994 the council was continuously controlled first by Conservative and Residents' Ratepayers councillors, up to 1986, and then by the Conservatives alone. From 1994 to 2006 Labour Party councillors controlled the council. After a further eight-year period of Conservative control the Labour group secured a ten-seat majority in the local council elections on 22 May 2014, and Councillor Tony Newman returned to lead the council for Labour. In the 2014 local elections the Labour party gained all the seats in the Ashburton and Waddon wards and gained the one seat held by the Conservatives in the New Addington ward. The election marked the first time that Ashburton ward had been represented by Labour. Andrew Pelling, Croydon Central's previous Conservative and then independent MP and leader of the Conservatives on Croydon council up to 2005, was elected as a Labour councillor in Waddon. At the 2010 Croydon local elections seats lost previously in Addiscombe, South Norwood and Upper Norwood were retaken by Labour Party councillors; in New Addington the Conservative party gained a councillor, the first time that the Conservatives had taken a seat there since 1968. The composition of the council after the 2010 elections was Conservatives 37, Labour 33. Mike Fisher, Conservative group leader since May 2005, was named as Council Leader following the Conservative victory in 2006. Since 2000 At the 2006 local elections Conservative councillors regained control, gaining 12 councillors, taking ten seats from Labour in Addiscombe, Waddon, South Norwood and Upper Norwood and ousting the single Liberal Democrat councillor in Coulsdon. Between the 2006 and 2010 elections, a by-election in February 2007 saw a large swing to Labour from the Conservatives, whereas the two previous by-elections before 2006, both won by a councillor from the incumbent party (in each case the party of a councillor who had died), had produced 6% Conservative-to-Labour swings.
Crossover has occurred in political affiliation: during 2002–06 one Conservative councillor defected to Labour, went back to the Conservatives and spent some time as an independent. In March 2008, the Labour councillor Mike Mogul joined the Conservatives while a Conservative councillor became an independent. Councillor Jonathan Driver, who became Mayor in 2008, died unexpectedly at the close of the year, causing a by-election in highly marginal Waddon which was successfully held by the Conservatives. From February 2005 until May 2006 the Leader of Croydon Council was Labour Co-operative Councillor Tony Newman, succeeding Hugh Malyan. Westminster representation The borough is covered by three parliamentary constituencies: Croydon North, Croydon Central and Croydon South. Civic history For much of its history, Croydon Council was controlled by the Conservative Party or Conservative-leaning independents. Former Croydon councillors include former MPs Andrew Pelling, Vivian Bendall, David Congdon, Geraint Davies and Reg Prentice, London Assembly member Valerie Shawcross, Lord Bowness, John Donaldson, Baron Donaldson of Lymington (Master of the Rolls) and H.T. Muggeridge, MP and father of Malcolm Muggeridge. The first Mayor of the newly created county borough was Jabez Balfour, later a disgraced Member of Parliament. Former Conservative Director of Campaigning, Gavin Barwell, was a Croydon councillor between 1998 and 2010 and was the MP for Croydon Central from 2010 until 2017. Sarah Jones won the Croydon Central seat for Labour in 2017. Croydon North has a Labour MP, Steve Reed, and Croydon South has a Conservative MP, Chris Philp. Some 10,000 people work directly or indirectly for the council, at its main offices at Bernard Weatherill House or in its schools, care homes, housing offices or work depots. The council is generally well regarded, having made important improvements in education and social services. However, there have been concerns over benefits, leisure services and waste collection. Although the council has one of London's lower rates of council tax, there are claims that it is too high and that resources are wasted. Councillor Sherwan Chowdhury was appointed as Mayor of Croydon for 2021–22. The Leader is Cllr Hamida Ali and the Deputy Leader is Cllr Stuart King. The Chief Executive since 14 September 2020 has been Katherine Kerswell. Government buildings Croydon Town Hall on Katharine Street in Central Croydon houses the committee rooms, the mayor's and other councillors' offices, electoral services and the arts and heritage services. The present Town Hall is Croydon's third. The first town hall is thought to have been built in either 1566 or 1609. The second was built in 1808 to serve the growing town but was demolished after the present town hall was erected in 1895. The 1808 building cost £8,000, which was regarded as an enormous sum in those days, and it was perhaps as controversial as Bernard Weatherill House, the administrative building opened for occupation in 2013 and reputed to have cost £220,000,000. The early 19th-century building was initially known as the "Courthouse" because, like its predecessor and successor, it hosted the local court. The building stood on the western side of the High Street near the junction with Surrey Street, the location of the town's market. The building became inadequate for the growing local administrative responsibilities and stood at a narrow point of a High Street in need of widening.
The present town hall was designed by local architect Charles Henman and was officially opened by the Prince and Princess of Wales on 19 May 1896. It was constructed in red brick, sourced from Wrotham in Kent, with Portland stone dressings and green Westmoreland slates for the roof. It also housed the court and most central council employees. The borough's incorporation in 1883, together with a desire to improve traffic flows in central Croydon and to remove the social deprivation of Middle Row, prompted the move to a new configuration of town hall provision. The second closure of the Central Railway Station provided the corporation with the opportunity to buy the station land from the London, Brighton and South Coast Railway Company for £11,500 to provide the site for the new town hall. Indeed, the council hoped to be able to retain enough of the purchased land for municipal needs and still "leave a considerable margin of land which might be disposed of". The purchase of the failed railway station came despite local leaders having successfully urged the re-opening of the poorly patronised station; the re-opening had not been a success, which freed up the land for alternative use. Parts, including the former court rooms, have been converted into the Museum of Croydon and exhibition galleries. The original public library was converted into the David Lean Cinema, part of the Croydon Clocktower. The Braithwaite Hall is used for events and performances. The town hall was renovated in the mid-1990s and the imposing central staircase, long closed to the public and kept for councillors only, was re-opened in 1994. The civic complex, meanwhile, was substantially added to, with buildings across Mint Walk and the 19-floor Taberner House to house the rapidly expanding corporation's employees. Ruskin House is the headquarters of Croydon's Labour, Trade Union and Co-operative movements and is itself a co-operative with shareholders from organisations across the three movements. In the 19th century, Croydon was a bustling commercial centre of London. It was said that, at the turn of the 20th century, approximately £10,000 was spent in Croydon's taverns and inns every week. In this environment, it was natural for the early labour movement to meet in the town's public houses. However, the temperance movement was equally strong, and Georgina King Lewis, a keen member of the Croydon United Temperance Council, took it upon herself to establish a dry centre for the labour movement. The first Ruskin House was highly successful, and there have been two more since. The current house was officially opened in 1967 by the then Labour Prime Minister, Harold Wilson. Today, Ruskin House continues to serve as the headquarters of the Trade Union, Labour and Co-operative movements in Croydon, hosting a range of meetings and being the base for several labour movement groups. Office tenants include the headquarters of the Communist Party of Britain and Croydon Labour Party. Geraint Davies, then the MP for Croydon Central, had offices in the building until he was defeated by Andrew Pelling; he is now the Labour representative standing for Swansea West in Wales. Taberner House was built between 1964 and 1967, designed by architect H. Thornley, with Allan Holt and Hugh Lea as borough engineers. Although the council had needed extra space since the 1920s, it was only with the imminent creation of the London Borough of Croydon that action was taken.
The building, demolished in 2014, was in classic 1960s style, praised at the time but subsequently much derided. Its elegant upper slab block narrowed towards both ends, a formal device which has been compared to the famous Pirelli Tower in Milan. It was named after Ernest Taberner OBE, Town Clerk from 1937 to 1963. Until September 2013, Taberner House housed most of the council's central employees and was the main location for the public to access information and services, particularly with respect to housing. In September 2013, Council staff moved into Bernard Weatherill House in Fell Road (named after the former Speaker of the House and Member of Parliament for Croydon North-East). Staff from the Met Police, the NHS, Jobcentre Plus, Croydon Credit Union and the Citizens Advice Bureau, as well as 75 services from the council, all moved to the new building. Geography and climate The borough is in the far south of London, with the M25 orbital motorway stretching to the south of it, between Croydon and Tandridge. To the north and east, the borough mainly borders the London Borough of Bromley, and in the north west the boroughs of Lambeth and Southwark. The boroughs of Sutton and Merton are located directly to the west. It is at the head of the River Wandle, just to the north of a significant gap in the North Downs. It lies south of Central London, and the earliest settlement may have been a Roman staging post on the London–Portslade road, although conclusive evidence has not yet been found. The main town centre houses a great variety of well-known stores on North End and two shopping centres. It was pedestrianised in 1989 to attract people back to the town centre. Another shopping centre called Park Place was due to open in 2012 but has since been scrapped. Townscape description The CR postcode area covers most of the south and centre of the borough while the SE and SW postcodes cover the northern parts, including Crystal Palace, Upper Norwood, South Norwood, Selhurst (part), Thornton Heath (part), Norbury and Pollards Hill (part). Districts in the London Borough of Croydon include Addington, a village to the east of Croydon which until 2000 was poorly linked to the rest of the borough, as it had no railway or light rail stations and only a few patchy bus services. Addiscombe is a district just northeast of the centre of Croydon, and is popular with commuters to central London as it is close to the busy East Croydon station. Ashburton, to the northeast of Croydon, is mostly home to residential houses and flats, being named after Ashburton House, one of the three big houses in the Addiscombe area. Broad Green is a small district, centred on a large green, with many homes and local shops in West Croydon. Coombe is an area, just east of Croydon, which has barely been urbanised and has retained its collection of large houses fairly intact. Coulsdon, south west of Central Croydon, has retained a good mix of traditional high street shops as well as a large number of restaurants for its size. Croydon is the principal area of the borough. Crystal Palace is an area north of Croydon which is shared with the London Boroughs of Lambeth, Southwark, Lewisham and Bromley. Fairfield, just northeast of Croydon, holds the Fairfield Halls. Work on the village of Forestdale, to the east of Croydon's main area, commenced in the late 1960s and was completed in the mid-70s, creating a larger town on what was previously open ground.
Hamsey Green is a place on the plateau of the North Downs, south of Croydon. Kenley, again south of the centre, lies within the London Green Belt and features a landscape dominated by green space. New Addington, to the east, is a large local council estate surrounded by open countryside and golf courses. Norbury, to the northwest, is a suburb with a large ethnic population. Norwood New Town is a part of the Norwood triangle, to the north of Croydon. Monks Orchard is a small district made up of large houses and open space in the northeast of the borough. Pollards Hill is a residential district stretching to Norbury, with houses on roads lined with pollarded lime trees. Purley, to the south, is a main town whose name derives from "pirlea", which means 'Peartree lea'. Sanderstead, to the south, is a village mainly on high ground at the edge of suburban development in Greater London. Selhurst is a town, to the north of Croydon, which holds the nationally known school, The BRIT School. Selsdon, to the southeast of Croydon Centre, is a suburb which was developed during the inter-war period of the 1920s and 1930s, and is remarkable for its many Art Deco houses. Shirley is to the east of Croydon and holds Shirley Windmill. South Croydon, to the south of Croydon, is a locality which holds local landmarks such as The Swan and Sugarloaf public house and the independent Whitgift School, part of the Whitgift Foundation. South Norwood, to the north, is, in common with West Norwood and Upper Norwood, named after a contraction of Great North Wood, and has a population of around 14,590. Thornton Heath is a town, to the northwest of Croydon, which holds Croydon's principal hospital, Mayday. Upper Norwood is north of Croydon, on a mainly elevated area of the borough. Waddon is a residential area, mainly based on the Purley Way retail area, to the west of the borough. Woodside is located to the northeast of the borough, with streets based on Woodside Green, a small area of green land. Finally, Whyteleafe is a town right on the edge of Croydon, with some areas in the Surrey district of Tandridge. Croydon is a gateway to the south from central
London, with some major roads running through it. Purley Way, part of the A23, was built to by-pass Croydon town centre. It is one of the busiest roads in the borough, and is the site of several major retail developments, including one of only 18 IKEA stores in the country, built on the site of the former power station. The A23 continues southward as Brighton Road, which is the main route running towards the south from Croydon to Purley. The centre of Croydon is very congested, and its urban planning has become out of date and quite inadequate, due to the expansion of Croydon's main shopping area and office blocks. Wellesley Road is a north–south dual carriageway that cuts through the centre of the town, and makes it hard to walk between the town centre's two railway stations. Croydon Vision 2020 includes a plan for a more pedestrian-friendly replacement. It has also been named as one of the worst roads for cyclists in the area. Construction of the Croydon Underpass beneath the junction of George Street and Wellesley Road/Park Lane started in the early 1960s, mainly to alleviate traffic congestion on Park Lane, above the underpass. The Croydon Flyover is also near the underpass, and next to Taberner House. It mainly leads traffic on to Duppas Hill, towards Purley Way with links to Sutton and Kingston upon Thames. The major junction on the flyover is for Old Town, which is also a large three-lane road. Topography and climate Croydon covers an area of 86.52 km². Croydon's physical features consist of many hills and rivers that are spread out across the borough and into the North Downs, Surrey and the rest of south London. Addington Hills is a major hilly area in the south of London and is recognised as a significant obstacle to the growth of London from its origins as a port on the north side of the river into a large circular city. The Great North Wood is a former natural oak forest that covered the Sydenham Ridge and the southern reaches of the River Effra and its tributaries. The most notable tree, called Vicar's Oak, marked the boundary of four ancient parishes: Lambeth, Camberwell, Croydon and Bromley. John Aubrey referred to this "ancient remarkable tree" in the past tense as early as 1718, but according to JB Wilson, the Vicar's Oak survived until 1825. The River Wandle is also a major tributary of the River Thames, stretching to Wandsworth and Putney from its main source in Waddon. Croydon has a temperate climate in common with most areas of Great Britain: its Köppen climate classification is Cfb. Its mean annual temperature of 9.6 °C is similar to that experienced throughout the Weald, and slightly cooler than nearby areas such as the Sussex coast and central London. Rainfall is considerably below England's average (1971–2000) level of 838 mm, and every month is drier overall than the England average. The nearest weather station is at Gatwick Airport. Architecture The skyline of Croydon has significantly changed over the past 50 years. High-rise buildings, mainly office blocks, now dominate the skyline.
The most notable of these buildings include Croydon Council's headquarters Taberner House, which has been compared to the famous Pirelli Tower of Milan, and the Nestlé Tower, the former UK headquarters of Nestlé. In recent years, the development of tall buildings, such as the approved Croydon Vocational Tower and Wellesley Square, has been encouraged in the London Plan, and will lead to the erection of new skyscrapers in the coming years as part of London's high-rise boom. No. 1 Croydon, formerly the NLA Tower, Britain's 88th tallest tower, close to East Croydon station, is an example of 1970s architecture. The tower was originally nicknamed the Threepenny bit building, as it resembles a stack of pre-decimalisation Threepence coins, which were 12-sided. It is now most commonly called The Octagon, being 8-sided. Lunar House is another high-rise building. Like other government office buildings on Wellesley Road, such as Apollo House, the name of the building was inspired by the US moon landings (In the Croydon suburb of New Addington there is a public house, built during the same period, called The Man on the Moon). Lunar House houses the Home Office building for Visas and Immigration. Apollo House houses The Border Patrol Agency. A new generation of buildings are being considered by the council as part of Croydon Vision 2020, so that the borough doesn't lose its title of having the "largest office space in the south east", excluding central London. Projects such as Wellesley Square, which will be a mix of residential and retail with an eye-catching colour design and 100 George Street a proposed modern office block are incorporated in this vision. Notable events that have happened to Croydon's skyline include the Millennium project to create the largest single urban lighting project ever. It was created for the buildings of Croydon to illuminate them for the third millennium. The project provided new lighting for the buildings, and provided an opportunity to project images and words onto them, mixing art and poetry with coloured light, and also displaying public information after dark. Apart from increasing night time activity in Croydon and thereby reducing the fear of crime, it helped to promote the sustainable use of older buildings by displaying them in a more positive way. Landmarks There are a large number of attractions and places of interest all across the borough of Croydon, ranging from historic sites in the north and south to modern towers in the centre. Croydon Airport was once London's main airport, but closed on 30 September 1959 due to the expansion of London and because it didn't have room to grow; so Heathrow International Airport took over as London's main airport. It has now been mostly converted to offices, although some important elements of the airport remain. It is a tourist attraction. The Croydon Clocktower arts venue was opened by Elizabeth II in 1994. It includes the Braithwaite Hall (the former reference library - named after the Rev. Braithwaite who donated it to the town) for live events, David Lean Cinema (built in memory of David Lean), the Museum of Croydon and Croydon Central Library. The Museum of Croydon (formerly known as Croydon Lifetimes Museum) highlights Croydon in the past and the present and currently features high-profile exhibitions including the Riesco Collection, The Art of Dr Seuss and the Whatever the Weather gallery. Shirley Windmill is a working windmill and one of the few surviving large windmills in Surrey, built in 1854. 
It is Grade II listed and received a £218,100 grant from the Heritage Lottery Fund. Addington Palace is an 18th-century mansion in Addington which was originally built as Addington Place in the 16th century. The palace became the official second residence of six archbishops, five of whom are buried in St Mary's Church and churchyard nearby. North End is the main pedestrianised shopping road in Croydon, having Centrale to one side and the Whitgift Centre to the other. The Warehouse Theatre is a popular theatre for mostly young performers and is due to get a face-lift on the Croydon Gateway site. The Nestlé Tower was the UK headquarters of Nestlé and is one of the tallest towers in England, which is due to be re-fitted during the Park Place development. The Fairfield Halls is a well known concert hall and exhibition centre, opened in 1962. It is frequently used for BBC recordings and was formerly the home of ITV's World of Sport. It includes the Ashcroft Theatre and the Arnhem Gallery. Croydon Palace was the summer residence of the Archbishop of Canterbury for over 500 years and included regular visitors such as Henry III and Queen Elizabeth I. It is thought to have been built around 960. Croydon Cemetery is a large cemetery and crematorium west of Croydon and is most famous for the gravestone of Derek Bentley, who was wrongly hanged in 1953. Mitcham Common is an area of common land partly shared with the boroughs of Sutton and Merton. Almost 500,000 years ago, Mitcham Common formed part of the river bed of the River Thames. The BRIT School is a performing Arts & Technology school, owned by the BRIT Trust (known for the BRIT Awards
sometimes called "Pan" between 1955 and 1975 (Pan is now the name of a satellite of Saturn). It gives its name to the Carme group, made up of irregular retrograde moons orbiting Jupiter at a distance ranging between 23 and 24 Gm and at an inclination of about 165°. Its orbital elements are as of January 2000. They are continuously changing due to solar
"Pan" between 1955 and 1975 (Pan is now the name of a satellite of Saturn). It gives its name to the Carme group, made up of irregular retrograde moons orbiting Jupiter at a distance ranging between 23 and 24 Gm and at an inclination of about 165°. Its orbital elements are as of January 2000. They are continuously changing
Jacobi identity. Additional identities If A is a fixed element of a ring R, identity (1) can be interpreted as a Leibniz rule for the map ad_A: R → R given by ad_A(B) = [A, B]. In other words, the map ad_A defines a derivation on the ring R. Identities (2), (3) represent Leibniz rules for more than two factors, and are valid for any derivation. Identities (4)–(6) can also be interpreted as Leibniz rules. Identities (7), (8) express Z-bilinearity. Some of the above identities can be extended to the anticommutator using the above ± subscript notation. For example: [AB, C]_± = A[B, C]_− + [A, C]_± B. Exponential identities Consider a ring or algebra in which the exponential can be meaningfully defined, such as a Banach algebra or a ring of formal power series. In such a ring, Hadamard's lemma applied to nested commutators gives: e^A B e^{−A} = B + [A, B] + (1/2!)[A, [A, B]] + (1/3!)[A, [A, [A, B]]] + ⋯ = e^{ad_A}(B). (For the last expression, see Adjoint derivation below.) This formula underlies the Baker–Campbell–Hausdorff expansion of log(exp(A) exp(B)). A similar expansion expresses the group commutator of expressions (analogous to elements of a Lie group) in terms of a series of nested commutators (Lie brackets). Graded rings and algebras When dealing with graded algebras, the commutator is usually replaced by the graded commutator, defined in homogeneous components as [ω, η] := ωη − (−1)^{deg ω · deg η} ηω. Adjoint derivation Especially if one deals with multiple commutators in a ring R, another notation turns out to be useful. For an element x of R, we define the adjoint mapping ad_x: R → R by ad_x(y) = [x, y]. This mapping is a derivation on the ring R: ad_x(yz) = ad_x(y) z + y ad_x(z). By the Jacobi identity, it is also a derivation over the commutation operation: ad_x([y, z]) = [ad_x(y), z] + [y, ad_x(z)]. Composing such mappings, we get for example ad_x ad_y(z) = [x, [y, z]] and ad_x ad_x(z) = [x, [x, z]]. We may consider ad itself as a mapping, ad: R → End(R), where End(R) is the ring of mappings from R to itself with composition as the multiplication operation. Then ad is a Lie algebra homomorphism, preserving the commutator: ad_{[x, y]} = [ad_x, ad_y]. By contrast, it is not always a ring homomorphism: usually ad_{xy} ≠ ad_x ad_y. General Leibniz rule The general Leibniz rule, expanding repeated derivatives of a product, can be written abstractly using the adjoint representation: x^n y = Σ_{k=0}^{n} C(n, k) ad_x^k(y) x^{n−k}, where C(n, k) is the binomial coefficient. Replacing x by the differentiation operator ∂ and y by the multiplication operator m_f: g ↦ fg, we get
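Hadamard's expansion above can be checked directly for matrices. The short sketch below (added for illustration; it assumes NumPy and SciPy are available, and the helper name comm is ours) compares exp(A) B exp(−A) with the truncated nested-commutator series for random 4 × 4 matrices.

# Sketch: Hadamard's lemma for matrices, exp(A) B exp(-A) = sum_k ad_A^k(B) / k!
import numpy as np
from scipy.linalg import expm
from math import factorial

rng = np.random.default_rng(1)

def comm(x, y):
    # Ring commutator [x, y] = xy - yx.
    return x @ y - y @ x

A = 0.3 * rng.standard_normal((4, 4))   # small norm so the series converges quickly
B = rng.standard_normal((4, 4))

lhs = expm(A) @ B @ expm(-A)

# Truncated series: sum_{k=0}^{24} ad_A^k(B) / k!
rhs = np.zeros_like(B)
term = B.copy()
for k in range(25):
    rhs += term / factorial(k)
    term = comm(A, term)    # next nested commutator, ad_A^{k+1}(B)

print(np.max(np.abs(lhs - rhs)))   # tiny (near machine precision)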
G generated by all commutators is closed and is called the derived group or the commutator subgroup of G. Commutators are used to define nilpotent and solvable groups and the largest abelian quotient group. The definition of the commutator above is used throughout this article, but many other group theorists define the commutator as ghg^{-1}h^{-1}. Identities (group theory) Commutator identities are an important tool in group theory. The expression a^x denotes the conjugate of a by x, defined as x^{-1}ax. (1) x^y = x[x, y]. (2) [y, x] = [x, y]^{-1}. (3) [x, zy] = [x, y] · [x, z]^y and [xz, y] = [x, y]^z · [z, y]. (4) [x, y^{-1}] = ([x, y]^{y^{-1}})^{-1} and [x^{-1}, y] = ([x, y]^{x^{-1}})^{-1}. (5) [[x, y^{-1}], z]^y · [[y, z^{-1}], x]^z · [[z, x^{-1}], y]^x = 1. Identity (5) is also known as the Hall–Witt identity, after Philip Hall and Ernst Witt. It is a group-theoretic analogue of the Jacobi identity for the ring-theoretic commutator (see next section). N.B., the above definition of the conjugate of a by x is used by some group theorists. Many other group theorists define the conjugate of a by x as xax^{-1}. This is often written ^x a. Similar identities hold for these conventions. Many identities are used that are true modulo certain subgroups. These can be particularly useful in the study of solvable groups and nilpotent groups. For instance, in any group, second powers behave well: (xy)^2 = x^2 y^2 [y, x] [[y, x], y]. If the derived subgroup is central, then (xy)^n = x^n y^n [y, x]^{n(n−1)/2}. Ring theory The commutator of two elements a and b of a ring (including any associative algebra) is defined by [a, b] = ab − ba. It is zero if and only if a and b commute. In linear algebra, if two endomorphisms of a space are represented by commuting matrices in terms
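Because identity (5) is just an equation between products of group elements, it can be sanity-checked numerically in any concrete group, for example invertible matrices. The sketch below (added for illustration; NumPy only, and the helper names comm and conj are ours) verifies the Hall–Witt identity in the form displayed above, with [x, y] = x^{-1}y^{-1}xy and x^y = y^{-1}xy.

# Sketch: the Hall-Witt identity checked on random invertible matrices.
import numpy as np

rng = np.random.default_rng(2)
inv = np.linalg.inv

def comm(x, y):
    return inv(x) @ inv(y) @ x @ y      # group commutator [x, y]

def conj(x, y):
    return inv(y) @ x @ y               # conjugate x^y

# Random invertible 3x3 matrices (generic perturbations of the identity).
a, b, c = (np.eye(3) + 0.5 * rng.standard_normal((3, 3)) for _ in range(3))

product = (
    conj(comm(comm(a, inv(b)), c), b)
    @ conj(comm(comm(b, inv(c)), a), c)
    @ conj(comm(comm(c, inv(a)), b), a)
)
print(np.max(np.abs(product - np.eye(3))))   # close to zero: the product is the identity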
(plural ). Cairns have been and are used for a broad variety of purposes, from prehistoric times to the present. In modern times, cairns are often erected as landmarks, a use they have had since ancient times. However, since prehistory, they have also been built and used as burial monuments; for defense and hunting; for ceremonial purposes, sometimes relating to astronomy; to locate buried items, such as caches of food or objects; and to mark trails, among other purposes. Stonehenge Cairns are used as trail markers in many parts of the world, in uplands, on moorland, on mountaintops, near waterways and on sea cliffs, as well as in barren deserts and tundras. They vary in size from small stone markers to entire artificial hills, and in complexity from loose conical rock piles to delicately balanced sculptures and elaborate feats of megalithic engineering. Cairns may be painted or otherwise decorated, whether for increased visibility or for religious reasons. An ancient example is the inuksuk (plural inuksuit), used by the Inuit, Inupiat, Kalaallit, Yupik, and other peoples of the Arctic region of North America. Inuksuit are found from Alaska to Greenland. This region, above the Arctic Circle, is dominated by the tundra biome and has areas with few natural landmarks. Modern cairns Different types of cairns exist from rough piles of stones to interlocking dry stone round cylinders. The most important cairns commonly used around the world are interlocking stone survey cairns constructed around a central survey mark about every 30 km on the tallest peaks across a nation. These physical survey mark cairn systems are the basis for national survey grids to interconnect individual land survey measurements for entire nations. On occasion these permanent interlocking stone cairns are taken down then reconstructed to re-mark measurements to increase the accuracy of the national survey grid. They can also be used in unpopulated countries as emergency location points. In North America and Northern Europe any type of cairn can be used to mark mountain bike and hiking trails and other cross-country trail blazing, especially in mountain regions at or above the tree line. For example, the extensive trail network maintained by the DNT, the Norwegian Trekking Association, extensively uses cairns in conjunction with T-painted rock faces to mark trails. Other examples of these can be seen in the lava fields of Volcanoes National Park to mark several hikes. Placed at regular intervals, a series of cairns can be used to indicate a path across stony or barren terrain, even across glaciers. Such cairns are often placed at junctions or in places where the trail direction is not obvious. They may also be used to indicate an obscured danger such as a sudden drop, or a noteworthy point such as the summit of a mountain. Most trail cairns are small, usually being a foot or less in height. However, they may be built taller so as to protrude through a layer of snow. Hikers passing by often add a stone, as a small bit of maintenance to counteract the erosive effects of severe weather. North American trail marks are sometimes called "ducks" or "duckies", because they sometimes have a "beak" pointing in the direction of the route. The expression "two rocks do not make a duck" reminds hikers that just one rock resting upon another could be the result of accident or nature rather than intentional trail marking. 
The building of cairns for recreational purposes along trails, to mark one's personal passage through the area, can result in an overabundance of rock piles. This distracts from cairns used as genuine navigational guides, and also conflicts with the Leave No Trace ethic. This ethic of outdoor practice advocates for leaving the outdoors undisturbed and in its natural condition. Coastal cairns, or "sea marks", are also common in the northern latitudes, especially in the island-strewn waters of Scandinavia and eastern Canada. Often indicated on navigation charts, they may be painted white or lit as beacons for greater visibility offshore. Modern cairns may also be erected for historical or memorial commemoration or simply for decorative or artistic reasons. One example is a series of many cairns marking British soldiers' mass graves at the site of the Battle of Isandlwana, South Africa. Another is the Matthew Flinders Cairn on the side of Arthur's Seat,
a small mountain on the shores of Port Phillip Bay, Australia. A large cairn, commonly referred to as "the igloo" by the locals, was built atop a hill next to the I-476 highway in Radnor, Pennsylvania and is a part of a series of large rock sculptures initiated in 1988 to symbolize the township's Welsh heritage and to beautify the visual imagery along the highway. Some are merely places where farmers have collected stones removed from a field. These can be seen in the Catskill Mountains, North America where there is a strong Scottish heritage, and may also represent places where livestock were lost.
In locales exhibiting fantastic rock formations, such as the Grand Canyon, tourists often construct simple cairns in reverence of the larger counterparts. By contrast, cairns may have a strong aesthetic purpose, for example in the art of Andy Goldsworthy. History Europe The building of cairns for various purposes goes back into prehistory in Eurasia, ranging in size from small rock sculptures to substantial man-made hills of stone (some built on top of larger, natural hills). The latter are often relatively massive Bronze Age or earlier structures which, like kistvaens and dolmens, frequently contain burials; they are comparable to tumuli (kurgans), but of stone construction instead of earthworks. Cairn originally could more broadly refer to various types of hills and natural stone piles, but today is used exclusively of artificial ones. The word cairn derives from Scots (with the same meaning), in turn from Scottish Gaelic , which is essentially the same as the corresponding words in other native Celtic languages of Britain, Ireland and Brittany, including Welsh (and ), Breton , Irish , and Cornish or . Cornwall () itself may actually be named after the cairns that dot its landscape, such as Cornwall's highest point, Brown Willy Summit Cairn, a 5 m (16 ft) high and 24 m (79 ft) diameter mound atop Brown Willy hill in Bodmin Moor, an area with many ancient cairns. Burial cairns and other megaliths are the subject of a variety of legends and folklore throughout Britain and Ireland. In Scotland, it is traditional to carry a stone up from the bottom of a hill to place on a cairn at its top. In such a fashion, cairns would grow ever larger. An old Scottish Gaelic blessing is , "I'll put a stone on your stone". In Highland folklore it is recounted that before Highland clans fought in a battle, each man would place a stone in a pile. Those who survived the battle returned and removed a stone from the pile. The stones that remained were built into a cairn to honour the dead. Cairns in the region were also put to vital practical use. For example, Dún Aonghasa, an all-stone Iron Age Irish hill fort on Inishmore in the Aran Islands, is still surrounded by small cairns and strategically placed jutting rocks, used collectively as an alternative to defensive earthworks because of the karst landscape's lack of soil. In Scandinavia, cairns have been used for centuries as trail and sea marks, among other purposes. In Iceland, cairns were often used as markers along the numerous single-file roads or paths that crisscrossed the island; many of these ancient cairns are still standing, although the paths have disappeared. In Norse Greenland, cairns were used as a hunting implement, a game-driving "lane", used to direct reindeer towards a game jump. In the mythology of ancient Greece, cairns were associated with Hermes, the god of overland travel. According to one legend, Hermes was put on trial by Hera for slaying her favorite servant, the monster Argus. All of the other gods acted as a jury, and as a way of declaring their verdict they were given pebbles, and told to throw them at whichever person they deemed to be in the right, Hermes or Hera. Hermes argued so skillfully that he ended up buried under a heap of pebbles, and this was the first cairn. In Croatia, in areas of ancient Dalmatia, such as Herzegovina and the Krajina, they are known as gromila. In Portugal a cairn is called a . 
In a legend the moledros are enchanted soldiers, and if one stone is taken from the pile and put under a pillow, in the morning a soldier will appear for a brief moment, then will change back to a stone and magically return to the pile. The cairns that mark the place where someone died or cover the graves alongside the roads where in the past people were buried are called . The same name given to the stones was given to the dead whose identity was unknown. The or are, in the Galician legends, spirits of the night. The word "Fes" or "Fieis" is thought to mean fairy,
Related concepts Normal subgroup A subgroup of G that is invariant under all inner automorphisms is called normal; also, an invariant subgroup. Since every inner automorphism is in particular an automorphism, and a characteristic subgroup is invariant under all automorphisms, every characteristic subgroup is normal. However, not every normal subgroup is characteristic. Here are several examples: Let G be a nontrivial group, and let H be the direct product G × G. Then the subgroups {1} × G and G × {1} are both normal, but neither is characteristic. In particular, neither of these subgroups is invariant under the automorphism (x, y) ↦ (y, x) that switches the two factors. For a concrete example of this, let V be the Klein four-group (which is isomorphic to the direct product Z_2 × Z_2). Since this group is abelian, every subgroup is normal; but every permutation of the 3 non-identity elements is an automorphism of V, so the 3 subgroups of order 2 are not characteristic. Here V = {e, a, b, ab}: consider the subgroup {e, a} and the automorphism that swaps a and b; then the image {e, b} is not contained in {e, a}. In the quaternion group of order 8, each of the cyclic subgroups of order 4 is normal, but none of these are characteristic. However, the subgroup {1, −1} is characteristic, since it is the only subgroup of order 2. If n is even, the dihedral group of order 2n has 3 subgroups of index 2, all of which are normal. One of these is the cyclic subgroup, which is characteristic. The other two subgroups are dihedral; these are permuted by an outer automorphism of the parent group, and are therefore not characteristic.
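The Klein four-group example above is small enough to verify exhaustively. The following sketch (added for illustration; plain Python with no external libraries, and all helper names are ours) models V as {0, 1, 2, 3} under XOR, enumerates its automorphisms, and confirms that none of the three order-2 subgroups is preserved by every automorphism, i.e. none is characteristic.

# Sketch: normal but not characteristic subgroups in the Klein four-group.
from itertools import permutations

# Klein four-group V = {0, 1, 2, 3} with XOR as the group operation (0 is the identity).
V = [0, 1, 2, 3]
op = lambda x, y: x ^ y

def is_automorphism(f):
    # f is a dict V -> V; check bijectivity, f(0) = 0 and f(x*y) = f(x)*f(y).
    return (set(f.values()) == set(V) and f[0] == 0
            and all(f[op(x, y)] == op(f[x], f[y]) for x in V for y in V))

automorphisms = []
for perm in permutations([1, 2, 3]):     # candidate images of the non-identity elements
    f = {0: 0, 1: perm[0], 2: perm[1], 3: perm[2]}
    if is_automorphism(f):
        automorphisms.append(f)

print(len(automorphisms))                # 6: every permutation of {1, 2, 3} extends to an automorphism

subgroups_of_order_2 = [{0, 1}, {0, 2}, {0, 3}]
for H in subgroups_of_order_2:
    characteristic = all({f[h] for h in H} == H for f in automorphisms)
    print(sorted(H), "characteristic?", characteristic)   # False for all three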
Strictly characteristic subgroup A strictly characteristic subgroup, or a distinguished subgroup, is a subgroup which is invariant under surjective endomorphisms. For finite groups, surjectivity of an endomorphism implies injectivity, so a surjective endomorphism is an automorphism; thus being strictly characteristic is equivalent to characteristic. This is not the case anymore for infinite groups. Fully characteristic subgroup For an even stronger constraint, a fully characteristic subgroup (also, fully invariant subgroup; cf. invariant subgroup) H of a group G is a subgroup remaining invariant under every endomorphism of G; that is, φ(H) ≤ H for every endomorphism φ of G. Every group has itself (the improper subgroup) and the trivial subgroup as two of its fully characteristic subgroups. The commutator subgroup of a group is always a fully characteristic subgroup. Every endomorphism of G induces an endomorphism of G/H, which yields a map End(G) → End(G/H). Verbal subgroup An even stronger constraint is verbal subgroup, which is the image of a fully invariant subgroup of a free group under a homomorphism. More generally, any verbal subgroup is always fully characteristic. For any reduced free group, and, in particular, for any free group, the converse also holds: every fully characteristic subgroup is verbal. Transitivity The property of being characteristic or fully characteristic is transitive; if H is a (fully) characteristic subgroup of K, and K is a (fully) characteristic subgroup of G, then H is a (fully) characteristic subgroup of G. Moreover, while normality is not transitive, it is true that every characteristic subgroup of a normal subgroup is normal. Similarly, while being strictly characteristic (distinguished) is not transitive, it is true that every fully characteristic subgroup of a strictly characteristic subgroup is strictly characteristic. However, unlike normality, if H is characteristic in G and K is a subgroup of G containing H, then in general H is not necessarily characteristic in K. Containments Every subgroup that is fully characteristic is certainly strictly characteristic and characteristic; but a characteristic or even strictly characteristic subgroup need not be fully characteristic. The center of a group is always a strictly characteristic subgroup, but it is not always fully characteristic. For example, the finite group of order 12, Sym(3) × Z/2Z, has a homomorphism taking (π, y) to ((1, 2)^y, 0), which takes the center, 1 × Z/2Z, into a subgroup of Sym(3) × 1 that meets the center only in the identity. The relationship amongst these subgroup properties can be expressed as: Subgroup ⇐ Normal subgroup ⇐ Characteristic subgroup ⇐ Strictly characteristic subgroup ⇐ Fully characteristic subgroup
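As a concrete check of the claim that the commutator subgroup is always fully characteristic, the sketch below (added for illustration; plain Python, all helper names ours) brute-forces every endomorphism of the symmetric group S3 and confirms that each one maps the derived subgroup, here A3, obtained simply as the set of all commutators, into itself.

# Sketch: every endomorphism of S3 maps the commutator subgroup A3 into itself.
from itertools import product, permutations

S3 = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))   # (p o q)(i) = p(q(i))

def commutator(p, q):
    # Group commutator [p, q] = p^-1 q^-1 p q.
    inv = lambda r: tuple(sorted(range(3), key=lambda i: r[i]))
    return compose(compose(inv(p), inv(q)), compose(p, q))

# For S3 the set of all commutators is already the alternating group A3.
derived = {commutator(p, q) for p in S3 for q in S3}

# Small brute force over all maps S3 -> S3, keeping those respecting composition.
endomorphisms = []
for images in product(S3, repeat=len(S3)):
    f = dict(zip(S3, images))
    if all(f[compose(p, q)] == compose(f[p], f[q]) for p in S3 for q in S3):
        endomorphisms.append(f)

print(len(endomorphisms))                                              # 10 endomorphisms of S3
print(all(f[p] in derived for f in endomorphisms for p in derived))    # True: f(A3) is contained in A3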
individual animal may be considered different breeds by different registries (though not necessarily eligible for registry in them all, depending on its exact ancestry). For example, TICA's Himalayan is considered a colorpoint variety of the Persian by the CFA, while the Javanese (or Colorpoint Longhair) is a color variation of the Balinese in both the TICA and the CFA; both breeds are merged (along with the Colorpoint Shorthair) into a single "mega-breed", the Colourpoint, by the World Cat Federation (WCF), who have repurposed the name "Javanese" for the Oriental Longhair. Also, "Colo[u]rpoint Longhair" refers to different breeds in other registries. There are many examples of nomenclatural overlap and differences of this sort. Furthermore, many geographical and cultural names for cat breeds are fanciful selections made by Western
breeders to be exotic sounding and bear no relationship to the actual origin of the breeds; the Balinese, Javanese, and Himalayan are all examples of this trend. The domestic short-haired and domestic long-haired cat types are not breeds, but terms used (with various spellings)
to seek justice. Other cases, however, may be more conducive to class treatment. The preamble to the Class Action Fairness Act of 2005, passed by the United States Congress, found: Class-action lawsuits are an important and valuable part of the legal system when they permit the fair and efficient resolution of legitimate claims of numerous parties by allowing the claims to be aggregated into a single action against a defendant that has allegedly caused harm. Criticisms There are several criticisms of class actions. The preamble to the Class Action Fairness Act stated that some abusive class actions harmed class members with legitimate claims and defendants that have acted responsibly, adversely affected interstate commerce, and undermined public respect for the country's judicial system. Class members often receive little or no benefit from class actions. Examples cited for this include large fees for the attorneys, while leaving class members with coupons or other awards of little or no value; unjustified awards are made to certain plaintiffs at the expense of other class members; and confusing notices are published that prevent class members from being able to fully understand and effectively exercise their rights. For example, in the United States, class lawsuits sometimes bind all class members with a low settlement. These "coupon settlements" (which usually allow the plaintiffs to receive a small benefit such as a small check or a coupon for future services or products with the defendant company) are a way for a defendant to forestall major liability by precluding many people from litigating their claims separately, to recover reasonable compensation for the damages. However, existing law requires judicial approval of all class-action settlements, and in most cases, class members are given a chance to opt out of class settlement, though class members, despite opt-out notices, may be unaware of their right to opt-out because they did not receive the notice, did not read it or did not understand it. The Class Action Fairness Act of 2005 addresses these concerns. An independent expert may scrutinize coupon settlements before judicial approval in order to ensure that the settlement will be of value to the class members (28 U.S.C.A. 1712(d)). Further, if the action provides for settlement in coupons, "the portion of any attorney’s fee award to class counsel that is attributable to the award of the coupons shall be based on the value to class members of the coupons that are redeemed". 28 U.S.C.A. 1712(a). Ethics Class action cases present significant ethical challenges. Defendants can hold reverse auctions and any of several parties can engage in collusive settlement discussions. Subclasses may have interests that diverge greatly from the class but may be treated the same. Proposed settlements could offer some groups (such as former customers) much greater benefits than others. In one paper presented at an ABA conference on class actions in 2007, authors commented that "competing cases can also provide opportunities for collusive settlement discussions and reverse auctions by defendants anxious to resolve their new exposure at the most economic cost". Defendant class action Although normally plaintiffs are the class, defendant class actions are also possible. For example, in 2005, the Roman Catholic Archdiocese of Portland in Oregon was sued as part of the Catholic priest sex-abuse scandal. All parishioners of the Archdiocese's churches were cited as a defendant class. 
This was done to include their assets (local churches) in any settlement. Where both the plaintiffs and the defendants have been organized into court-approved classes, the action is called a bilateral class action. Mass actions In a class action, the plaintiff seeks court approval to litigate on behalf of a group of similarly situated persons. Not every plaintiff looks for or could obtain such approval. As a procedural alternative, plaintiff's counsel may attempt to sign up every similarly situated person that counsel can find as a client. Plaintiff's counsel can then join the claims of all of these persons in one complaint, a so-called "mass action", hoping to have the same efficiencies and economic leverage as if a class had been certified. Because mass actions operate outside the detailed procedures laid out for class actions, they can pose special difficulties for both plaintiffs, defendants, and the court. For example, settlement of class actions follows a predictable path of negotiation with class counsel and representatives, court scrutiny, and notice. There may not be a way to uniformly settle all of the many claims brought via a mass action. Some states permit plaintiff's counsel to settle for all the mass action plaintiffs according to a majority vote, for example. Other states, such as New Jersey, require each plaintiff to approve the settlement of that plaintiff's own individual claims. Class action legislation Argentina Class actions were recognized in "Halabi" leading case (Supreme Court, 2009). Australia and New Zealand Class actions became part of the Australian legal landscape only when the Federal Parliament amended the Federal Court of Australia Act ("the FCAA") in 1992 to introduce the "representative proceedings", the equivalent of the American "class actions". Likewise, class actions appeared slowly in the New Zealand legal system. However, a group can bring litigation through the action of a representative under the High Court Rules which provide that one or a multitude of persons may sue on behalf of, or for the benefit of, all persons "with the same interest in the subject matter of a proceeding". The presence and expansion of litigation funders have been playing a significant role in the emergence of class actions in New Zealand. For example, the "Fair Play on Fees" proceedings in relation to penalty fees charged by banks were funded by Litigation Lending Services (LLS), a company specializing in the funding and management of litigation in Australia and New Zealand. It was the biggest class-action suit in New Zealand history. Austria The Austrian Code of Civil Procedure (Zivilprozessordnung – ZPO) does not provide for a special proceeding for complex class-action litigation. However, Austrian consumer organizations (Verein für Konsumenteninformation (VKI) and the Federal Chamber of Labour / Bundesarbeitskammer) have brought claims on behalf of hundreds or even thousands of consumers. In these cases, the individual consumers assigned their claims to one entity, who has then brought an ordinary (two-party) lawsuit over the assigned claims. The monetary benefits were redistributed among the class. This technique, labeled as "class action Austrian style," allows for a significant reduction of overall costs. The Austrian Supreme Court, in a judgment, confirmed the legal admissibility of these lawsuits under the condition that all claims are essentially based on the same grounds. 
The Austrian Parliament unanimously requested the Austrian Federal Minister for Justice to examine the possibility of new legislation providing for a cost-effective and appropriate way to deal with mass claims. Together with the Austrian Ministry for Social Security, Generations and Consumer Protection, the Justice Ministry opened the discussion with a conference held in Vienna in June 2005. With the aid of a group of experts from many fields, the Justice Ministry began drafting the new law in September 2005. With the individual positions varying greatly, a political consensus could not be reached. Canada Provincial laws in Canada allow class actions. All provinces permit plaintiff classes and some permit defendant classes. Quebec was the first province to enact class proceedings legislation, in 1978. Ontario was next, with the Class Proceedings Act, 1992. As of 2008, 9 of 10 provinces had enacted comprehensive class actions legislation. In Prince Edward Island, where no comprehensive legislation exists, following the decision of the Supreme Court of Canada in Western Canadian Shopping Centres Inc. v. Dutton, [2001] 2 S.C.R. 534, class actions may be advanced under a local rule of court. The Federal Court of Canada permits class actions under Part V.1 of the Federal Courts Rules. Legislation in Saskatchewan, Manitoba, Ontario, and Nova Scotia has, expressly or as read by judicial opinion, been held to allow for what are informally known as national "opt-out" class actions, whereby residents of other provinces may be included in the class definition and potentially be bound by the court's judgment on common issues unless they opt out in a prescribed manner and time. Court rulings have determined that this permits a court in one province to include residents of other provinces in the class action on an "opt-out" basis. Judicial opinions have indicated that provincial legislative national opt-out powers should not be exercised to interfere with the ability of another province to certify a parallel class action for residents of other provinces. The first court to certify will generally exclude residents of provinces whose courts have certified a parallel class action. However, in the Vioxx litigation, two provincial courts certified overlapping class actions whereby Canadian residents were class members in two class actions in two provinces. Both decisions are under appeal. The largest class-action suit in Canada was settled in 2005 after Nora Bernard initiated efforts that led to an estimated 79,000 survivors of Canada's residential school system suing the Canadian government. The settlement amounted to upwards of $5 billion. Chile Chile approved class actions in 2004. The Chilean model is technically an opt-out issue class action, followed by a compensatory stage which can be collective or individual. This means that the first, declaratory stage is designed to determine whether the defendant is generally liable, with erga omnes effects; if the defendant is found liable, the declaratory judgment can then be used to pursue damages in the same procedure or in individual proceedings in different jurisdictions. In the latter case, only the damages, and no longer the liability, can be contested. Under the Chilean procedural rules, one particular kind of case works as an opt-out class action for damages: the case in which the defendant can identify and compensate consumers directly, for example because it is their banking institution. In such cases, the judge can skip the compensatory stage and order redress directly. 
Since 2005 more than 100 cases have been filed, mostly by Servicio Nacional del Consumidor [SERNAC], the Chilean consumer protection agency. Salient cases have been Condecus v. BancoEstado and SERNAC v. La Polar. France Under French law, an association can represent the collective interests of consumers; however, each claimant must be individually named in the lawsuit. On January 4, 2005, President Chirac urged changes that would provide greater consumer protection. A draft bill was proposed in April 2006 but did not pass. Following the change of majority in France in 2012, the new government proposed introducing class actions into French law. The project of the "loi Hamon" of May 2013 aimed to limit class actions to consumer and competition disputes. The law was passed on March 1, 2014. Germany Class actions are generally not permitted in Germany, as German law does not recognize the concept of a targeted class being affected by certain actions. This requires each plaintiff to individually prove that they were affected by an action, to present their individual damages, and to prove the causal link between the action and those damages. Joint litigation (Streitgenossenschaft) is a legal mechanism that may permit joint claims by plaintiffs who are in the same legal community with respect to the dispute, or whose entitlement rests on the same factual or legal grounds. These are not typically regarded as class-action suits, as each individual plaintiff is entitled to compensation for their individual, incurred damages and not as a result of being a member of a class. The combination of court cases (Prozessverbindung) is another method that permits a judge to combine multiple separate court cases into a single trial with a single verdict. According to § 147 ZPO, this is only permissible if all cases concern the same factual and legal event and basis. Mediation Procedure A genuine extension of the legal effect of a court decision beyond the parties involved in the proceedings is offered under corporate law. This procedure applies to the review of stock payoffs under the Stock Corporation Act (Aktiengesetz). Pursuant to Sec. 13 Sentence 2 of the Mediation Procedure Act (Spruchverfahrensgesetz), the court decision concerning the dismissal or direction of a binding arrangement of an adequate compensation is effective for and against all shareholders, including those who have already agreed to a previous settlement in this matter. Investor Model Case Proceedings The Capital Investor Model Case Act (Kapitalanleger-Musterverfahrensgesetz) is an attempt to enable model cases to be brought by a large number of potentially affected parties in the event of disputes, limited to the investment market. In contrast to U.S. class actions, each affected party must file a lawsuit in its own name in order to participate in the model proceedings. Model Declaratory Action Effective on November 1, 2018, the Civil Code (Bürgerliches Gesetzbuch) introduced the Model Declaratory Action (§ 606), which created the ability to bundle similar claims by many affected parties efficiently into one proceeding. Registered consumer protection associations can file – if they represent at least 10 individuals – for a (general) judicial finding on whether the factual and legal requirements of the claims or legal relationships are met. These individuals have to register in order to suspend the limitation period of their claims. Since these adjudications are more of a general nature, each individual must assert their claims in their own court proceedings. 
The competent court is bound by the Model Declaratory Action decision. Associate Action German law also recognizes the Associative Action (Verbandklage), which is comparable to the class action and is predominantly used in environmental law. In civil law, the Associative Action is represented by a foreign body in the matter of asserting and enforcing individual claims and the claimant can no longer control the proceedings. Class Action With Relation to the United States Class actions can be brought by Germans in the U.S. for events in Germany if the facts of the case relate to the U.S. For example, in the case of the Eschede derailment, the lawsuit was allowed because several aggrieved parties came from the US and had purchased rail tickets there. India Decisions of the Indian Supreme Court in the 1980s loosened strict locus standi requirements to permit the filing of suits on behalf of rights of deprived sections of society by public-minded individuals or bodies. Although not strictly "class action litigation" as it is understood in American law, Public Interest Litigation arose out of the wide powers of judicial review granted to the Supreme Court of India and the various High Courts under Article 32 and Article 226 of the Constitution of India. The sort of remedies sought from courts in Public Interest Litigation go beyond mere award of damages to all affected groups, and have sometimes (controversially) gone on to include Court monitoring of the implementation of legislation and even the framing of guidelines in the absence of Parliamentary legislation. However, this innovative jurisprudence did not help the victims of the Bhopal gas tragedy, who were unable to fully prosecute a class-action litigation (as understood in the American sense) against Union Carbide due to procedural rules that would make such litigation impossible to conclude and unwieldy to carry out. Instead, the Government of India exercised its right of parens patriae to appropriate all the claims of the victims and proceeded to litigate on their behalf, first in the New York courts and later, in the Indian courts. Ultimately, the matter was settled between the Union of India and Union Carbide (in a settlement overseen by the Supreme Court of India) for a sum of as a complete settlement of all claims of all victims for all time to come. Public interest litigation has now broadened in scope to cover larger and larger groups of citizens who may be affected by government inaction. Examples of this trend include the conversion of all public transport in the city of Delhi from diesel engines to CNG engines on the basis of the orders of the Delhi High Court; the monitoring of forest use by the High Courts and the Supreme Court to ensure that there is no unjustified loss of forest cover; and the directions mandating the disclosure of assets of electoral candidates for the Houses of Parliament and
State Assembly. The Supreme Court has observed that the PIL has tended to become a means to gain publicity or obtain relief contrary to constitutionally valid legislation and policy. Observers point out that many High Courts and certain Supreme Court judges are reluctant to entertain PILs filed by non-governmental organizations and activists, citing concerns of separation of powers and parliamentary sovereignty. Ireland In Irish law, there is no such thing as a "class action" per se. Third-party litigation funding is prohibited under Irish law. Instead, there is the 'representative action' or 'test case' (cás samplach). A representative action is "where one claimant or defendant, with the same interest as a group of claimants or defendants in an action, institutes or defends proceedings on behalf of that group of claimants or defendants." Some test cases in Ireland have included: the CervicalCheck cancer scandal; financial product mis-selling; and damages claims brought by Irish hauliers against price-fixing by European truck makers. Italy Italy has class action legislation. Consumer associations can file claims on behalf of groups of consumers to obtain judicial orders against corporations that cause injury or damage to consumers. These types of claims are increasing, and Italian courts have allowed them against banks that continue to apply compound interest on retail clients' current account overdrafts. The introduction of class actions is on the government's agenda. On November 19, 2007, the Senato della Repubblica passed a class-action law in Finanziaria 2008, the government's annual financial law. From 10 December 2007, under the Italian legislative procedure, the law has been before the Camera dei Deputati, the second house of the Italian Parliament, which must also pass it for it to become effective law. In 2004, the Italian parliament considered the introduction of a type of class action, specifically in the area of consumer law. No such law has been enacted, but scholars demonstrated that class actions (azioni rappresentative) do not conflict with Italian principles of civil procedure. Class action is regulated by art. 140 bis of the Italian consumers' code and has been in force since 1 July 2009. Netherlands Dutch law allows associations (verenigingen) and foundations (stichtingen) to bring a so-called collective action on behalf of other persons, provided they can represent the interests of such persons according to their by-laws (statuten) (section 3:305a Dutch Civil Code). All types of actions are permitted. This includes a claim for monetary damages, provided the event occurred after 15 November 2016 (pursuant to new legislation which entered into force on 1 January 2020). Most class actions over the past decade have been in the field of securities fraud and financial services. The acting association or foundation may come to a collective settlement with the defendant. The settlement may also include – and usually primarily consists of – monetary compensation for damages. 
Such a settlement can be declared binding for all injured parties by the Amsterdam Court of Appeal (section 7:907 Dutch Civil Code). The injured parties have an opt-out right during the opt-out period set by the Court, usually 3 to 6 months. Settlements involving injured parties from outside the Netherlands can also be declared binding by the Court. Since US courts are reluctant to take up class actions brought on behalf of injured parties not residing in the US who have suffered damages due to acts or omissions committed outside the US, combinations of US class actions and Dutch collective actions may come to a settlement that covers plaintiffs worldwide. An example of this is the Royal Dutch Shell Oil Reserves Settlement, which was declared binding upon both US and non-US plaintiffs. Poland "Pozew zbiorowy" or class action has been allowed under Polish law since July 19, 2010. A minimum of 10 persons, suing on the basis of the same law, is required. Russia Collective litigation has been allowed under Russian law since 2002. Basic criteria are, as in the US,
of a court trial or hearing that declares a person or organization to have disobeyed or been disrespectful of the court's authority, called "found" or "held" in contempt. That is the judge's strongest power to impose sanctions for acts that disrupt the court's normal process. A finding of being in contempt of court may result from a failure to obey a lawful order of a court, showing disrespect for the judge, disruption of the proceedings through poor behavior, or publication of material or non-disclosure of material, which in doing so is deemed likely to jeopardize a fair trial. A judge may impose sanctions such as a fine, jail or social service for someone found guilty of contempt of court, which makes contempt of court a process crime. Judges in common law systems usually have more extensive power to declare someone in contempt than judges in civil law systems. In use today Contempt of court is essentially seen as a form of disturbance that may impede the functioning of the court. The judge may impose fines and/or jail time upon any person committing contempt of court. The person is usually let out upon his or her agreement to fulfill the wishes of the court. Civil contempt can involve acts of omission. The judge will make use of warnings in most situations that may lead to a person being charged with contempt if the warnings are ignored. It is relatively rare that a person is charged for contempt without first receiving at least one warning from the judge. Constructive contempt, also called consequential contempt, is when a person fails to fulfill the will of the court as it applies to outside obligations of the person. In most cases, constructive contempt is considered to be in the realm of civil contempt due to its passive nature. Indirect contempt is something that is associated with civil and constructive contempt and involves a failure to follow court orders. Criminal contempt includes anything that could be considered a disturbance, such as repeatedly talking out of turn, bringing forth previously banned evidence, or harassment of any other party in the courtroom. Direct contempt is an unacceptable act in the presence of the judge (in facie curiae), and generally begins with a warning, and may be accompanied by an immediate imposition of punishment. Yawning in some cases can be considered contempt of court. Australia In Australia, a judge may impose a fine or jail for contempt of court, including for refusing to stand up for a judge. Belgium A Belgian correctional or civil judge may immediately try the person for insulting the court. Canada Common law offence In Canada, contempt of court is an exception to the general principle that all criminal offences are set out in the federal Criminal Code. Contempt of court and contempt of Parliament are the only remaining common law offences in Canada. Contempt of court includes the following behaviors: Failing to maintain a respectful attitude, failing to remain silent or failing to refrain from showing approval or disapproval of the proceeding Refusing or neglecting to obey a subpoena Willfully disobeying a process or order of the court Interfering with the orderly administration of justice or impairing the authority or dignity of the court An officer of the court failing to perform his or her duties A sheriff or bailiff not executing a writ of the court forthwith or not making a return thereof Canadian Federal courts This section applies only to Federal Court of Appeal and Federal Court. 
Under Federal Court Rules, Rules 466, and Rule 467 a person who is accused of Contempt needs to be first served with a contempt order and then appear in court to answer the charges. Convictions can only be made when proof beyond a reasonable doubt is achieved. If it is a matter of urgency or the contempt was done in front of a judge, that person can be punished immediately. Punishment can range from the person being imprisoned for a period of less than five years or until the person complies with the order or fine. Tax Court of Canada Under Tax Court of Canada Rules of Tax Court of Canada Act, a person who is found to be in contempt may be imprisoned for a period of less than two years or fined. Similar procedures for serving an order first is also used at the Tax Court. Provincial courts Different procedures exist for different provincial courts. For example, in British Columbia, a justice of the peace can only issue a summons to an offender for contempt, which will be dealt with by a judge, even if the offence was done in the face of the justice. Hong Kong Judges from the Court of Final Appeal, High Court, District Court along with members from the various tribunals and Coroner's Court all have the power to impose immediate punishments for contempt in the face of the court, derived from legislation or through common law: Insult a judge or justice, witness or officers of the court Interrupts the proceedings of the court Interfere with the course of justice Misbehaves in court (e.g., use of mobile phone or recording devices without permission) Juror who leaves without permission of the court during proceedings Disobeying a judgment or court order Breach of undertaking Breach of a duty imposed upon a solicitor by rules of court The use of insulting or threatening language in the magistrates' courts or against a magistrate is in breach of section 99 of the Magistrates Ordinance (Cap 227) which states the magistrate can 'summarily sentence the offender to a fine at level 3 and to imprisonment for 6 months.' In addition, certain appeal boards are given the statutory authority for contempt by them (e.g., Residential Care Home, Hotel and Guesthouse Accommodation, Air Pollution Control, etc.). For contempt in front of these boards, the chairperson will certify the act of contempt to the Court of First Instance who will then proceed with a hearing and determine the punishment. England and Wales In England and Wales (a common law jurisdiction), the law on contempt is partly set out in case law (common law), and partly codified by the Contempt of Court Act 1981. Contempt may be classified as criminal or civil. The maximum penalty for criminal contempt under the 1981 Act is committal to prison for two years. Disorderly, contemptuous or insolent behavior toward the judge or magistrates while holding the court, tending to interrupt the due course of a trial or other judicial proceeding, may be prosecuted as "direct" contempt. The term "direct" means that the court itself cites the person in contempt by describing the behavior observed on the record. Direct contempt is distinctly different from indirect contempt, wherein another individual may file papers alleging contempt against a person who has willfully violated a lawful court order. There are limits to the powers of contempt created by rulings of European Court of Human Rights. 
Reporting on contempt of court, the Law Commission commented that "punishment of an advocate for what he or she says in court, whether a criticism of the judge or a prosecutor, amounts to an interference with his or her rights under article 10 of the ECHR" and that such limits must be "prescribed by law" and be "necessary in a democratic society", citing Nikula v Finland. Criminal contempt The Crown Court is a superior court according to the Senior Courts Act 1981, and Crown Courts have the power to punish contempt. The Divisional Court as part of the High Court has ruled that this power can apply in these three circumstances: Contempt "in the face of the court" (not to be taken literally; the judge does not need to see it, provided it took place within the court precincts or relates to a case currently before that court); Disobedience of a court order; and Breaches of undertakings to the court. Where it is necessary to act quickly, a judge may act to impose committal (to prison) for contempt. Where it is not necessary to be so urgent, or where indirect contempt has taken place, the Attorney General can intervene and the Crown Prosecution Service will institute criminal proceedings on his behalf before a Divisional Court of the Queen's Bench Division of the High Court of Justice of England and Wales. Magistrates' courts also have powers under the 1981 Act to order the detention of any person who "insults the court" or otherwise disrupts its proceedings until the end of the sitting. Upon contempt being admitted or proved, the (invariably) District Judge (sitting as a magistrate) may order committal to prison for a maximum of one month, impose a fine of up to £2,500, or both. It is contempt to bring an audio recording device or picture-taking device of any sort into an English court without the consent of the court. It is not contempt under section 10 of the Act for a journalist to refuse to disclose his sources, unless the court has considered the evidence available and determined that the information is "necessary in the interests of justice or national security or for the prevention of disorder or crime". Strict liability contempt Under the Contempt of Court Act it is criminal contempt to publish anything which creates a real risk that the course of justice in proceedings may be seriously impaired. It only applies where proceedings are active, and the Attorney General has issued guidance as to when he believes this to be the case, and there is also statutory guidance. The clause prevents newspapers and other media from publishing material that is too extreme or sensationalist about a criminal case until the trial or linked trials are over and the juries have given their verdicts. 
Section 2 of the Act defines and narrows the previous common law definition of contempt (which was based upon a presumption that any conduct could be treated as contempt, regardless of intent) to only those instances where an intent to cause a substantial risk of serious prejudice to the administration of justice (for example, the conduct of a trial) can be proved. Civil contempt In civil proceedings there are two main ways in which contempt is committed: Failure to attend at court despite a summons requiring attendance. In respect of the High Court, historically a writ of latitat would have been issued, but now a bench warrant is issued, authorizing the tipstaff to arrange for the arrest of the individual, and imprisonment until the date and time the court appoints to next sit. In practice a groveling letter of apology to the court is sufficient to ward
being fundamental to experimental design. In law, corroboration refers to the requirement in some jurisdictions, such as in Scots law, that any evidence adduced be backed up by at least one other source (see Corroboration in Scots law). An example of corroboration Defendant says, "It was like what he/she (a witness) said but...". This is Corroborative evidence from the defendant that the evidence the witness gave is true and correct. Corroboration is not needed in certain instances. For example, there are certain statutory exceptions. In the Education (Scotland) Act, it is only necessary to produce a register as proof of lack of attendance. No further evidence is needed. England and Wales Perjury See section 13 of the Perjury Act 1911. Speeding offences See section 89(2) of the Road Traffic Regulation Act 1984. Sexual offences See section 32 of the Criminal Justice and Public Order Act 1994. Confessions by mentally handicapped persons See section 77 of the Police and Criminal Evidence Act 1984. Evidence of children See section 34 of the Criminal Justice Act 1988. Evidence of accomplices See section 32 of the Criminal Justice
X drive his automobile into a green car. Meanwhile, Y, another witness, testifies that when he examined X's car, later that day, he noticed green paint on its fender. There can also be corroborating evidence related to a certain source, such as what makes an author think a certain way due to the evidence that was supplied by witnesses or objects. Another type of corroborating evidence comes from using the Baconian method, i.e., the method of agreement, method of difference, and method of concomitant variations. These methods are followed in experimental design. They were codified by Francis Bacon,
Canada allow a cross-examiner to exceed the scope of direct examination. Since a witness called by the opposing party is presumed to be hostile, cross-examination does permit leading questions. A witness called by a direct examiner, on the other hand, may only be treated as hostile by that examiner after being permitted to do so by the judge, at the request of that examiner and as a result of the witness being openly antagonistic and/or prejudiced against the party that called them. Affecting the outcome of jury trials Cross-examination is a key component of a trial and the topic is given substantial attention during courses on trial advocacy. The opinions of a jury or judge are often changed if cross examination casts doubt on the witness. On the other hand, a credible witness may reinforce the substance of their original statements and enhance the judge's or jury's belief. Though the closing argument is often considered the deciding moment of a trial, effective cross-examination wins trials. Attorneys anticipate hostile witness' responses during pretrial planning, and often attempt to shape the witnesses' perception of the questions to draw out information helpful to the attorney's case. Typically during an attorney's closing argument he will repeat any admissions made by witnesses that favor their case. Indeed, in the United States, cross-examination is seen as a core part of the entire adversarial system of justice, in that
it "is the principal means by which the believability of a witness and the truth of his testimony are tested." Another key component affecting a trial outcome is jury selection, in which attorneys will attempt to include jurors from whom they feel they can get a favorable response or, at the least, an unbiased, fair decision. So while there are many factors affecting the outcome of a trial, the cross-examination of a witness will often influence an open-minded, unbiased jury searching for the certainty
and organizations Christiania Bank, a former Norwegian bank Christiania Theatre in Oslo, Norway Christiania Spigerverk, a steel company which was founded in Oslo, Norway, in 1853 Christiania Norwegian Theatre, founded in 1852 under the name of Norwegian Dramatic School Christiania Avertissements-Blad, a former Norwegian newspaper, issued in Oslo, 1861–1971 Places Christiania or Kristiania, names of Oslo (1624–1924), expression (from 1925) for the part of Oslo that was
founded by King Christian IV Christiania Islands, a group of islands in the Palmer Archipelago Christiania Township, Minnesota, a township in Jackson County, U.S. Freetown Christiania (or Christiania), a self-proclaimed autonomous neighborhood in Copenhagen, Denmark Sports Christiania SK, a Norwegian Nordic skiing club, based in Oslo, Norway Other uses Christiania (brachiopod), a genus of Strophomenid brachiopods found in the Arenig geological
Charles Xavier Joseph de Franque Ville d'Abancourt (4 July 1758 – 9 September 1792) was a French statesman, minister to Louis XVI. Biography D'Abancourt was born in Douai, and was the nephew of Charles Alexandre de Calonne. He was Louis XVI's last minister of war (July 1792), and organised the defence of the Tuileries Palace during the 10 August attack. Ordered by the Legislative Assembly to send away the Swiss Guards, he refused, and was arrested for treason to the nation and sent to Orléans to be tried.
9 September 1792 at Versailles, and Fournier was unjustly charged with complicity in the crime.
Army air arm's Nakajima Ki-27s and Ki-43s, nor the much more famous Zero naval fighter in slow, turning dogfights, at higher speeds the P-40s were more than a match. AVG leader Claire Chennault trained his pilots to use the P-40's particular performance advantages. The P-40 had a higher dive speed than any Japanese fighter aircraft of the early war years, for example, and could exploit so-called "boom-and-zoom" tactics. The AVG was highly successful, and its feats were widely publicized by an active cadre of international journalists to boost sagging public morale at home. According to its official records, in just months, the Flying Tigers destroyed 297 enemy aircraft for the loss of just four of its own in air-to-air combat. In the spring of 1942, the AVG received a small number of Model E's. Each came equipped with a radio, six .50-caliber machine guns, and auxiliary bomb racks that could hold 35-lb fragmentation bombs. Chennault's armorer added bomb racks for 570-lb Russian bombs, which the Chinese had in abundance. These planes were used in the battle of the Salween River Gorge in late May 1942, which kept the Japanese from entering China from Burma and threatening Kunming. Spare parts, however, remained in short supply. "Scores of new planes...were now in India, and there they stayed—in case the Japanese decided to invade... the AVG was lucky to get a few tires and spark plugs with which to carry on its daily war." 4th Air Group China received 27 P-40E models in early 1943. These were assigned to squadrons of the 4th Air Group. United States Army Air Forces A total of 15 USAAF pursuit/fighter groups (FG), along with other pursuit/fighter squadrons and a few tactical reconnaissance (TR) units, operated the P-40 during 1941–45. As was also the case with the Bell P-39 Airacobra, many USAAF officers considered the P-40 exceptional but it was gradually replaced by the Lockheed P-38 Lightning, the Republic P-47 Thunderbolt and the North American P-51 Mustang. The bulk of the fighter operations by the USAAF in 1942–43 were borne by the P-40 and the P-39. In the Pacific, these two fighters, along with the U.S. Navy Grumman F4F Wildcat, contributed more than any other U.S. types to breaking Japanese air power during this critical period. Pacific theaters The P-40 was the main USAAF fighter aircraft in the South West Pacific and Pacific Ocean theaters during 1941–42. At Pearl Harbor and in the Philippines, USAAF P-40 squadrons suffered crippling losses on the ground and in the air to Japanese fighters such as the A6M Zero and Ki-43 Hayabusa respectively. During the attack on Pearl Harbor, most of the USAAF fighters were P-40Bs, most of which were destroyed. However, a few P-40s managed to get in the air and shoot down several Japanese aircraft, most notably by George Welch and Kenneth Taylor. In the Dutch East Indies campaign, the 17th Pursuit Squadron (Provisional), formed from USAAF pilots evacuated from the Philippines, claimed 49 Japanese aircraft destroyed, for the loss of 17 P-40s The seaplane tender USS Langley was sunk by Japanese airplanes while delivering P-40s to Tjilatjap, Java. In the Solomon Islands and New Guinea Campaigns and the air defence of Australia, improved tactics and training allowed the USAAF to better use the strengths of the P-40. 
Due to aircraft fatigue, scarcity of spare parts and replacement problems, the US Fifth Air Force and Royal Australian Air Force created a joint P-40 management and replacement pool on 30 July 1942 and many P-40s went back and forth between the air forces. The 49th Fighter Group was in action in the Pacific from the beginning of the war. Robert DeHaven scored 10 kills (of 14 overall) in the P-40 with the 49th FG. He compared the P-40 favorably with the P-38: "If you flew wisely, the P-40 was a very capable aircraft. [It] could outturn a P-38, a fact that some pilots didn't realize when they made the transition between the two aircraft. [...] The real problem with it was lack of range. As we pushed the Japanese back, P-40 pilots were slowly left out of the war. So when I moved to P-38s, an excellent aircraft, I did not [believe] that the P-40 was an inferior fighter, but because I knew the P-38 would allow us to reach the enemy. I was a fighter pilot and that was what I was supposed to do." The 8th, 15th, 18th, 24th, 49th, 343rd and 347th PGs/FGs, flew P-40s in the Pacific theaters between 1941 and 1945, with most units converting to P-38s from 1943 to 1944. In 1945, the 71st Reconnaissance Group employed them as armed forward air controllers during ground operations in the Philippines, until it received delivery of P-51s. They claimed 655 aerial victories. Contrary to conventional wisdom, with sufficient altitude, the P-40 could turn with the A6M and other Japanese fighters, using a combination of a nose-down vertical turn with a bank turn, a technique known as a low yo-yo. Robert DeHaven describes how this tactic was used in the 49th Fighter group: [Y]ou could fight a Jap on even terms, but you had to make him fight your way. He could outturn you at slow speed. You could outturn him at high speed. When you got into a turning fight with him, you dropped your nose down so you kept your airspeed up, you could outturn him. At low speed he could outroll you because of those big ailerons ... on the Zero. If your speed was up over 275, you could outroll [a Zero]. His big ailerons didn't have the strength to make high speed rolls... You could push things, too. Because ... [i]f you decided to go home, you could go home. He couldn't because you could outrun him. [...] That left you in control of the fight. China Burma India Theater USAAF and Chinese P-40 pilots performed well in this theater against many Japanese types such as the Ki-43, Nakajima Ki-44 "Tojo" and the Zero. The P-40 remained in use in the China Burma India Theater (CBI) until 1944 and was reportedly preferred over the P-51 Mustang by some US pilots flying in China. The American Volunteer Group (Flying Tigers) was integrated into the USAAF as the 23rd Fighter Group in June 1942. The unit continued to fly newer model P-40s until the end of the war, achieving a high kill-to-loss ratio. In the Battle of the Salween River Gorge of May 1942 the AVG used the P-40E model equipped with wing racks that could carry six 35-pound fragmentation bombs and Chennault's armorer developed belly racks to carry Russian 570-pound bombs, which the Chinese had in large quantity. Units arriving in the CBI after the AVG in the 10th and 14th Air Forces continued to perform well with the P-40, claiming 973 kills in the theater, or 64.8 percent of all enemy aircraft shot down. Aviation historian Carl Molesworth stated that "...the P-40 simply dominated the skies over Burma and China. 
USAAF P-40s and their pilots, originally intended for the U.S. Far East Air Force in the Philippines but diverted to Australia as a result of Japanese naval activity, were the first suitable fighter aircraft to arrive in substantial numbers. By mid-1942, the RAAF was able to obtain some USAAF replacement shipments. RAAF Kittyhawks played a crucial role in the South West Pacific theater. They fought on the front line as fighters during the critical early years of the Pacific War, and the durability and bomb-carrying abilities (1,000 lb/454 kg) of the P-40 also made it ideal for the ground attack role. For example, 75 and 76 Squadrons played a critical role during the Battle of Milne Bay, fending off Japanese aircraft and providing effective close air support for the Australian infantry, negating the initial Japanese advantage in light tanks and sea power. The RAAF units that most used Kittyhawks in the South West Pacific were 75, 76, 77, 78, 80, 82, 84 and 86 Squadrons. These squadrons saw action mostly in the New Guinea and Borneo campaigns. Late in 1945, RAAF fighter squadrons in the South West Pacific began converting to P-51Ds. 
However, Kittyhawks were in use with the RAAF until the end of the war, in Borneo. In all, the RAAF acquired 841 Kittyhawks (not counting the British-ordered examples used in North Africa), including 163 P-40E, 42 P-40K, 90 P-40M and 553 P-40N models. In addition, the RAAF ordered 67 Kittyhawks for use by No. 120 (Netherlands East Indies) Squadron (a joint Australian-Dutch unit in the South West Pacific). The P-40 was retired by the RAAF in 1947. Royal Canadian Air Force A total of 13 Royal Canadian Air Force units operated the P-40 in the North West European or Alaskan theaters. In mid-May 1940, Canadian and US officers watched comparative tests of an XP-40 and a Spitfire at RCAF Uplands, Ottawa. While the Spitfire was considered to have performed better, it was not available for use in Canada and the P-40 was ordered to meet home air defense requirements. In all, eight Home War Establishment Squadrons were equipped with the Kittyhawk: 72 Kittyhawk I, 12 Kittyhawk Ia, 15 Kittyhawk III and 35 Kittyhawk IV aircraft, for a total of 134 aircraft. These aircraft were mostly diverted from RAF Lend-Lease orders for service in Canada. The P-40 Kittyhawks were obtained in lieu of 144 P-39 Airacobras originally allocated to Canada but reassigned to the RAF. However, before any home units received the P-40, three RCAF Article XV squadrons operated Tomahawk aircraft from bases in the United Kingdom. No. 403 Squadron RCAF, a fighter unit, used the Tomahawk Mk II briefly before converting to Spitfires. Two Army Co-operation (close air support) squadrons, 400 and 414 Sqns, trained with Tomahawks before converting to Mustang Mk. I aircraft and a fighter/reconnaissance role. Of these, only No. 400 Squadron used Tomahawks operationally, conducting a number of armed sweeps over France in late 1941. RCAF pilots also flew Tomahawks or Kittyhawks with other British Commonwealth units based in North Africa, the Mediterranean, South East Asia and (in at least one case) the South West Pacific. In 1942, the Imperial Japanese Navy occupied two islands, Attu and Kiska, in the Aleutians, off Alaska. RCAF home defense P-40 squadrons saw combat over the Aleutians, assisting the USAAF. The RCAF initially sent 111 Squadron, flying the Kittyhawk I, to the US base on Adak Island. During the drawn-out campaign, 12 Canadian Kittyhawks operated on a rotational basis from a new, more advanced base on Amchitka, southeast of Kiska. 14 and 111 Sqns took "turn-about" at the base. During a major attack on Japanese positions at Kiska on 25 September 1942, Squadron Leader Ken Boomer shot down a Nakajima A6M2-N ("Rufe") seaplane. The RCAF also purchased 12 P-40Ks directly from the USAAF while in the Aleutians. After the Japanese threat diminished, these two RCAF squadrons returned to Canada and eventually transferred to England without their Kittyhawks. In January 1943, a further Article XV unit, 430 Squadron, was formed at RAF Hartford Bridge, England, and trained on the obsolete Tomahawk IIA. The squadron converted to the Mustang I before commencing operations in mid-1943. In early 1945, pilots from No. 133 Squadron RCAF, operating the P-40N out of RCAF Patricia Bay (Victoria, British Columbia), intercepted and destroyed two Japanese balloon-bombs, which were designed to cause wildfires on the North American mainland. On 21 February, Pilot Officer E. E. Maxwell shot down a balloon, which landed on Sumas Mountain in Washington State. On 10 March, Pilot Officer J. O. Patten destroyed a balloon near Saltspring Island, British Columbia. 
The last interception took place on 20 April 1945 when Pilot Officer P.V. Brodeur from 135 Squadron out of Abbotsford, British Columbia shot down a balloon over Vedder Mountain. The RCAF units that operated P-40s were, in order of conversion: Article XV squadrons serving in the UK under direct command and control of the RAF, with RAF owned aircraft. 403 Squadron (Tomahawk IIA and IIB, March 1941) 400 Squadron (Tomahawk I, IIA and IIB, April 1941 – September 1942) 414 Squadron (Tomahawk I, IIA and IIB, August 1941 – September 1942) 430 Squadron (Tomahawk IIA and IIB, January 1943 – February 1943) Operational Squadrons of the Home War Establishment (HWE) (Based in Canada) 111 Squadron (Kittyhawk I, IV, November 1941 – December 1943 and P-40K, September 1942 – July 1943), 118 Squadron (Kittyhawk I, November 1941 – October 1943), 14 Squadron (Kittyhawk I, January 1942 – September 1943), 132 Squadron (Kittyhawk IA & III, April 1942 – September 1944), 130 Squadron (Kittyhawk I, May 1942 – October 1942), 163 Squadron (Kittyhawk I & III, October 1943 – March 1944), 133 Squadron (Kittyhawk I, March 1944 – July 1945) and 135 Squadron (Kittyhawk IV, May 1944 – September 1945). Royal New Zealand Air Force Some Royal New Zealand Air Force (RNZAF) pilots and New Zealanders in other air forces flew British P-40s while serving with DAF squadrons in North Africa and Italy, including the ace Jerry Westenra. A total of 301 P-40s were allocated to the RNZAF under Lend-Lease, for use in the Pacific Theater, although four of these were lost in transit. The aircraft equipped 14 Squadron, 15 Squadron, 16 Squadron, 17 Squadron, 18 Squadron, 19 Squadron and 20 Squadron. RNZAF P-40 squadrons were successful in air combat against the Japanese between 1942 and 1944. Their pilots claimed 100 aerial victories in P-40s, whilst losing 20 aircraft in combat Geoff Fisken, the highest scoring British Commonwealth ace in the Pacific, flew P-40s with 15 Squadron, although half of his victories were claimed with the Brewster Buffalo. The overwhelming majority of RNZAF P-40 victories were scored against Japanese fighters, mostly Zeroes. Other victories included Aichi D3A "Val" dive bombers. The only confirmed twin engine claim, a Ki-21 "Sally" (misidentified as a G4M "Betty") fell to Fisken in July 1943. From late 1943 and 1944, RNZAF P-40s were increasingly used against ground targets, including the innovative use of naval depth charges as improvised high-capacity bombs. The last front line RNZAF P-40s were replaced by Vought F4U Corsairs in 1944. The P-40s were relegated to use as advanced pilot trainers. The remaining RNZAF P-40s, excluding the 20 shot down and 154 written off, were mostly scrapped at Rukuhia in 1948. Soviet Union The Soviet Voyenno-Vozdushnye Sily (VVS; "Military Air Forces") and Morskaya Aviatsiya (MA; "Naval Air Service") also referred to P-40s as "Tomahawks" and "Kittyhawks". In fact, the Curtiss P-40 Tomahawk / Kittyhawk was the first Allied fighter supplied to the USSR under the Lend-Lease agreement. The USSR received 247 P-40B/Cs (equivalent to the Tomahawk IIA/B in RAF service) and 2,178 P-40E, -K, -L, and -N models between 1941 and 1944. The Tomahawks were shipped from Great Britain and directly from the US, many of them arriving incomplete,
the data obtained, Curtiss moved the glycol coolant radiator forward to the chin; its new air scoop also accommodated the oil cooler air intake. Other improvements to the landing gear doors and the exhaust manifold combined to give performance that was satisfactory to the USAAC. Without beneficial tail winds, Kelsey flew the XP-40 from Wright Field back to Curtiss's plant in Buffalo at an average speed of . Further tests in December 1939 proved the fighter could reach . An unusual production feature was a special truck rig to speed delivery at the main Curtiss plant in Buffalo, New York. The rig moved the newly built P-40s in two main components, the main wing and the fuselage, the eight miles from the plant to the airport where the two units were mated for flight and delivery. Performance characteristics The P-40 was conceived as a pursuit aircraft and was agile at low and medium altitudes but suffered from a lack of power at higher altitudes. At medium and high speeds it was one of the tightest-turning early monoplane designs of the war, and it could out turn most opponents it faced in North Africa and the Russian Front. In the Pacific Theater it was out-turned at lower speeds by the lightweight fighters A6M Zero and Nakajima Ki-43 "Oscar" which lacked the P-40's structural strength for high-speed hard turns. The American Volunteer Group Commander Claire Chennault advised against prolonged dog-fighting with the Japanese fighters due to speed reduction favoring the Japanese. Allison's V-1710 engines produced at sea level and . This was not powerful compared with contemporary fighters, and the early P-40 variants' top speeds were only average. The single-stage, single-speed supercharger meant that the P-40 was a poor high-altitude fighter. Later versions, with Allisons or more powerful 1,400 hp Packard Merlin engines were more capable. Climb performance was fair to poor, depending on the subtype. Dive acceleration was good and dive speed was excellent. The highest-scoring P-40 ace, Clive Caldwell (RAAF), who claimed 22 of his 28½ kills in the type, said that the P-40 had "almost no vices", although "it was a little difficult to control in terminal velocity". The P-40 had one of the fastest maximum dive speeds of any fighter of the early war period, and good high-speed handling. The P-40 tolerated harsh conditions and a variety of climates. Its semi-modular design was easy to maintain in the field. It lacked innovations such as boosted ailerons or automatic leading edge slats, but its strong structure included a five-spar wing, which enabled P-40s to pull high-G turns and survive some midair collisions. Intentional ramming attacks against enemy aircraft were occasionally recorded as victories by the Desert Air Force and Soviet Air Forces. Caldwell said P-40s "would take a tremendous amount of punishment, violent aerobatics as well as enemy action". Operational range was good by early war standards and was almost double that of the Supermarine Spitfire or Messerschmitt Bf 109, although inferior to the Mitsubishi A6M Zero, Nakajima Ki-43 and Lockheed P-38 Lightning. Caldwell found the P-40C Tomahawk's armament of two .50 in (12.7 mm) Browning AN/M2 "light-barrel" dorsal nose-mount synchronized machine guns and two .303 Browning machine guns in each wing to be inadequate. This was improved with the P-40D (Kittyhawk I) which abandoned the synchronized gun mounts and instead had two .50 in (12.7 mm) guns in each wing, although Caldwell still preferred the earlier Tomahawk in other respects. 
The D had armor around the engine and the cockpit, which enabled it to withstand considerable damage. This allowed Allied pilots in Asia and the Pacific to attack Japanese fighters head on, rather than try to out-turn and out-climb their opponents. Late-model P-40s were well armored. Visibility was adequate, although hampered by a complex windscreen frame, and completely blocked to the rear in early models by a raised turtledeck. Poor ground visibility and relatively narrow landing gear track caused many losses on the ground. Curtiss tested a follow-on design, the Curtiss XP-46, but it offered little improvement over newer P-40 models and was cancelled. Operational history In April 1939, the U.S. Army Air Corps, having witnessed the new, sleek, high-speed, in-line-engined fighters of the European air forces, placed the largest fighter order it had ever made for 524 P-40s. French Air Force An early order came from the French Armée de l'Air, which was already operating P-36s. The Armée de l'Air ordered 100 (later the order was increased to 230) as the Hawk 81A-1 but the French were defeated before the aircraft had left the factory and the aircraft were diverted to British and Commonwealth service (as the Tomahawk I), in some cases complete with metric flight instruments. In late 1942, as French forces in North Africa split from the Vichy government to side with the Allies, U.S. forces transferred P-40Fs from 33rd FG to GC II/5, a squadron that was historically associated with the Lafayette Escadrille. GC II/5 used its P-40Fs and Ls in combat in Tunisia and later for patrol duty off the Mediterranean coast until mid-1944, when they were replaced by Republic P-47D Thunderbolts. British Commonwealth Deployment In all, 18 Royal Air Force (RAF) squadrons, four Royal Canadian Air Force (RCAF), three South African Air Force (SAAF) and two Royal Australian Air Force (RAAF) squadrons serving with RAF formations, used P-40s. The first units to convert were Hawker Hurricane squadrons of the Desert Air Force (DAF), in early 1941. The first Tomahawks delivered came without armor, bulletproof windscreens or self-sealing fuel tanks, which were installed in subsequent shipments. Pilots used to British fighters sometimes found it difficult to adapt to the P-40's rear-folding landing gear, which was more prone to collapse than the lateral-folding landing gear of the Hawker Hurricane or Supermarine Spitfire. In contrast to the "three-point landing" commonly employed with British types, P-40 pilots were obliged to use a "wheels landing": a longer, low angle approach that touched down on the main wheels first. Testing showed the aircraft did not have the performance needed for use in Northwest Europe at high-altitude, due to the service ceiling limitation. Spitfires used in the theater operated at heights around , while the P-40's Allison engine, with its single-stage, low altitude rated supercharger, worked best at or lower. When the Tomahawk was used by Allied units based in the UK from February 1941, this limitation relegated the Tomahawk to low-level reconnaissance with RAF Army Cooperation Command and only No. 403 Squadron RCAF was used in the fighter role for a mere 29 sorties, before being replaced by Spitfires. Air Ministry deemed the P-40 unsuitable for the theater. 
From mid-1942, UK-based P-40 squadrons re-equipped with aircraft such as Mustangs. The Tomahawk was superseded in North Africa by the more powerful Kittyhawk ("D"-mark onwards) types from early 1942, though some Tomahawks remained in service until 1943. Kittyhawks included many improvements and were the DAF's air superiority fighter for the critical first few months of 1942, until "tropicalised" Spitfires were available. In 2012, the virtually intact remains of a Kittyhawk were found; it had run out of fuel in the Egyptian Sahara in June 1942. DAF units received nearly 330 Packard V-1650 Merlin-powered P-40Fs, called Kittyhawk IIs, most of which went to the USAAF, and the majority of the 700 "lightweight" L models, also powered by the Packard Merlin, in which the armament was reduced to four .50 in (12.7 mm) Brownings (Kittyhawk IIA). The DAF also received some 21 of the later P-40K and the majority of the 600 P-40Ms built; these were known as Kittyhawk IIIs. The "lightweight" P-40Ns (Kittyhawk IV) arrived from early 1943 and were used mostly as fighter-bombers. From July 1942 until mid-1943, elements of the U.S. 57th Fighter Group (57th FG) were attached to DAF P-40 units. The British government also donated 23 P-40s to the Soviet Union. Combat performance Tomahawks and Kittyhawks bore the brunt of Luftwaffe and Regia Aeronautica fighter attacks during the North African campaign. The P-40s were considered superior to the Hurricane, which they replaced as the primary fighter of the Desert Air Force. The P-40 initially proved quite effective against Axis aircraft and contributed to a slight shift of momentum in the Allies' favor. The gradual replacement of Hurricanes by the Tomahawks and Kittyhawks led to the Luftwaffe accelerating retirement of the Bf 109E and introducing the newer Bf 109F; these were to be flown by the veteran pilots of elite Luftwaffe units, such as Jagdgeschwader 27 (JG27), in North Africa. The P-40 was generally considered roughly equal or slightly superior to the Bf 109 at low altitude but inferior at high altitude, particularly against the Bf 109F. Most air combat in North Africa took place well below , negating much of the Bf 109's superiority. The P-40 usually had an advantage over the Bf 109 in horizontal maneuvers (turning), dive speed and structural strength, was roughly equal in firepower but was slightly inferior in speed and outclassed in rate of climb and operational ceiling. The P-40 was generally superior to early Italian fighter types, such as the Fiat G.50 Freccia and the Macchi C.200. Its performance against the Macchi C.202 Folgore elicited varying opinions. Some observers considered the Macchi C.202 superior. Caldwell, who scored victories against them in his P-40, felt that the Folgore was superior to the P-40 and the Bf 109 except that its armament of only two or four machine guns was inadequate. Other observers considered the two equally matched or favored the Folgore in aerobatic performance, such as turning radius. Aviation historian Walter J. Boyne wrote that over Africa, the P-40 and the Folgore were "equivalent". Against its lack of high-altitude performance, the P-40 was considered to be a stable gun platform, and its rugged construction meant that it was able to operate from rough front line airstrips with a good rate of serviceability. The earliest victory claims by P-40 pilots include Vichy French aircraft, during the 1941 Syria-Lebanon campaign, against Dewoitine D.520s, a type often considered to be the best French fighter of the war. 
The P-40 was deadly against Axis bombers in the theater, as well as against the Bf 110 twin-engine fighter. In June 1941, Caldwell, of No. 250 Squadron RAF in Egypt, flying as F/O Jack Hamlyn's wingman, recorded in his log book that he was involved in the first air combat victory for the P-40. This was a CANT Z.1007 bomber on 6 June. The claim was not officially recognized, as the crash of the CANT was not witnessed. The first official victory occurred on 8 June, when Hamlyn and Flt Sgt Tom Paxton destroyed a CANT Z.1007 from 211a Squadriglia of the Regia Aeronautica, over Alexandria. Several days later, the Tomahawk was in action over Syria with No. 3 Squadron RAAF, which claimed 19 aerial victories over Vichy French aircraft during June and July 1941, for the loss of one P-40 (and one lost to ground fire). Some DAF units initially failed to use the P-40's strengths or used outdated defensive tactics such as the Lufbery circle. The superior climb rate of the Bf 109 enabled fast, swooping attacks, neutralizing the advantages offered by conventional defensive tactics. Various new formations were tried by Tomahawk units from 1941 to 1942, including "fluid pairs" (similar to the German rotte); one or two "weavers" at the back of a squadron in formation and whole squadrons bobbing and weaving in loose formations. Werner Schröer, who was credited with destroying 114 Allied aircraft in only 197 combat missions, referred to the latter formation as "bunches of grapes", because he found them so easy to pick off. The leading German expert in North Africa, Hans-Joachim Marseille, claimed as many as 101 P-40s during his career. From 26 May 1942, Kittyhawk units operated primarily as fighter-bomber units, giving rise to the nickname "Kittybomber". As a result of this change in role and because DAF P-40 squadrons were frequently used in bomber escort and close air support missions, they suffered relatively high losses; many Desert Air Force P-40 pilots were caught flying low and slow by marauding Bf 109s. Caldwell believed that Operational Training Units did not properly prepare pilots for air combat in the P-40 and as a commander, stressed the importance of training novice pilots properly. Competent pilots who took advantage of the P-40's strengths were effective against the best of the Luftwaffe and Regia Aeronautica. In August 1941, Caldwell was attacked by two Bf 109s, one of them piloted by German ace Werner Schröer. Although Caldwell was wounded three times and his Tomahawk was hit by more than 100 bullets and five 20 mm cannon shells, Caldwell shot down Schröer's wingman and returned to base. Some sources also claim that in December 1941, Caldwell killed a prominent German Experte, Erbo von Kageneck (69 kills), while flying a P-40. Caldwell's victories in North Africa included 10 Bf 109s and two Macchi C.202s. Billy Drake of 112 Squadron was the leading British P-40 ace with 13 victories. James "Stocky" Edwards (RCAF), who achieved 12 kills in the P-40 in North Africa, shot down German ace Otto Schulz (51 kills) while flying a Kittyhawk with No. 260 Squadron RAF. Caldwell, Drake, Edwards and Nicky Barr were among at least a dozen pilots who achieved ace status twice over while flying the P-40. A total of 46 British Commonwealth pilots became aces in P-40s, including seven double aces. Chinese Air Force Flying Tigers (American Volunteer Group) The Flying Tigers, known officially as the 1st American Volunteer Group (AVG), were a unit of the Chinese Air Force, recruited from U.S. 
Navy, Marines and Army aviators. Chennault received crated Model Bs which his airmen assembled in Burma at the end of 1941, adding self-sealing fuel tanks and a second pair of wing guns, such that the aircraft became a hybrid of B and C models. These were not well-liked by their pilots: they lacked drop tanks for extra range, and there were no bomb racks on the wings. Chennault considered the liquid-cooled engine vulnerable in combat because a single bullet through the coolant system would cause the engine to overheat in minutes. The Tomahawks also had no radios, so the AVG improvised by installing a fragile radio transceiver, the RCA-7-H, which had been built for a Piper Cub. Because the plane had a single-stage low-altitude supercharger, its effective ceiling was about . The most critical problem was the lack of spare parts; the only source was from damaged aircraft. The planes were viewed as cast-offs that no one else wanted, dangerous and difficult to fly. But the pilots did appreciate some of the planes' features. There were two heavy sheets of steel behind the pilot's head and back that offered solid protection, and overall the planes were ruggedly constructed. Compared to opposing Japanese fighters, the P-40B's strengths were that it was sturdy, well armed, faster in a dive and possessed an excellent rate of roll. While the P-40s could not match the maneuverability of the Japanese Army air arm's Nakajima Ki-27s and Ki-43s, nor the much more famous Zero naval fighter in slow, turning dogfights, at higher speeds the P-40s were more than a match. AVG leader Claire Chennault trained his pilots to use the P-40's particular performance advantages. The P-40 had a higher dive speed than any Japanese fighter aircraft of the early war years, for example, and could exploit so-called "boom-and-zoom" tactics. The AVG was highly successful, and its feats were widely publicized by an active cadre of international journalists to boost sagging public morale at home. According to its official records, in just months, the Flying Tigers destroyed 297 enemy aircraft for the loss of just four of its own in air-to-air combat. In the spring of 1942, the AVG received a small number of Model E's. Each came equipped with a radio, six .50-caliber machine guns, and auxiliary bomb racks that could hold 35-lb fragmentation bombs. Chennault's armorer added bomb racks for 570-lb Russian bombs, which the Chinese had in abundance. These planes were used in the battle of the Salween River Gorge in late May 1942, which kept the Japanese from entering China from Burma and threatening Kunming. Spare parts, however, remained in short supply. "Scores of new planes...were now in India, and there they stayed—in case the Japanese decided to invade... the AVG was lucky to get a few tires and spark plugs with which to carry on its daily war." 4th Air Group China received 27 P-40E models in early 1943. These were assigned to squadrons of the 4th Air Group. United States Army Air Forces A total of 15 USAAF pursuit/fighter groups (FG), along with other pursuit/fighter squadrons and a few tactical reconnaissance (TR) units, operated the P-40 during 1941–45. As was also the case with the Bell P-39 Airacobra, many USAAF officers considered the P-40 exceptional but it was gradually replaced by the Lockheed P-38 Lightning, the Republic P-47 Thunderbolt and the North American P-51 Mustang. The bulk of the fighter operations by the USAAF in 1942–43 were borne by the P-40 and the P-39. 
In the Pacific, these two fighters, along with the U.S. Navy Grumman F4F Wildcat, contributed more than any other U.S. types to breaking Japanese air power during this critical period. Pacific theaters The P-40 was the main USAAF fighter aircraft in the South West Pacific and Pacific Ocean theaters during 1941–42. At Pearl Harbor and in the Philippines, USAAF P-40 squadrons suffered crippling losses on the ground and in the air to Japanese fighters such as the A6M Zero and Ki-43 Hayabusa, respectively. During the attack on Pearl Harbor, most of the USAAF fighters were P-40Bs, most of which were destroyed. However, a few P-40s managed to get in the air and shoot down several Japanese aircraft, most notably those flown by George Welch and Kenneth Taylor. In the Dutch East Indies campaign, the 17th Pursuit Squadron (Provisional), formed from USAAF pilots evacuated from the Philippines, claimed 49 Japanese aircraft destroyed, for the loss of 17 P-40s. The seaplane tender USS Langley was sunk by Japanese airplanes while delivering P-40s to Tjilatjap, Java. In the Solomon Islands and New Guinea Campaigns and the air defence of Australia, improved tactics and training allowed the USAAF to better use the strengths of the P-40. Due to aircraft fatigue, scarcity of spare parts and replacement problems, the US Fifth Air Force and Royal Australian Air Force created a joint P-40 management and replacement pool on 30 July 1942 and many P-40s went back and forth between the air forces. The 49th Fighter Group was in action in the Pacific from the beginning of the war. Robert DeHaven scored 10 kills (of 14 overall) in the P-40 with the 49th FG. He compared the P-40 favorably with the P-38: "If you flew wisely, the P-40 was a very capable aircraft. [It] could outturn a P-38, a fact that some pilots didn't realize when they made the transition between the two aircraft. [...] The real problem with it was lack of range. As we pushed the Japanese back, P-40 pilots were slowly left out of the war. So when I moved to P-38s, an excellent aircraft, I did not [believe] that the P-40 was an inferior fighter, but because I knew the P-38 would allow us to reach the enemy. I was a fighter pilot and that was what I was supposed to do." The 8th, 15th, 18th, 24th, 49th, 343rd and 347th PGs/FGs flew P-40s in the Pacific theaters between 1941 and 1945, with most units converting to P-38s from 1943 to 1944. In 1945, the 71st Reconnaissance Group employed them as armed forward air controllers during ground operations in the Philippines, until it received delivery of P-51s. They claimed 655 aerial victories. Contrary to conventional wisdom, with sufficient altitude, the P-40 could turn with the A6M and other Japanese fighters, using a combination of a nose-down vertical turn with a bank turn, a technique known as a low yo-yo. Robert DeHaven describes how this tactic was used in the 49th Fighter Group: [Y]ou could fight a Jap on even terms, but you had to make him fight your way. He could outturn you at slow speed. You could outturn him at high speed. When you got into a turning fight with him, you dropped your nose down so you kept your airspeed up, you could outturn him. At low speed he could outroll you because of those big ailerons ... on the Zero. If your speed was up over 275, you could outroll [a Zero]. His big ailerons didn't have the strength to make high speed rolls... You could push things, too. Because ... [i]f you decided to go home, you could go home. He couldn't because you could outrun him. [...] 
That left you in control of the fight. China Burma India Theater USAAF and Chinese P-40 pilots performed well in this theater against many Japanese types such as the Ki-43, Nakajima Ki-44 "Tojo" and the Zero. The P-40 remained in use in the China Burma India Theater (CBI) until 1944 and was reportedly preferred over the P-51 Mustang by some US pilots flying in China. The American Volunteer Group (Flying Tigers) was integrated into the USAAF as the 23rd Fighter Group in June 1942. The unit continued to fly newer model P-40s until the end of the war, achieving a high kill-to-loss ratio. In the Battle of the Salween River Gorge of May 1942 the AVG used the P-40E model equipped with wing racks that could carry six 35-pound fragmentation bombs and Chennault's armorer developed belly racks to carry Russian 570-pound bombs, which the Chinese had in large quantity. Units arriving in the CBI after the AVG in the 10th and 14th Air Forces continued to perform well with the P-40, claiming 973 kills in the theater, or 64.8 percent of all enemy aircraft shot down. Aviation historian Carl Molesworth stated that "...the P-40 simply dominated the skies over Burma and China. They were able to establish air superiority over free China, northern Burma and the Assam valley of India in 1942, and they never relinquished it." The 3rd, 5th, 51st and 80th FGs, along with the 10th TRS, operated the P-40 in the CBI. CBI P-40 pilots used the aircraft very effectively as a fighter-bomber. The 80th Fighter Group in particular used its so-called B-40 (P-40s carrying 1,000-pound high-explosive bombs) to destroy bridges and kill bridge repair crews, sometimes demolishing their target with one bomb. At least 40 U.S. pilots reached ace status while flying the P-40 in the CBI. Europe and Mediterranean theaters On 14 August 1942, the first confirmed victory by a USAAF unit over a German aircraft in World War II was achieved by a P-40C pilot. 2nd Lt Joseph D. Shaffer, of the 33rd Fighter Squadron, intercepted a Focke-Wulf Fw 200C-3 maritime patrol aircraft that overflew his base at Reykjavík, Iceland. Shaffer damaged the Fw 200, which was finished off by a P-38F. Warhawks were used extensively in the Mediterranean and Middle East theatre of World War II by USAAF units, including the 33rd, 57th, 58th, 79th, 324th and 325th Fighter Groups. While the P-40 suffered heavy losses in the MTO, many USAAF P-40 units achieved high kill-to-loss ratios against Axis aircraft; the 324th FG scored better than a 2:1 ratio in the MTO. In all, 23 U.S. pilots became aces in the MTO on the P-40, most of them during the first half of 1943. P-40 pilots from the 57th FG were the first USAAF fliers to see action in the MTO, while attached to Desert Air Force Kittyhawk squadrons, from July 1942. The 57th was also the main unit involved in the "Palm Sunday Massacre", on 18 April 1943. Decoded Ultra signals revealed a plan for a large formation of Junkers Ju 52 transports to cross the Mediterranean, escorted by German and Italian fighters. Between 1630 and 1830 hours, all wings of the group were engaged in an intensive effort against the enemy air transports. Of the four Kittyhawk wings, three had left the patrol area before a convoy of a 100+ enemy transports were sighted by 57th FG, which tallied 74 aircraft destroyed. The group was last in the area, and intercepted the Ju 52s escorted by large numbers of Bf 109s, Bf 110s and Macchi C.202s. The group claimed 58 Ju 52s, 14 Bf 109s and two Bf 110s destroyed, with several probables and damaged. 
Between 20 and 40 of the Axis aircraft landed on the beaches around Cap Bon to avoid being shot down; six Allied fighters were lost, five of them P-40s. On 22 April, in Operation Flax, a similar force of P-40s attacked a formation of 14 Messerschmitt Me 323 Gigant ("Giant") six-engine transports, covered by seven Bf 109s from II./JG 27. All the transports were shot down, for a loss of three P-40s. The 57th FG was equipped with the Curtiss fighter until early 1944, during which time it was credited with at least 140 air-to-air kills. On 23 February 1943, during Operation Torch, the pilots of the 58th FG flew 75 P-40Ls off the aircraft carrier to the newly captured Vichy French airfield, Cazas, near Casablanca, in French Morocco. The aircraft supplied the 33rd FG and the pilots were reassigned. The 325th FG (known as the "Checkertail Clan") flew P-40s in the MTO and was credited with at least 133 air-to-air kills from April to October 1943, of which 95 were Bf 109s and 26 were Macchi C.202s, for the loss of 17 P-40s in combat. The 325th FG historian Carol Cathcart wrote that Lt. Robert Sederberg assisted a comrade being attacked by five Bf 109s, destroyed at least one German aircraft, and may have shot down as many as five. Sederberg was shot down and became a prisoner of war. A famous African-American unit, the 99th FS, better known as the "Tuskegee Airmen" or "Redtails", flew P-40s in stateside training and for their initial eight months in the MTO. On 9 June 1943, they became the first African-American fighter pilots to engage enemy aircraft, over Pantelleria, Italy. A single Focke-Wulf Fw 190 was reported damaged by Lieutenant Willie Ashley Jr. On 2 July the squadron claimed its first verified kill: an Fw 190 destroyed by Captain Charles Hall. The 99th continued to score with P-40s until February 1944, when they were assigned P-39s and P-51 Mustangs. The much-lightened P-40L was most heavily used in the MTO, primarily by U.S. pilots. Many US pilots stripped down their P-40s even further to improve performance, often removing two or more of the wing guns from the P-40F/L. Royal Australian Air Force The Kittyhawk was the main fighter used by the RAAF in World War II, in greater numbers than the Spitfire. Two RAAF squadrons serving with the Desert Air Force, No. 3 and No. 450 Squadrons, were the first Australian units to be assigned P-40s. Other RAAF pilots served with RAF or SAAF P-40 squadrons in the theater. Many RAAF pilots achieved high scores in the P-40. At least five reached "double ace" status: Clive Caldwell, Nicky Barr, John Waddy, Bob Whittle (11 kills each) and Bobby Gibbes (10 kills) in the Middle East, North African and/or New Guinea campaigns. In all, 18 RAAF pilots became aces while flying P-40s. Nicky Barr, like many Australian pilots, considered the P-40 a reliable mount: "The Kittyhawk became, to me, a friend. It was quite capable of getting you out of trouble more often than not. It was a real warhorse." At the same time as the heaviest fighting in North Africa, the Pacific War was also in its early stages, and RAAF units in Australia were completely lacking in suitable fighter aircraft. Spitfire production was being absorbed by the war in Europe; P-38s were trialled, but were difficult to obtain; Mustangs had not yet reached squadrons anywhere, and Australia's tiny and inexperienced aircraft industry was geared towards larger aircraft.
a confession of faith that Pope Paul VI published with the motu proprio Solemni hac liturgia of 30 June 1968. Pope Paul VI spoke of it as "a creed which, without being strictly speaking a dogmatic definition, repeats in substance, with some developments called for by the spiritual condition of our time, the creed of Nicea, the creed of the immortal tradition of the holy Church of God." Christian confessions of faith Protestant denominations are usually associated with confessions of faith, which are similar to creeds but usually longer. The Sixty-seven Articles of the Swiss reformers, drawn up by Zwingli in 1523; The Schleitheim Confession of the Anabaptist Swiss Brethren in 1527; The Augsburg Confession of 1530, the work of Martin Luther and Philip Melanchthon, which marked the breach with Rome; The Tetrapolitan Confession of the German Reformed Church, 1530; The Smalcald Articles of Martin Luther, 1537 The Guanabara Confession of Faith, 1558; The Gallic Confession, 1559; The Scots Confession, drawn up by John Knox in 1560; The Belgic Confession drawn up by Guido de Bres in 1561; The Thirty-nine Articles of the Church of England in 1562; The Formula of Concord and its Epitome in 1577; The Irish Articles in 1615; The Remonstrant Confession in 1621; The Baptist Confession of Faith in 1644 (upheld by Reformed Baptists) The Westminster Confession of Faith in 1647 was the work of the Westminster Assembly of Divines and has commended itself to the Presbyterian Churches of all English-speaking peoples, and also in other languages. The Savoy Declaration of 1658 which was a modification of the Westminster Confession to suit Congregationalist polity; The Standard Confession in 1660 (upheld by General Baptists); The Orthodox Creed in 1678 (upheld by General Baptists); The Baptist Confession in 1689 (upheld by Reformed Baptists); The Confession of Faith of the Calvinistic Methodists (Presbyterians) of Wales of 1823; The Chicago-Lambeth Quadrilateral of the Anglican Communion in 1870; The Assemblies of God Statement of Fundamental Truths in 1916; and The Confession of Faith of the United Methodist Church, adopted in 1968 The Church of Jesus Christ of Latter-day Saints Within the sects of the Latter Day Saint movement, the Articles of Faith are a list composed by Joseph Smith as part of an 1842 letter sent to "Long" John Wentworth, editor of the Chicago Democrat. It is canonized with the Bible, the Book of Mormon, the Doctrine & Covenants and Pearl of Great Price, as part of the standard works of The Church of Jesus Christ of Latter-day Saints. Controversies In the Swiss Reformed Churches, there was a quarrel about the Apostles' Creed in the mid-19th century. As a result, most cantonal reformed churches stopped prescribing any particular creed. In 2005, Bishop John Shelby Spong, retired Episcopal Bishop of Newark, has written that dogmas and creeds were merely "a stage in our development" and "part of our religious childhood." In his book, Sins of the Scripture, Spong wrote that "Jesus seemed to understand that no one can finally fit the holy God into his or her creeds or doctrines. That is idolatry." Islamic creed In Islamic theology, the term most closely corresponding to "creed" is ʿaqīdah (). The first such creed was written as "a short answer to the pressing heresies of the time" is known as Al-Fiqh Al-Akbar and ascribed to Abū Ḥanīfa. Two well known creeds were the Fiqh Akbar II "representative" of the al-Ash'ari, and Fiqh Akbar III, "representative" of the Ash-Shafi'i. 
Iman () in Islamic theology denotes a believer's religious faith. Its most simple definition is the belief in the six articles of faith, known as arkān al-īmān. Belief in God Belief in the Angels Belief in Divine Books Belief in the Prophets Belief in the Day of Judgment Belief in God's predestination See also Credo Mission statement The American's Creed – a 1918 statement about Americans' belief in democracy The Five Ks Pesher References Further reading Christian Confessions: a Historical Introduction, [by] Ted A. Campbell. First ed. xxi, 336 p. Louisville, Ky.: Westminster/John Knox Press, 1996. Creeds and Confessions of Faith in the Christian Tradition. Edited by Jaroslav Pelikan and Valerie Hotchkiss. Yale University Press 2003. Creeds in the Making: a Short Introduction to the History of Christian Doctrine, [by] Alan Richardson. Reissued. London: S.C.M. Press, 1979, cop. 1935. 128 p. Ecumenical Creeds and Reformed Confessions. Grand Rapids, Mich.: C.R.C. [i.e. Christian Reformed Church] Publications, 1987. 148 p. The Three Forms of Unity (Heidelberg Catechism, Belgic Confession, [and the] Canons of Dordrecht), and the Ecumenical Creeds (the Apostles' Creed, the Athanasian Creed, [and the] Creed of Chalcedon). Reprinted [ed.]. Mission Committee of the Protestant Reformed Churches in America, 1991. 58 p. Without
part of liturgy. The term is anglicized from Latin credo "I believe", the incipit of the Latin texts of the Apostles' Creed and the Nicene Creed. A creed is sometimes referred to as a symbol in a specialized meaning of that word (which was first introduced to Late Middle English in this sense), after Latin symbolum "creed" (as in Symbolum Apostolorum = the "Apostles' Creed", a shorter version of the traditional Nicene Creed), after Greek symbolon "token, watchword". Some longer statements of faith in the Protestant tradition are instead called "confessions of faith", or simply "confession" (as in e.g. Helvetic Confession). Within Evangelical Protestantism, the terms "doctrinal statement" or "doctrinal basis" tend to be preferred. Doctrinal statements may include positions on lectionary and translations of the Bible, particularly in fundamentalist churches of the King James Only movement. The term creed is sometimes extended to comparable concepts in non-Christian theologies; thus the Islamic concept of ʿaqīdah (literally "bond, tie") is often rendered as "creed". Jewish creed Whether Judaism is creedal in character has generated some controversy. Rabbi Milton Steinberg wrote that "By its nature Judaism is averse to formal creeds which of necessity limit and restrain thought" and asserted in his book Basic Judaism (1947) that "Judaism has never arrived at a creed." The 1976 Centenary Platform of the Central Conference of American Rabbis, an organization of Reform rabbis, agrees that "Judaism emphasizes action rather than creed as the primary expression of a religious life." Others, however, characterize the Shema Yisrael as a creedal statement in strict monotheism embodied in a single prayer: "Hear O Israel, the Lord is our God, the Lord is One" (; transliterated Shema Yisrael Adonai Eloheinu Adonai Echad). A notable statement of Jewish principles of faith was drawn up by Maimonides as his 13 Principles of Faith. Christianity The first confession of faith established within Christianity was the Nicene Creed by the Early Church in 325. It was established to summarize the foundations of the Christian faith and to protect believers from false doctrines. Various Christian denominations from Protestantism and Evangelical Christianity have published confession of faith as a basis for fellowship among churches of the same denomination. Many Christian denominations did not try to be too exhaustive in their confessions of faith and thus allow different opinions on some secondary topics.In addition, some churches are open to revising their confession of faith when necessary. Moreover, Baptist "confessions of faith" have often had a clause such as this from the First London Baptist Confession (Revised edition, 1646): Excommunication Excommunication is a practice of the Bible to exclude members who do not respect the Church's confession of faith and do not want to repent. It is practiced by all Christian denominations and is intended to protect against the consequences of heretics' teachings and apostasy. Christians without creeds Some Christian denominations do not profess a creed. This stance is often referred to as "non-creedalism". The Religious Society of Friends, also known as the Quakers, consider that they have no need for creedal formulations of faith. The Church of the Brethren and other Schwarzenau Brethren churches also espouse no creed, referring to the New Testament, as their "rule of faith and practice." 
Jehovah's Witnesses contrast "memorizing or repeating creeds" with acting to "do what Jesus said". Unitarian Universalists do not share a creed. Similar reservations about the use of creeds can be found in the Restoration Movement and its descendants, the Christian Church (Disciples of Christ), the Churches of Christ, and the Christian churches and churches of Christ. Restorationists profess "no creed but Christ". Christian creeds Several creeds have originated in Christianity. 1 Corinthians 15:3–7 includes an early creed about Jesus' death and resurrection which was probably received by Paul. The antiquity of the creed has been located by most biblical scholars to no more than five years after Jesus' death, probably originating from the Jerusalem apostolic community. The Old Roman Creed is an earlier and shorter version of the Apostles' Creed. It was based on the 2nd century Rules of Faith and the interrogatory declaration of faith for those receiving baptism, which by the 4th century was everywhere tripartite in structure, following Matthew 28:19. The Apostles' Creed is used in Western Christianity for both liturgical and catechetical purposes. The Nicene Creed reflects the concerns of the First Council of Nicaea in 325 which had as their chief purpose to establish what Christians believed. The Chalcedonian Creed was adopted at the Council of Chalcedon in 451 in Asia Minor. It defines that Christ is 'acknowledged in two natures', which 'come together into one person and hypostasis'. The Athanasian Creed (Quicunque vult) is a Christian statement of belief focusing on Trinitarian doctrine and Christology. It is the first creed in which the equality of the three persons of the Trinity is explicitly stated and differs from the Nicene and Apostles' Creeds in the inclusion